Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and can therefore guide more precise lighting control. Before the system operates, a large set of typical lighting images for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image with a wearable camera, then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks, and finally applies the single-LEEM-based or multiple-LEEMs-based control to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance. PMID:28208781
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity, or chaos. Image forensic science is used to detect such manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations, and very few methods try to capitalize on the way the image was taken by the camera. We propose a new method based on light and its shade, since light and shade are the fundamental input resources that carry all the information in the image. The proposed method measures the direction of the light source and uses this light-based technique to identify any intentional partial manipulation in the digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of its angle. The experimental results show the robustness of the methodology.
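A common basis for light-direction forensics of this kind is the Lambertian shading model, under which the light direction can be recovered by linear least squares. The sketch below illustrates that generic idea in a simplified 2-D setting; it is not necessarily the authors' exact algorithm, and all names and the synthetic normals are chosen for illustration.

```python
import numpy as np

def estimate_light_angle(normals, intensities):
    """Estimate the light-source direction from Lambertian shading.

    Under the Lambertian model I = n . L + ambient, the light vector L
    and the ambient term solve a linear least-squares problem given
    per-pixel surface normals and observed intensities.
    """
    A = np.hstack([normals, np.ones((len(normals), 1))])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    lx, ly = coeffs[0], coeffs[1]          # light direction components
    return np.degrees(np.arctan2(ly, lx))  # light-source angle in degrees

# Synthetic check: contour normals lit from 40 degrees with a small
# ambient term; the estimator should recover the angle exactly.
true_angle = np.radians(40.0)
L = np.array([np.cos(true_angle), np.sin(true_angle)])
thetas = np.linspace(0, np.pi, 20)
normals = np.column_stack([np.cos(thetas), np.sin(thetas)])
intensities = normals @ L + 0.1            # 0.1 = ambient term
angle = estimate_light_angle(normals, intensities)
```

In a forensic setting, inconsistent angles recovered from different objects in the same image flag a likely splice.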
Simultaneous acquisition of differing image types
Demos, Stavros G
2012-10-09
A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques that compensate for uneven illumination using adaptive signal processing. For text images, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm balances the light distribution while keeping high contrast in the image. For graph images, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and computational cost, making the technique applicable to real-time systems.
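The text-image branch (background estimation followed by a nonuniform per-pixel gain) can be sketched in a few lines. This is a minimal stand-in, assuming a moving-maximum background estimator for dark-text-on-paper lines; the paper's exact estimator and gain law are not reproduced here.

```python
import numpy as np

def balance_text_line(line, win=15):
    """Balance uneven illumination along one image line (sketch only).

    Text is darker than paper, so a moving maximum approximates the
    local background level; each pixel is then scaled by a nonuniform
    gain that maps that background to white.
    """
    line = line.astype(np.float64)
    pad = win // 2
    padded = np.pad(line, pad, mode="edge")
    background = np.array([padded[i:i + win].max() for i in range(line.size)])
    background = np.maximum(background, 1.0)   # guard against divide-by-zero
    gain = 255.0 / background                  # nonuniform per-pixel gain
    return np.clip(line * gain, 0, 255).astype(np.uint8)

# Synthetic text line: dark strokes under illumination falling left to right.
illum = np.linspace(250.0, 120.0, 64)
strokes = np.where(np.arange(64) % 16 < 2, 0.2, 1.0)
line = (illum * strokes).astype(np.uint8)
balanced = balance_text_line(line)
```

Processing one line at a time, as above, is what keeps the memory footprint small enough for real-time use.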
Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image
NASA Astrophysics Data System (ADS)
Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian
2014-07-01
To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile motion control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, an image quality evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first considered to distinguish the characteristics of the infrared images; then robust segmentation algorithms, including graph cut and flood fill, are developed for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, the typical light-leaking types, such as point defects, wedge defects, and surface defects, can be identified. By using the image quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
Code analysis of the fringe image plays a vital role in data acquisition for structured light systems, affecting the precision, computational speed, and reliability of the measurement process. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed in which a series of projective patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented, consisting of a white-light projector and a digital camera: the former projects sinusoidal fringe patterns onto the object, and the latter acquires the fringe patterns deformed by the object's shape. Binary images with distinct white and black strips can then be obtained, and the ability to resist image noise is greatly improved. The proposed method can be implemented easily and applied to profile measurement based on special binary codes in a wide field.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
As a new type of solid-state image sensor, the extreme-low-light CMOS sensor has been widely applied in the field of night vision. However, when the scene illumination changes drastically or is too strong, the sensor cannot clearly present both the high-light and low-light regions. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid is investigated. The overall gray value and contrast of the low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient; the remaining layers, which represent the edge features of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results against four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies, using three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while keeping high entropy, suggesting further application prospects for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
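The pyramid-fusion pipeline described above can be sketched compactly. The rules below are deliberate simplifications: the detail layers keep the larger-magnitude coefficient (a toy stand-in for the regional-energy rule) and the top layer is averaged (a stand-in for the regional-average-gradient rule); the down/upsampling also uses simple 2x2 mean/repeat rather than Gaussian filtering.

```python
import numpy as np

def down(img):  # 2x2 mean downsample
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):    # nearest-neighbour upsample back to the finer grid
    return img.repeat(2, axis=0).repeat(2, axis=1)

def lap_pyramid(img, levels=3):
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        nxt = down(cur)
        pyr.append(cur - up(nxt))   # band-pass (Laplacian) layer
        cur = nxt
    pyr.append(cur)                 # coarsest (top) layer
    return pyr

def fuse_exposures(long_exp, short_exp, levels=3):
    """Toy two-exposure Laplacian-pyramid fusion (see caveats above)."""
    pa, pb = lap_pyramid(long_exp, levels), lap_pyramid(short_exp, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]   # keep stronger detail
    fused.append(0.5 * (pa[-1] + pb[-1]))         # average the top layer
    out = fused[-1]
    for band in reversed(fused[:-1]):             # collapse the pyramid
        out = up(out) + band
    return np.clip(out, 0, 255)

rng = np.random.default_rng(1)
scene = rng.uniform(0, 255, (16, 16))
same = fuse_exposures(scene, scene)   # identical inputs reconstruct exactly
```

Because the pyramid decomposition is exactly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check for any fusion rule.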
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field includes both direction and location information, and light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model represented by the two-plane parameterization, proposed by Levoy, is adopted. Light field acquisition is based on microlens arrays, camera arrays, or masks. We process the light field data to synthesize light field images. Light field processing techniques include refocused rendering, synthetic aperture imaging, and microscopic imaging. Introducing light field imaging into the THz band makes 3D imaging more efficient than conventional THz 3D imaging technology. Its advantages over visible light field imaging include a large depth of field, a wide dynamic range, and true three-dimensional imaging. It has broad application prospects.
Non-uniform refractive index field measurement based on light field imaging technique
NASA Astrophysics Data System (ADS)
Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong
2018-02-01
In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect four-dimensional light field data, which are then decoded according to the light field imaging principle to obtain image sequences of the refractive index field at different acquisition angles. Subsequently, the PIV (Particle Image Velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field is calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors at multiple angles to collect data synchronously, the proposed method needs only a single light field camera and a single shot. Its effectiveness has been verified by an experiment quantitatively measuring the refractive index field above the flame of an alcohol lamp.
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost, mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs; compensation must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We therefore developed a simple method using triple-reference-point normalization and a fast Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. The technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
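Reference-point normalization of the kind described can be sketched as a linear fit through known reference patches. This is a minimal illustration, assuming three reference patches of known reflectance (black, gray, white) and omitting the FFT pre-processing step entirely; the function and variable names are illustrative.

```python
import numpy as np

def normalize_intensity(measured, ref_measured, ref_true):
    """Map raw green-channel intensities onto a lighting-invariant scale.

    A linear transform (gain, offset) is least-squares fitted so that the
    measured reference-patch intensities land on their known true values,
    then applied to the assay reading.
    """
    A = np.column_stack([ref_measured, np.ones_like(ref_measured)])
    gain, offset = np.linalg.lstsq(A, ref_true, rcond=None)[0]
    return gain * measured + offset

# Under dim lighting, every intensity is scaled by 0.6 and offset by 10;
# the three reference patches let us undo that distortion.
ref_true = np.array([0.0, 128.0, 255.0])      # black / gray / white patches
ref_measured = 0.6 * ref_true + 10.0
corrected = normalize_intensity(0.6 * 100.0 + 10.0, ref_measured, ref_true)
```

Because the references are imaged under the same lighting as the assay channels, the fitted transform cancels illumination changes common to the whole image.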
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method of applying a shaped-function signal for increasing the dynamic range of light emitting diode (LED)-multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. The auxiliary light at a higher sensitivity-camera area is introduced to increase the A/D quantization levels that are within the linear response zone of ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image. And the auxiliary light is modulated by the constant intensity signal, which is easy to acquire the images under the active light irradiation. The least square method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimum method opens up avenues for the hyperspectral imaging of biological tissue.
Simulation design of light field imaging based on ZEMAX
NASA Astrophysics Data System (ADS)
Zhou, Ke; Xiao, Xiangguo; Luan, Yadong; Zhou, Xiaobin
2017-02-01
Based on the principle of light field imaging, an objective lens and a microlens array were designed to gather light field features, and the corresponding ZEMAX models were built. All parameters were then optimized in ZEMAX, and a simulation image was produced. The results show that the positional relationship between the objective lens and the microlens array has a great effect on imaging, which provides guidance for developing a prototype.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and secret images vary and can be used as the main keys with original POM and the logistic map algorithm coefficient in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are conducted separately on the two categories to recover the diffuse color information. Experimental evaluation against existing methods, on our own light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.
VLC-based indoor location awareness using LED light and image sensors
NASA Astrophysics Data System (ADS)
Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
Recently, indoor LED lighting has been considered for building green infrastructure with energy savings while additionally providing LED-IT convergence services, such as visible light communication (VLC)-based location awareness and navigation. In a large, complex shopping mall, for example, location awareness for navigating to a destination is a very important issue. However, conventional GPS-based navigation does not work indoors, and alternative WLAN-based location services suffer from low positioning accuracy; in particular, it is difficult to estimate height exactly, and if the height error exceeds the height between floors it may cause serious problems. Conventional navigation is therefore inappropriate for indoor use. A possible alternative is a VLC-based location awareness scheme: because indoor LED infrastructure will certainly be installed for lighting, it can provide relatively high positioning accuracy when combined with VLC technology. In this paper, we present a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane; by using two or more image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed positioning system.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both, helping observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly applying target extraction seriously affects the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods, and infrared and low-light-level color fusion images are produced based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the target detection rate but also retain rich natural information about the scenes.
Smart Image Enhancement Process
NASA Technical Reports Server (NTRS)
Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)
2012-01-01
Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
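The selection cascade in this abstract is essentially control flow, and can be sketched as such. The `measure`, `enhance`, and `sharpen` callables below are hypothetical stand-ins for the patent's actual classifiers and operators, and the numeric toy model exists only so the sketch runs.

```python
def smart_enhance(img, measure, enhance, sharpen, good=0.5):
    """Selection cascade sketch. `measure` returns a tuple
    (turbid, contrast_lightness_score, sharp) for an image;
    `enhance` and `sharpen` are stand-ins for the patented operators."""
    turbid, score, _ = measure(img)
    if turbid:
        selected = enhance(img)                 # first enhanced image
    else:
        selected = img
        if score < good:                        # poor contrast/lightness
            selected = enhance(selected)        # second enhanced image
            _, score, _ = measure(selected)
            if score < good:                    # still poor: enhance again
                selected = enhance(selected)    # third enhanced image
    _, _, sharp = measure(selected)
    return selected if sharp else sharpen(selected)

# Toy model: an "image" is a quality number in [0, 1]; enhancement adds
# 0.3 of quality and sharpening adds a final 1.0.
measure = lambda x: (x < 0, min(x, 1.0), x > 0.9)
enhance = lambda x: x + 0.3
sharpen = lambda x: x + 1.0
result = smart_enhance(0.2, measure, enhance, sharpen)
```

A non-turbid image of quality 0.2 is enhanced once (0.2 to 0.5, now passing the score check), found non-sharp, and finally sharpened.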
Real-time intraoperative fluorescence imaging system using light-absorption correction.
Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis
2009-01-01
We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images under light intensity variation in tissues. The implementation is based on three cameras operating in parallel through a common lens, which allows concurrent collection of color, fluorescence, and light-attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light-attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
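The ratio approach amounts to a per-pixel division, as the minimal sketch below shows; the variable names and the synthetic attenuation map are illustrative, and the real system of course works on registered camera frames rather than toy arrays.

```python
import numpy as np

def correct_fluorescence(fluor_img, attenuation_img, eps=1e-6):
    """Ratio-based correction sketch: dividing the epi-illumination
    fluorescence image by the light-attenuation image acquired at the
    excitation wavelength compensates for spatially varying tissue
    absorption (eps guards against division by zero)."""
    return fluor_img / (attenuation_img + eps)

# Synthetic check: a known fluorophore map seen through varying attenuation.
true_map = np.array([[2.0, 2.0], [1.0, 3.0]])
attenuation = np.array([[1.0, 0.5], [0.8, 0.2]])
raw = true_map * attenuation          # what the fluorescence camera records
recovered = correct_fluorescence(raw, attenuation)
```

Because both images are collected through the same lens, the division cancels the common attenuation factor pixel for pixel.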
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. Many studies therefore perform CNN-based pedestrian detection with far-infrared (FIR) light cameras (i.e., thermal cameras) to address these difficulties. However, when solar radiation increases and the background temperature reaches the same level as body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between pedestrian and non-pedestrian features within the images. Researchers have tried to solve this issue by feeding both the visible light and FIR camera images into the CNN, but this takes longer to process and makes the system structure more complex, as the CNN must process both camera images. This research adaptively selects the more appropriate candidate of the two pedestrian images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
Optimisation approaches for concurrent transmitted light imaging during confocal microscopy.
Collings, David A
2015-01-01
The transmitted light detectors present on most modern confocal microscopes are an under-utilised tool for live imaging of plant cells. Because the light forming the image in this detector is not passed through a pinhole, out-of-focus light is not removed; it is this extended focus that allows the transmitted light image to provide cellular and organismal context for confocally generated fluorescence optical sections. More importantly, the transmitted light detector provides images that have spatial and temporal registration with the fluorescence images, unlike images taken with a separately mounted camera. Because plants often make transmitted light imaging difficult, owing to pigments and air pockets in leaves, this study documents several approaches to improving transmitted light images, beginning with ensuring that the light paths through the microscope are correctly aligned (Köhler illumination). Pigmented samples can be imaged in real colour using sequential scanning with red, green and blue lasers; the resulting transmitted light images can be optimised and merged in ImageJ to generate colour images that maintain registration with concurrent fluorescence images. For faster imaging of pigmented samples, transmitted light images can be formed with non-absorbed wavelengths. Transmitted light images of Arabidopsis leaves expressing GFP can be improved by concurrent illumination with green and blue light: if the blue light used for YFP excitation is blocked from the transmitted light detector with a cheap coloured glass filter, the non-absorbed green light forms an improved transmitted light image. Changes in sample colour can be quantified by transmitted light imaging; this has been documented in red onion epidermal cells, where changes in vacuolar pH triggered by the weak base methylamine result in measurable colour changes in the vacuolar anthocyanin. Many plant cells contain visible levels of pigment, and the transmitted light detector provides a useful tool for documenting and measuring changes in these pigments while maintaining registration with confocal imaging.
NASA Astrophysics Data System (ADS)
Guan, Jinge; Ren, Wei; Cheng, Yaoyu
2018-04-01
We demonstrate an efficient polarization-difference imaging system for turbid conditions using the Stokes vector of light. The interaction of scattered light with the polarizer is analyzed by the Stokes-Mueller formalism. An interpolation method is proposed to theoretically replace the mechanical rotation of the analyzer's polarization axis, and its performance is verified experimentally at different turbidity levels. We show that, compared with direct imaging, the Stokes vector-based imaging method can effectively reduce the effect of light scattering and enhance the image contrast.
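The core idea, synthesizing the analyzer response at any angle from a few fixed measurements, follows from the standard Stokes formalism, where the intensity behind an ideal linear analyzer at angle θ is I(θ) = ½(S0 + S1·cos 2θ + S2·sin 2θ). The sketch below uses the classic four-angle measurement scheme as a stand-in for the paper's specific interpolation method.

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    """Linear Stokes parameters from analyzer images at 0/45/90/135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 deg vs -45 deg
    return s0, s1, s2

def polarization_difference(s1, s2, theta):
    """Synthesize I(theta) - I(theta + 90 deg) for any analyzer angle
    without mechanically rotating the polarizer."""
    return s1 * np.cos(2.0 * theta) + s2 * np.sin(2.0 * theta)

# Synthetic pixel: generate the four analyzer images from known Stokes
# values via I(theta) = 0.5 * (s0 + s1*cos(2t) + s2*sin(2t)).
s0t, s1t, s2t = 1.0, 0.3, 0.1
angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
i0, i45, i90, i135 = [0.5 * (s0t + s1t * np.cos(2 * a) + s2t * np.sin(2 * a))
                      for a in angles]
s0, s1, s2 = stokes_from_four(i0, i45, i90, i135)
```

Scanning `theta` over the synthesized difference images then finds the orientation that best suppresses the scattered background, with no moving parts.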
2010-09-01
[Truncated record] Discusses image background components: 'L1' from external astronomical scene sources such as zodiacal light or diffuse nebulae, and 'L2' from stray light; both components change with the telescope pointing.
PlenoPatch: Patch-Based Plenoptic Image Manipulation.
Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min
2017-05-01
Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.
NASA Astrophysics Data System (ADS)
Rasmi, Chelur K.; Padmanabhan, Sreedevi; Shirlekar, Kalyanee; Rajan, Kanhirodan; Manjithaya, Ravi; Singh, Varsha; Mondal, Partha Pratim
2017-12-01
We propose and demonstrate a light-sheet-based 3D interrogation system on a microfluidic platform for screening biological specimens during flow. To achieve this, a diffraction-limited light sheet (with a large field-of-view) is employed to optically section the specimens flowing through the microfluidic channel. This necessitates optimization of the parameters for the illumination sub-system (illumination intensity, light-sheet width, and thickness), the microfluidic specimen platform (channel width and flow rate), and the detection sub-system (camera exposure time and frame rate). Once optimized, these parameters facilitate cross-sectional imaging and 3D reconstruction of biological specimens. The proposed integrated light-sheet imaging and flow-based enquiry (iLIFE) imaging technique enables single-shot sectional imaging of specimens of varying dimensions, from a single cell (HeLa cell) to a multicellular organism (C. elegans). 3D reconstruction of the entire C. elegans is achieved in real time, with an exposure time of a few hundred microseconds. A maximum likelihood technique is developed and optimized for the iLIFE imaging system. We observed intracellular resolution for mitochondria-labeled HeLa cells, which demonstrates the dynamic resolution of the iLIFE system. The proposed technique is a step towards achieving flow-based 3D imaging. We expect potential applications in diverse fields such as structural biology and biophysics.
High-performance lighting evaluated by photobiological parameters.
Rebec, Katja Malovrh; Gunde, Marta Klanjšek
2014-08-10
The human reception of light includes image-forming and non-image-forming effects, which are triggered by the spectral distribution and intensity of light. Ideal lighting is similar to daylight and can be evaluated by spectral or chromaticity match. LED-based and CFL-based lighting, proposed according to spectral and chromaticity match respectively, were analyzed here. The photobiological effects were expressed by the effectiveness for blue light hazard, cirtopic activity, and photopic vision. A good spectral match provides light whose effects are more similar to daylight than those obtained by a chromaticity match. The new parameters are useful for better evaluation of the complex human responses caused by lighting.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work first extracts the sub-aperture images from the light field images and uses the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure-from-motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be implemented with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming, laborious acquisition required for 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
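The SIFT registration step is normally followed by nearest-neighbour matching filtered with Lowe's ratio test before SFM. A minimal numpy sketch of that filtering (the descriptors here are tiny synthetic stand-ins for real 128-d SIFT vectors, and the function name is ours):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    keep a match only if the best distance is clearly smaller than the
    second-best. Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In practice the descriptors would come from a SIFT extractor (e.g. OpenCV's), and the surviving matches feed the SFM pose estimation.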
SPIM-fluid: open source light-sheet based platform for high-throughput imaging
Gualda, Emilio J.; Pereira, Hugo; Vale, Tiago; Estrada, Marta Falcão; Brito, Catarina; Moreno, Nuno
2015-01-01
Light sheet fluorescence microscopy has recently emerged as the technique of choice for obtaining high-quality 3D images of whole organisms/embryos with low photodamage and fast acquisition rates. Here we present an open source unified implementation based on Arduino and Micromanager, capable of operating light sheet microscopes for automated 3D high-throughput imaging of three-dimensional cell cultures and model organisms such as zebrafish, oriented to massive drug screening. PMID:26601007
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
In this paper, a new method is proposed for extracting the dots from the reflected light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a dedicated pretreatment is applied to the reflected light image to obtain the contour edges of the array cells. Second, using the geometric relationship between the target object's initial external rectangle and its minimum bounding rectangle (MBR), a new MBR-based tilt correction algorithm is proposed to adjust the image. Third, based on the specific requirements of reflected light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment effectively avoids the influence of a complex background and completes the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method can be used for the fast, accurate and automatic extraction of dots from PSi microarray reflected light images.
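The paper's tilt correction is built on the minimum bounding rectangle; as an illustrative stand-in, the tilt of a binary array-cell mask can also be estimated from its second-order image moments (the principal-axis angle). A sketch with a hypothetical function name:

```python
import numpy as np

def tilt_angle(binary):
    """Estimate the tilt (degrees) of a binary mask from its second-order
    central moments - the principal-axis angle, a cheap stand-in for the
    minimum-bounding-rectangle orientation used in the paper."""
    ys, xs = np.nonzero(binary)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = (x * x).mean()
    mu02 = (y * y).mean()
    mu11 = (x * y).mean()
    return 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
```

Rotating the image by the negative of this angle then levels the array grid before segmentation.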
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
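Adaptive weighting amounts to replacing the plain average of the two self-similarity predictors with an integer-weighted one, in the style of HEVC's weighted prediction arithmetic. A sketch (the weight values and shift are illustrative, not the paper's chosen sets):

```python
import numpy as np

def weighted_biprediction(p0, p1, w0, w1, shift=6):
    """HEVC-style weighted average of two predictor blocks using integer
    arithmetic: (w0*P0 + w1*P1 + round) >> shift, clipped to 8 bit.
    With w0 = w1 = 2**(shift-1) this reduces to the plain average."""
    rounding = 1 << (shift - 1)
    pred = (w0 * p0.astype(np.int32) + w1 * p1.astype(np.int32) + rounding) >> shift
    return np.clip(pred, 0, 255).astype(np.uint8)
```

The encoder would evaluate candidate (w0, w1) pairs in rate-distortion terms and signal the chosen pair, which is where the reported bit savings come from.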
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality (AR) technology is becoming a research focus, and the AR effect of light field imaging makes the study of light field cameras attractive. The micro-array structure has been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly comprising micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years and analyzes them based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)", and analyzes it based on information optics. This paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.
Shapiro, Jeffrey H.; Venkatraman, Dheera; Wong, Franco N. C.
2013-01-01
Ragy and Adesso argue that quantum discord is involved in the formation of a pseudothermal ghost image. We show that quantum discord plays no role in spatial light modulator ghost imaging, i.e., ghost-image formation based on structured illumination realized with laser light that has undergone spatial light modulation by the output from a pseudorandom number generator. Our analysis thus casts doubt on the degree to which quantum discord is necessary for ghost imaging. PMID:23673426
Distance measurement based on light field geometry and ray tracing.
Chen, Yanqin; Jin, Xin; Dai, Qionghai
2017-01-09
In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretic sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to object space, the relationships between objects' imaging diameters and the corresponding distances of their object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted with a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.
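The geometric core of such models can be illustrated with a plain thin-lens sketch: the thin-lens equation locates the in-focus image plane, and similar triangles between the aperture and that plane give the spot diameter on a sensor placed elsewhere. This is a simplification of the paper's two sub-models, not their derivation; function names and the single-thin-lens assumption are ours:

```python
def image_distance(f, d_obj):
    """Thin-lens equation: distance behind the lens at which an object
    plane at d_obj comes into focus (1/f = 1/d_obj + 1/d_img)."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def blur_diameter(f, aperture, d_obj, d_sensor):
    """Diameter of the imaging spot on a sensor at d_sensor, by similar
    triangles between the aperture and the in-focus image plane."""
    d_img = image_distance(f, d_obj)
    return aperture * abs(d_sensor - d_img) / d_img
```

Inverting blur_diameter for d_obj at a known sensor position is the kind of relationship the paper exploits to turn imaging diameters into object-plane distances.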
Appearance-based face recognition and light-fields.
Gross, Ralph; Matthews, Iain; Baker, Simon
2004-04-01
Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
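The eigen light-field features are obtained much as eigenfaces are: vectorized images are projected onto a PCA basis, and the projection coefficients replace raw pixel intensities as the recognition features. A minimal numpy sketch of that projection step (the paper's handling of missing pixels across poses is omitted; function names are ours):

```python
import numpy as np

def eigen_basis(samples, k):
    """PCA basis of vectorized images via SVD; rows of `samples` are
    training vectors. Returns the mean and the top-k components."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def project(x, mean, basis):
    """Coefficients of x in the eigen basis - the features used for
    recognition in place of raw pixel intensities."""
    return basis @ (x - mean)
```

Recognition then compares the coefficient vectors of a probe image against those of the gallery, e.g. by nearest neighbour.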
Light-field and holographic three-dimensional displays [Invited].
Yamaguchi, Masahiro
2016-12-01
A perfect three-dimensional (3D) display that satisfies all depth cues in human vision is possible if a light field can be reproduced exactly as it appeared when it emerged from a real object. The light field can be generated based on either light ray or wavefront reconstruction, with the latter known as holography. This paper first provides an overview of the advances of ray-based and wavefront-based 3D display technologies, including integral photography and holography, and the integration of those technologies with digital information systems. Hardcopy displays have already been used in some applications, whereas the electronic display of a light field is under active investigation. Next, a fundamental question in this technology field is addressed: what is the difference between ray-based and wavefront-based methods for light-field 3D displays? In considering this question, it is of particular interest to look at the technology of holographic stereograms. The phase information in holography contributes to the resolution of a reconstructed image, especially for deep 3D images. Moreover, issues facing the electronic display system of light fields are discussed, including the resolution of the spatial light modulator, the computational techniques of holography, and the speckle in holographic images.
Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.
Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2017-09-01
Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
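The correlation underlying computational GI — correlating single-pixel (bucket) intensities with the known illumination patterns — can be sketched numerically. Plain random patterns stand in here for the metasurface-generated fields, and the helicity switching is not modelled:

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Correlation reconstruction for computational ghost imaging:
    G = <(B - <B>) * P>, averaged over the illumination patterns."""
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns, axes=1) / len(patterns)

rng = np.random.default_rng(0)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0                         # simple transmissive object
patterns = rng.random((4000, 8, 8))         # known random illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))  # single-pixel measurements
g = ghost_image(patterns, bucket)           # correlation image of the object
```

Pixels inside the object correlate positively with the bucket signal while background pixels average toward zero, which is why the object emerges from the correlation.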
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing the light field. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
Ultra-high resolution of radiocesium distribution detection based on Cherenkov light imaging
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Ogata, Yoshimune; Kawachi, Naoki; Suzui, Nobuo; Yin, Yong-Gen; Fujimaki, Shu
2015-03-01
After the nuclear disaster in Fukushima, radiocesium contamination became a serious scientific concern, and research into its effects on plants has increased. In such plant studies, high resolution images of radiocesium are required without contacting the subjects. Cherenkov light imaging of beta radionuclides has inherently high resolution and is promising for plant research. Since 137Cs and 134Cs emit beta particles, Cherenkov light imaging should be useful for imaging radiocesium distribution. Consequently, we developed and tested a Cherenkov light imaging system. We used a high sensitivity cooled charge coupled device (CCD) camera (Hamamatsu Photonics, ORCA2-ER) for imaging Cherenkov light from 137Cs. A bright lens (Xenon, F-number: 0.95, lens diameter: 25 mm) was mounted on the camera and placed in a black box. With a 100-μm 137Cs point source, we obtained 220-μm spatial resolution in the Cherenkov light image. A 1-mm diameter, 320-kBq 137Cs point source was distinguished within 2 s. We successfully obtained Cherenkov light images of a plant whose root was dipped in a 137Cs solution, of radiocesium-containing samples, and of line and character phantoms with our imaging system. Cherenkov light imaging is promising for high resolution imaging of radiocesium distribution without contacting the subject.
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case; they work efficiently under low light levels; and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using few, simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
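The pattern-projection-plus-photodiode scheme can be sketched with orthogonal Hadamard patterns, a common choice for single-pixel imaging. This is an idealization: the DLP's dithered color patterns and the compressive solver of the paper are not modelled, and reconstruction here uses the full orthogonal basis:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Single-pixel acquisition: each pattern row modulates the scene on the DMD
# and the photodiode records one number per pattern.
n = 16                                  # 4x4 scene, flattened
scene = np.arange(n, dtype=float)       # stand-in for the vectorized image
patterns = hadamard(n)
measurements = patterns @ scene         # one photocurrent value per pattern
recovered = patterns.T @ measurements / n   # H is orthogonal: H.T @ H = n*I
```

Compressive sampling replaces the full inversion with a sparse solver over a subset of the rows, which is what shortens the projection time.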
Stray light calibration of the Dawn Framing Camera
NASA Astrophysics Data System (ADS)
Kovacs, Gabor; Sierks, Holger; Nathues, Andreas; Richards, Michael; Gutierrez-Marques, Pablo
2013-10-01
Sensitive imaging systems with high dynamic range on board spacecraft are susceptible to ghost and stray-light effects. During the design phase, the Dawn Framing Camera was laid out and optimized to minimize these unwanted, parasitic effects. However, the low-distortion requirement on the optical design and the use of a front-lit focal plane array introduced an additional stray light component. This paper presents the ground-based and in-flight procedures characterizing the stray-light artifacts. The in-flight test used the Sun as the stray light source at different angles of incidence: the spacecraft was commanded to point at predefined solar elongations, and long-exposure images were recorded. The PSNIT function was calculated from the known illumination and the ground-based calibration information. In the ground-based calibration, several extended and point sources were used with long exposure times in dedicated imaging setups. The tests revealed that the major contribution to the stray light comes from ghost reflections between the focal plane array and the band-pass interference filters. Various laboratory experiments and computer modeling simulations were carried out to quantify this effect, including analysis of the diffractive reflection pattern generated by the imaging sensor. Accurate characterization of the detector reflection pattern is the key to successfully predicting the intensity distribution of the ghost image. Based on these results and the properties of the optical system, a novel correction method is applied in the image processing pipeline. The effect of this correction procedure is also demonstrated with the first images of asteroid Vesta.
A method to generate soft shadows using a layered depth image and warping.
Im, Yeon-Ho; Han, Chang-Young; Kim, Lee-Sup
2005-01-01
We present an image-based method for propagating area light illumination through a Layered Depth Image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. The light illumination of all pixels in the LDI is then determined for all the other sample points via warping, an image-based rendering technique that approximates ray tracing in our method. We use an image-warping equation and McMillan's warp-ordering algorithm to find the intersections between rays and polygons and the order of those intersections. Experiments on opaque and nonrefractive transparent objects are presented. The results indicate that our approach generates soft shadows quickly and effectively. Advantages and disadvantages of the proposed method are also discussed.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
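The core of the photogrammetric reconstruction — projecting a 2D light-sheet image pixel into 3D using the known camera and sheet geometry — can be sketched as a ray-plane intersection under a pinhole camera model. A hedged sketch (variable and function names are ours, not from the NASA process):

```python
import numpy as np

def pixel_to_sheet(u, v, K, R, t, plane_n, plane_d):
    """Back-project pixel (u, v) through a pinhole camera (intrinsics K,
    pose R, t mapping world -> camera) and intersect the resulting ray
    with the light-sheet plane n . X = d in world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam              # ray direction in world frame
    origin = -R.T @ t                      # camera centre in world frame
    s = (plane_d - plane_n @ origin) / (plane_n @ ray_world)
    return origin + s * ray_world
```

Applying this to every pixel of a light-sheet video frame yields the 3D point set that is then rendered alongside the CFD surface geometry.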
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture-based image enhancement method that uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce its washed-out appearance. Experimental results show that the proposed method produces a better enhanced image than traditional methods. Moreover, the enhanced image is free from several side effects, such as a washed-out appearance, information loss, and gradation artifacts.
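The histogram-partitioning step can be illustrated by locating the crossing point of two fitted Gaussian components. A hedged sketch (component parameters are invented for illustration; the paper's PSO-driven fitting and transformation function are not reproduced):

```python
import numpy as np

def gaussian(x, w, mu, sigma):
    # Weighted Gaussian density of one mixture component.
    return w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def intersection_level(c1, c2, lo=0, hi=255):
    """Gray level between the two component means where the weighted
    densities cross; used as a histogram partition boundary."""
    x = np.linspace(lo, hi, 256 * 16)
    diff = gaussian(x, *c1) - gaussian(x, *c2)
    mask = (x > min(c1[1], c2[1])) & (x < max(c1[1], c2[1]))
    idx = np.argmin(np.abs(diff[mask]))
    return x[mask][idx]

# Two equal-weight, equal-width components: the crossing lies midway between the means.
cut = intersection_level((0.5, 80.0, 10.0), (0.5, 180.0, 10.0))
```

Each interval between successive crossings would then be stretched to its output range by the PSO-tuned transformation.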
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing the surrounding light field. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototype camera was constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the imaging sensors. The imaging micro-system, in conjunction with the electro-optical microstructure, can perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using a liquid-crystal microlens array, polarization-independent light-field images with high image quality can be obtained in any selected polarization state. We experimentally demonstrate its characteristics, including a relatively wide operating range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the obvious features of the TN-LCMLA, such as very low power consumption, the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure has the potential to directly observe a 3D object in typical scattering media.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be run prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays based on the established light-field camera model are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light-field imaging of static objects and flames is simulated using the calibrated parameters of a Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and depth-of-field shooting range.
Theory and analysis of a large field polarization imaging system with obliquely incident light.
Lu, Xiaotian; Jin, Weiqi; Li, Li; Wang, Xia; Qiu, Su; Liu, Jing
2018-02-05
Polarization imaging technology provides information about not only the irradiance of a target but also its degree of polarization and angle of polarization, which indicates extensive application potential. However, polarization imaging theory is based on paraxial optics. When a beam of obliquely incident light passes through an analyser, the direction of propagation is not perpendicular to the analyser surface, and the applicability of traditional paraxial polarization imaging theory is challenged. This paper investigates a theoretical model of a polarization imaging system with obliquely incident light and establishes a polarization imaging transmission model for a large field of obliquely incident light. In an imaging experiment with an integrating-sphere light source and a rotatable polarizer, the transmission model is verified and analysed for two cases: incident natural light and incident linearly polarized light. Although the theoretical model is consistent with the experimental results, it differs distinctly from the traditional paraxial approximation model. The results prove the accuracy and necessity of the theoretical model and its guiding significance for theoretical and systematic research on large-field polarization imaging.
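The paraxial baseline that the paper tests against is the classical Malus law for an ideal analyser, valid when light strikes the analyser at normal incidence. A small illustrative sketch of that baseline (not the paper's oblique-incidence model):

```python
import numpy as np

def malus_intensity(i0, theta_rad):
    # Paraxial Malus law: intensity of linearly polarized light of
    # intensity i0 transmitted by an ideal analyser at angle theta.
    return i0 * np.cos(theta_rad) ** 2

# Light aligned with the analyser passes fully; at 90 degrees it is blocked.
full = malus_intensity(1.0, 0.0)
blocked = malus_intensity(1.0, np.pi / 2)
```

The paper's contribution is precisely that this cosine-squared relation needs correction once the propagation direction tilts away from the analyser normal.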
Setting Up a Simple Light Sheet Microscope for In Toto Imaging of C. elegans Development
Bertrand, Vincent; Lenne, Pierre-François
2014-01-01
Fast, low-phototoxicity imaging techniques are a prerequisite for studying the development of organisms in toto. Light-sheet-based microscopy reduces photobleaching and phototoxic effects compared to confocal microscopy, while providing 3D images with subcellular resolution. Here we present the setup of a light-sheet-based microscope, composed of an upright microscope and a small set of opto-mechanical elements for generating the light sheet. The protocol describes how to build and align the microscope and how to characterize the light sheet. In addition, it details how to implement the method for in toto imaging of C. elegans embryos using a simple observation chamber. The method allows the capture of 3D two-color time-lapse movies over a few hours of development. This should ease the tracking of cell shapes, cell divisions, and tagged proteins over long periods of time. PMID:24836407
Active polarization imaging system based on optical heterodyne balanced receiver
NASA Astrophysics Data System (ADS)
Xu, Qian; Sun, Jianfeng; Lu, Zhiyong; Zhou, Yu; Luan, Zhu; Hou, Peipei; Liu, Liren
2017-08-01
Active polarization imaging technology has recently become a hot research field worldwide, with great potential application value in military and civil areas. By introducing an active light source, the Mueller matrix of the target can be calculated from the incident light and the emitted or reflected light. Compared with conventional direct detection, optical heterodyne detection offers higher receiver sensitivity and can recover the full amplitude, frequency, and phase information of the signal light. In this paper, an active polarization imaging system is designed. Based on an optical heterodyne balanced receiver, the system acquires the horizontal and vertical polarization components of the reflected optical field simultaneously, which contain the polarization characteristics of the target. In addition, the signal-to-noise ratio and imaging distance can be greatly improved.
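Once the horizontal and vertical polarization images are acquired, a common derived quantity is the per-pixel degree of linear polarization. An illustrative sketch (the DoLP formula is standard; treating it as this system's processing step is an assumption):

```python
import numpy as np

def degree_of_linear_polarization(i_h, i_v):
    """Per-pixel degree of linear polarization from horizontal- and
    vertical-polarization intensity images."""
    total = i_h + i_v
    # Guard against division by zero in dark pixels.
    return np.where(total > 0, (i_h - i_v) / np.where(total > 0, total, 1), 0.0)

# One fully horizontally polarized pixel and one unpolarized pixel.
ih = np.array([[1.0, 0.5]])
iv = np.array([[0.0, 0.5]])
dolp = degree_of_linear_polarization(ih, iv)
```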
Visible Light Image-Based Method for Sugar Content Classification of Citrus
Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki
2016-01-01
Visible light imaging of citrus fruit from Mie Prefecture, Japan, was performed to determine whether an algorithm could be developed to predict sugar content. This nondestructive classification showed that accurate segmentation of different images can be achieved by a correlation analysis based on a threshold value of the coefficient of determination. There is an obvious correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an additive algorithm, and the sugar content was predicted using the dummy variable method. The results showed that small but orange citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit, and to classify fruit by sugar content, using light in the visible spectrum without the need for an additional light source. PMID:26811935
NASA Technical Reports Server (NTRS)
1997-01-01
Based on a Small Business Innovation Research contract from the Jet Propulsion Laboratory, TracePro is state-of-the-art interactive software created by Lambda Research Corporation to detect stray light in optical systems. An image can be ruined by incidental light in an optical system. To maintain image excellence from an optical system, stray light must be detected and eliminated. TracePro accounts for absorption, specular reflection and refraction, scattering and aperture diffraction of light. Output from the software consists of spatial irradiance plots and angular radiance plots. Results can be viewed as contour maps or as ray histories in tabular form. TracePro is adept at modeling solids such as lenses, baffles, light pipes, integrating spheres, non-imaging concentrators, and complete illumination systems. The firm's customer base includes Lockheed Martin, Samsung Electronics and other manufacturing, optical, aerospace, and educational companies worldwide.
NASA Astrophysics Data System (ADS)
Park, Dubok; Han, David K.; Ko, Hanseok
2017-05-01
Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
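The final step, removing haze from the two estimated parameters, inverts the standard haze formation model I = J·t + A·(1 − t). A minimal sketch of that recovery step (the clamping constant `t_min` is an assumed safeguard against noise amplification, not a value taken from the paper):

```python
import numpy as np

def remove_haze(image, atmospheric_light, transmission, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) once the atmospheric
    light A and transmission t have been estimated."""
    t = np.maximum(transmission, t_min)
    return (image - atmospheric_light) / t + atmospheric_light

# A hazy pixel synthesized from scene radiance J = 0.2 with A = 0.9, t = 0.5
# is recovered exactly by the inversion.
hazy = 0.2 * 0.5 + 0.9 * (1 - 0.5)   # = 0.55
clear = remove_haze(np.array([hazy]), 0.9, np.array([0.5]))
```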
Image Mosaic Method Based on SIFT Features of Line Segment
Zhu, Jun; Ren, Mingwu
2014-01-01
This paper proposes a novel image mosaic method based on SIFT (Scale-Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar differences between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments based on four pairs of images show that our method is strongly robust to changes in resolution, lighting, rotation, and scaling. PMID:24511326
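The RANSAC stage above can be sketched with a deliberately simplified motion model (pure translation instead of the full mosaic transform; the data are synthetic):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, seed=0):
    """RANSAC with a pure-translation model: repeatedly hypothesize the
    offset implied by one match and keep the offset with most inliers."""
    rng = np.random.default_rng(seed)
    best_offset, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        offset = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + offset), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_offset, best_inliers = offset, inliers
    return best_offset, best_inliers

# Four correct matches shifted by (10, 5) plus one gross mismatch.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 2]], dtype=float)
dst = src + np.array([10.0, 5.0])
dst[4] = [50.0, 50.0]                 # the wrong pair
offset, inliers = ransac_translation(src, dst)
```

The consensus offset is recovered and the mismatched pair is flagged as an outlier, which is the role RANSAC plays before the final mosaic is stitched.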
NASA Astrophysics Data System (ADS)
Yu, Haiyan; Fan, Jiulun
2017-12-01
Local thresholding methods for segmenting unevenly lighted images have the limitations that they are very sensitive to noise and that their performance relies largely on the choice of the initial window size. This paper proposes a novel algorithm for segmenting unevenly lighted images with strong noise, based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs that divide the image into many local sub-regions in different directions. Our algorithm computes a relative characteristic for each pixel in its corresponding sub-region based on a fuzzy membership function and uses it to replace the pixel's absolute characteristic (its gray level), reducing the influence of uneven light on segmentation. At the same time, non-local adaptive spatial constraints on the pixels are introduced to prevent noise from interfering with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is taken into account to avoid labeling false peaks and troughs. Finally, a global method based on intuitionistic fuzzy entropy is applied to the wave-transformed image to obtain the segmentation result. Experiments on several test images show that the proposed method is excellent at decreasing the influence of uneven illumination and noise, and behaves more robustly than several classical global and local thresholding methods.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make videos obtained by different image sensors complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but they capture image detail poorly, and their output does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images well suited to human vision, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational cost, which occupy substantial memory resources and demand high clock rates; such fusion is usually implemented in software (e.g., C or C++) and rarely on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and gray-level weighted-average fusion is implemented on the hardware platform. The fused image effectively increases the amount of information acquired from the scene.
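The gray-level weighted-average fusion implemented on the FPGA reduces, per pixel, to a convex combination of the two registered frames. A software sketch for clarity (the weight value is an assumption; the paper's fixed-point hardware mapping is not shown):

```python
import numpy as np

def fuse_gray_weighted(ir, vis, w_ir=0.5):
    """Gray-level weighted-average fusion of registered infrared and
    visible frames; the two weights sum to one."""
    return w_ir * ir + (1.0 - w_ir) * vis

# Toy 1x2 frames: the fused pixel lies between the IR and visible values.
ir = np.array([[200.0, 10.0]])
vis = np.array([[100.0, 50.0]])
fused = fuse_gray_weighted(ir, vis, w_ir=0.6)
```

Its simplicity is exactly what makes it attractive for a hardware platform: one multiply-accumulate per pixel, no frame buffering beyond registration.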
Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung
2017-10-02
Recently, even low-end mobile phones have been equipped with a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image-sensor-based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
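The rolling-shutter principle behind image-sensor VLC can be sketched as thresholding per-row intensities: each sensor row is exposed at a slightly different time, so a modulated LED leaves bright and dark stripes across the frame. A toy illustration (a simple global threshold, not the authors' synchronization and demodulation scheme):

```python
import numpy as np

def demodulate_rolling_shutter(frame, threshold=None):
    """Each image row integrates a different time slice of the LED, so
    the sequence of row means becomes the received bit stream."""
    row_means = frame.mean(axis=1)
    if threshold is None:
        threshold = row_means.mean()   # naive global threshold
    return (row_means > threshold).astype(int)

# Bright/dark/bright stripes encode the bits 1, 0, 1.
frame = np.array([[250, 240], [5, 10], [255, 245]], dtype=float)
bits = demodulate_rolling_shutter(frame)
```

The paper's contribution is a more efficient thresholding and synchronization scheme than this naive global-mean baseline.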
Kao, Ya-Ting; Zhu, Xinxin; Xu, Fang; Min, Wei
2012-08-01
Probing biological structures and functions deep inside live organisms with light is highly desirable. Among the current optical imaging modalities, multiphoton fluorescence microscopy exhibits the best contrast for imaging scattering samples by employing a spatially confined nonlinear excitation. However, as the incident laser power drops exponentially with imaging depth into the sample due to the scattering loss, the out-of-focus background eventually overwhelms the in-focus signal, which defines a fundamental imaging-depth limit. Herein we significantly improve the image contrast for deep scattering samples by harnessing reversibly switchable fluorescent proteins (RSFPs), which can be cycled between bright and dark states upon light illumination. Two distinct techniques, multiphoton deactivation and imaging (MPDI) and multiphoton activation and imaging (MPAI), are demonstrated on tissue phantoms labeled with Dronpa protein. Such a focal switch approach can generate pseudo background-free images. Conceptually different from wave-based approaches that try to reduce light scattering in turbid samples, our work represents a molecule-based strategy focused on imaging probes.
Kao, Ya-Ting; Zhu, Xinxin; Xu, Fang; Min, Wei
2012-01-01
Probing biological structures and functions deep inside live organisms with light is highly desirable. Among the current optical imaging modalities, multiphoton fluorescence microscopy exhibits the best contrast for imaging scattering samples by employing a spatially confined nonlinear excitation. However, as the incident laser power drops exponentially with imaging depth into the sample due to the scattering loss, the out-of-focus background eventually overwhelms the in-focus signal, which defines a fundamental imaging-depth limit. Herein we significantly improve the image contrast for deep scattering samples by harnessing reversibly switchable fluorescent proteins (RSFPs), which can be cycled between bright and dark states upon light illumination. Two distinct techniques, multiphoton deactivation and imaging (MPDI) and multiphoton activation and imaging (MPAI), are demonstrated on tissue phantoms labeled with Dronpa protein. Such a focal switch approach can generate pseudo background-free images. Conceptually different from wave-based approaches that try to reduce light scattering in turbid samples, our work represents a molecule-based strategy focused on imaging probes. PMID:22876358
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo-color imaging scheme, named pseudo-color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo-color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, here the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam can be obtained simultaneously. This scheme can obtain a more colorful, higher-quality image than conventional pseudo-color coding techniques. More importantly, a significant advantage over conventional pseudo-color coding imaging is that images with different colors can be obtained without changing the light source or the spatial filter.
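Ghost imaging reconstructs an object from correlations between the modulation patterns and a bucket (single-pixel) signal. A single-wavelength toy sketch of that correlation step (the paper's multiwavelength coding and pseudo-thermal source statistics are not modeled):

```python
import numpy as np

def ghost_image(patterns, bucket, shape):
    """Correlation reconstruction: G = <(B - <B>) * P> over all
    pattern realizations, one value per object pixel."""
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns, axes=1).reshape(shape) / len(bucket)

# Random binary patterns; the bucket detector sums the light passed by the object.
rng = np.random.default_rng(1)
obj = np.array([1.0, 0.0, 1.0, 0.0])       # a transmissive 4-pixel "object"
patterns = rng.integers(0, 2, size=(4000, 4)).astype(float)
bucket = patterns @ obj
img = ghost_image(patterns, bucket, (4,))
```

After enough realizations the correlation is bright where the object transmits and near zero elsewhere, recovering its shape.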
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
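The robustness of ratio features to albedo and lighting can be seen in a toy example: a multiplicative factor common to both images cancels in the pixelwise ratio. A sketch under that assumption (the paper's exact feature definition may differ; `eps` is an invented stabilizer):

```python
import numpy as np

def image_ratio(expr, neutral, eps=1e-6):
    """Pixelwise ratio of an expression image to a neutral-face image;
    a shared albedo/illumination factor cancels in the ratio."""
    return (expr + eps) / (neutral + eps)

neutral = np.array([[100.0, 50.0]])
expr = np.array([[150.0, 50.0]])       # skin deformation brightens one pixel
# Scaling both images by the same lighting factor leaves the ratio unchanged.
r1 = image_ratio(expr, neutral)
r2 = image_ratio(0.5 * expr, 0.5 * neutral)
```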
NASA Astrophysics Data System (ADS)
Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.
2018-05-01
High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Hintz, S R; Cheong, W F; van Houten, J P; Stevenson, D K; Benaron, D A
1999-01-01
Medical optical imaging (MOI) uses light emitted into opaque tissues to determine the interior structure. Previous reports detailed a portable time-of-flight and absorbance system emitting pulses of near infrared light into tissues and measuring the emerging light. Using this system, optical images of phantoms, whole rats, and pathologic neonatal brain specimens have been tomographically reconstructed. We have now modified the existing instrumentation into a clinically relevant headband-based system to be used for optical imaging of structure in the neonatal brain at the bedside. Eight medical optical imaging studies in the neonatal intensive care unit were performed in a blinded clinical comparison of optical images with ultrasound, computed tomography, and magnetic resonance imaging. Optical images were interpreted as correct in six of eight cases, with one error attributed to the age of the clot, and one small clot not seen. In addition, one disagreement with ultrasound, not reported as an error, was found to be the result of a mislabeled ultrasound report rather than because of an inaccurate optical scan. Optical scan correlated well with computed tomography and magnetic resonance imaging findings in one patient. We conclude that light-based imaging using a portable time-of-flight system is feasible and represents an important new noninvasive diagnostic technique, with potential for continuous monitoring of critically ill neonates at risk for intraventricular hemorrhage or stroke. Further studies are now underway to further investigate the functional imaging capabilities of this new diagnostic tool.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-01-01
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690
Microscopic Imaging and Spectroscopy with Scattered Light
Boustany, Nada N.; Boppart, Stephen A.; Backman, Vadim
2012-01-01
Optical contrast based on elastic scattering interactions between light and matter can be used to probe cellular structure and dynamics, and image tissue architecture. The quantitative nature and high sensitivity of light scattering signals to subtle alterations in tissue morphology, as well as the ability to visualize unstained tissue in vivo, has recently generated significant interest in optical scatter based biosensing and imaging. Here we review the fundamental methodologies used to acquire and interpret optical scatter data. We report on recent findings in this field and present current advances in optical scatter techniques and computational methods. Cellular and tissue data enabled by current advances in optical scatter spectroscopy and imaging stand to impact a variety of biomedical applications including clinical tissue diagnosis, in vivo imaging, drug discovery and basic cell biology. PMID:20617940
Machine-Vision Aids for Improved Flight Operations
NASA Technical Reports Server (NTRS)
Menon, P. K.; Chatterji, Gano B.
1996-01-01
The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), chosen from among various available methods, for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
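The final recognition step, measuring the distance between the input and enrolled samples, can be sketched as nearest-neighbor matching in feature space (the feature values below are invented; the paper's CNN feature extraction is not reproduced):

```python
import numpy as np

def match_identity(query, gallery):
    """Match a query feature vector (e.g. CNN features fused from visible
    and thermal body images) to the closest enrolled sample."""
    dists = np.linalg.norm(gallery - query, axis=1)   # Euclidean distance
    return int(np.argmin(dists)), float(dists.min())

# Three enrolled identities as 2-D toy feature vectors.
gallery = np.array([[0.0, 1.0], [1.0, 0.0], [0.7, 0.7]])
idx, dist = match_identity(np.array([0.9, 0.1]), gallery)
```

The query is accepted as the identity of the nearest gallery vector, optionally subject to a distance threshold for open-set verification.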
Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-03-19
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
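The intensity-transformation step rests on the standard haze degradation model I = J*t + A*(1 - t); once a transmission map t is available, the scene radiance J is recovered by inverting it. A minimal sketch (the transmission map and airlight here are given, whereas the paper derives a context-adaptive, wavelength-dependent map):

```python
import numpy as np

def dehaze(image, transmission, airlight, t_min=0.1):
    """Recover scene radiance J from hazy image I via the degradation model
    I = J*t + A*(1 - t), applied per colour channel; t is clamped below
    to avoid amplifying noise in dense-haze regions."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return (image - airlight) / t + airlight

# Synthetic check: haze a known scene, then invert the model.
J = np.full((4, 4, 3), 0.6)
t = np.full((4, 4), 0.5)
A = np.array([0.9, 0.9, 0.9])
I = J * t[..., None] + A * (1 - t[..., None])
print(np.allclose(dehaze(I, t, A), J))  # -> True
```

The wavelength-adaptive element of the paper would amount to estimating a separate transmission map per channel rather than the single map used in this sketch.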
The effect of ambient lighting on Laser Doppler Imaging of a standardized cutaneous injury model.
Pham, Alan Chuong Q; Hei, Erik La; Harvey, John G; Holland, Andrew Ja
2017-01-01
The aim of this study was to investigate the potential confounding effects of four different types of ambient lighting on the results of Laser Doppler Imaging (LDI) of a standardized cutaneous injury model. After applying a mechanical stimulus to the anterior forearm of a healthy volunteer and inducing a wheal and arteriolar flare (the Triple response), we used a Laser Doppler Line Scanner (LDLS) to image the forearm under four different types of ambient lighting: light-emitting-diode (LED), compact fluorescent lighting (CFL), halogen, daylight, and darkness as a control. A spectrometer was used to measure the intensity of light energy at 785 nm, the wavelength used by the scanner for measurement under each type of ambient lighting. Neither the LED nor CFL bulbs emitted detectable light energy at a wavelength of 785 nm. The color-based representation of arbitrary perfusion unit (APU) values of the Triple response measured by the scanner was similar between darkness, LED, and CFL light. Daylight emitted 2 mW at 785 nm, with a slight variation tending more towards lower APU values compared to darkness. Halogen lighting emitted 6 mW of light energy at 785 nm rendering the color-based representation impossible to interpret. Halogen lighting and daylight have the potential to confound results of LDI of cutaneous injuries whereas LED and CFL lighting did not. Any potential sources of daylight should be reduced and halogen lighting completely covered or turned off prior to wound imaging.
Imaging skeletal muscle with linearly polarized light
NASA Astrophysics Data System (ADS)
Li, X.; Ranasinghesagara, J.; Yao, G.
2008-04-01
We developed a polarization sensitive imaging system that can acquire reflectance images in turbid samples using incident light of different polarization states. Using this system, we studied polarization imaging on bovine sternomandibularis muscle strips using light of two orthogonal linearly polarized states. We found the obtained polarization sensitive reflectance images had interesting patterns depending on the polarization states. In addition, we computed four elements of the Mueller matrix from the acquired images. As a comparison, we also obtained polarization images of a 20% Intralipid® solution and compared the results with those from muscle samples. We found that the polarization imaging patterns from the Intralipid solution can be described with a model based on the single-scattering approximation. However, the polarization images in muscle had distinct patterns and cannot be explained by this simple model. These results implied that the unique structural properties of skeletal muscle play important roles in modulating the propagation of polarized light.
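With two orthogonal linear input states and two analyzer states, four co-/cross-polarized images determine four linear Mueller-matrix elements. The sketch below uses one common sign convention (I_ab = 0.5*[(m00 + a*m01) + b*(m10 + a*m11)], with a, b = +1 for H and -1 for V); the paper does not specify its convention, so this is an assumption.

```python
import numpy as np

def mueller_elements(i_hh, i_hv, i_vh, i_vv):
    """Four linear Mueller-matrix elements from co-/cross-polarized images.
    I_ab: input polarization a, analyzer b (H or V), under the convention
    stated in the lead-in; works elementwise on whole image arrays."""
    m00 = (i_hh + i_hv + i_vh + i_vv) / 2.0
    m01 = (i_hh + i_hv - i_vh - i_vv) / 2.0
    m10 = (i_hh - i_hv + i_vh - i_vv) / 2.0
    m11 = (i_hh - i_hv - i_vh + i_vv) / 2.0
    return m00, m01, m10, m11

# Round-trip check against a known set of elements.
m = {"00": 1.0, "01": 0.2, "10": 0.3, "11": 0.4}
def forward(a, b):
    return 0.5 * ((m["00"] + a * m["01"]) + b * (m["10"] + a * m["11"]))
ims = mueller_elements(forward(1, 1), forward(1, -1), forward(-1, 1), forward(-1, -1))
print([round(v, 6) for v in ims])  # -> [1.0, 0.2, 0.3, 0.4]
```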
Survey of on-road image projection with pixel light systems
NASA Astrophysics Data System (ADS)
Rizvi, Sadiq; Knöchelmann, Marvin; Ley, Peer-Phillip; Lachmayer, Roland
2017-12-01
HID, LED and laser-based high resolution automotive headlamps, lately known as 'pixel light systems', are at the forefront of the developing technologies paving the way for autonomous driving. In addition to light distribution capabilities that outperform Adaptive Front Lighting and Matrix Beam systems, pixel light systems provide the possibility of image projection directly onto the street. The underlying objective is to improve the driving experience, in any given scenario, in terms of safety, comfort and interaction for all road users. The focus of this work is to conduct a short survey on this state-of-the-art image projection functionality. Holistic research regarding the image projection functionality can be divided into three major categories: scenario selection, technological development and evaluation design. Consequently, the work presented in this paper is divided into three short studies. Section 1 provides a brief introduction to pixel light systems and a justification for the approach adopted for this study. Section 2 deals with the selection of scenarios (and driving maneuvers) where image projection can play a critical role. Section 3 discusses high power LED and LED array based prototypes that are currently under development. Section 4 demonstrates results from an experiment conducted to evaluate the illuminance of an image space projected using a pixel light system prototype developed at the Institute of Product Development (IPeG). Findings from this work can help to identify and advance future research relating to: further development of pixel light systems, scenario planning, examination of optimal light sources, behavioral response studies, etc.
Full-frame, programmable hyperspectral imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Steven P.; Graff, David L.
A programmable, many-band spectral imager based on addressable spatial light modulators (ASLMs), such as micro-mirror, micro-shutter or liquid-crystal arrays, is described. Capable of collecting at once, without scanning, a complete two-dimensional spatial image with ASLM spectral processing applied simultaneously to the entire image, the invention employs optical assemblies wherein light from all image points is forced to impinge at the same angle onto the dispersing element, eliminating interplay between spatial position and wavelength. This is achieved, as examples, using telecentric optics to image light at the required constant angle, or with micro-optical array structures, such as micro-lens or capillary arrays, that aim the light on a pixel-by-pixel basis. Light of a given wavelength then emerges from the disperser at the same angle for all image points, is collected at a unique location for simultaneous manipulation by the ASLM, then recombined with other wavelengths to form a final spectrally-processed image.
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effects on image characteristics. Research results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to determine correctly the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.
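The two illumination-sensitive variables named above, segmented area and colour, can be sketched with a fixed-threshold measurement; under-illumination then shrinks or destroys the segmented region. This is an illustrative grey-level version, not the TEP-4 machine's actual pipeline.

```python
import numpy as np

def measure(image_gray, thresh=50):
    """Pixel area of the segmented fruit region and its mean grey level,
    the two variables the study found sensitive to illumination."""
    mask = image_gray > thresh
    area = int(mask.sum())
    mean_level = float(image_gray[mask].mean()) if area else 0.0
    return area, mean_level

# Synthetic tomato: bright disc on dark background, imaged at two light levels.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
bright = np.where(disc, 200, 10).astype(float)
dim = bright * 0.2                      # under-illuminated capture
print(measure(bright)[0] > measure(dim)[0])  # -> True
```

An adaptive threshold (e.g. chosen from the image histogram) would make the area measurement less sensitive to lamp ageing, which is exactly the failure mode the study probes.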
A comparative study of scintillator combining methods for flat-panel X-ray image sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Lim, K. T.; Kim, G.; Cho, G.
2018-02-01
X-ray transmission imaging based on the scintillation detection method is the most widely used radiation technique, particularly in the medical and industrial areas. As the name suggests, scintillation detection uses a scintillator as an intermediate material to convert incoming radiation into visible-light photons. Among different types of scintillators, CsI(Tl) in a columnar configuration is the most popular type used for applications that require an energy less than 150 keV, due to its capability of achieving a high spatial resolution with a reduced light spreading effect. In this study, different methods of combining a scintillator with a light-receiving unit are investigated and their effects on image quality are compared. Three such methods were selected: an upward- or downward-oriented needle structure of CsI(Tl), deposition of a coating layer around the CsI(Tl), and insertion of a fiber-optic plate (FOP). A charge-coupled device was chosen to serve as the light-receiving unit for the proposed system. From the results, the needle direction of the CsI(Tl) had no significant effect on the X-ray image. In contrast, deposition of the coating material around the CsI(Tl) showed a 17.3% reduction in the detective quantum efficiency (DQE). Insertion of the FOP increased the spatial resolution by 38%; however, it decreased the light yield in the acquired image by 56%. In order to obtain the maximum scintillation performance in X-ray imaging, not only the reflection material but also the bonding method must be considered when combining the scintillator with the light-receiving unit. In addition, the use of an FOP should be decided carefully based on the purpose of the X-ray imaging, e.g., image sharpness or signal-to-noise ratio (SNR).
Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-12
Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.
Organic-on-silicon complementary metal–oxide–semiconductor colour image sensors
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-01
Complementary metal–oxide–semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor. PMID:25578322
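The organic-on-Si architecture can be sketched as a reconstruction problem: a full-resolution green plane from the OPD layer sits over a red/blue mosaic from the Si photodiodes. The layout and the 4-neighbour fill below are illustrative assumptions, not the paper's actual readout or demosaicing pipeline.

```python
import numpy as np

def reconstruct_rgb(g_full, rb_mosaic):
    """Combine a full-resolution green plane (organic photodiode layer) with
    a red/blue checkerboard from the underlying Si photodiodes. Assumed
    layout: red on (row+col)-even sites, blue on odd sites; missing sites
    are filled by a simple 4-neighbour average."""
    h, w = rb_mosaic.shape
    rows, cols = np.indices((h, w))
    red_sites = (rows + cols) % 2 == 0
    out = np.zeros((h, w, 3))
    out[..., 1] = g_full
    padded = np.pad(rb_mosaic, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out[..., 0] = np.where(red_sites, rb_mosaic, neigh)   # red plane
    out[..., 2] = np.where(~red_sites, rb_mosaic, neigh)  # blue plane
    return out

# Uniform scene: red sites read 0.8, blue sites 0.2, OPD green reads 0.5.
mosaic = np.where(np.indices((6, 6)).sum(axis=0) % 2 == 0, 0.8, 0.2)
rgb = reconstruct_rgb(np.full((6, 6), 0.5), mosaic)
print(rgb[2, 3])  # -> [0.8 0.5 0.2]
```

The point the sketch makes concrete is that only the red and blue planes need interpolation; green, the channel the eye is most sensitive to, is sampled at every pixel by the organic layer.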
Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Schairer, Edward T.
2011-01-01
Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.
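The core check in such camera-placement planning is a projection test: project candidate target points through a virtual camera and verify they land on the sensor. A minimal pinhole sketch (axis-aligned camera, no lens distortion; the production tools model far more, including light sources and occlusion):

```python
import numpy as np

def in_view(points_world, cam_pos, focal_px, sensor_px):
    """Project world points through a pinhole camera looking down +Z from
    cam_pos and report which land inside the sensor bounds."""
    p = np.asarray(points_world, float) - np.asarray(cam_pos, float)
    uv = focal_px * p[:, :2] / p[:, 2:3] + np.array(sensor_px) / 2.0
    return (uv >= 0).all(axis=1) & (uv < sensor_px).all(axis=1), uv

pts = np.array([[0.0, 0.0, 2.0], [3.0, 0.0, 2.0]])   # metres in front of camera
visible, uv = in_view(pts, cam_pos=[0, 0, 0], focal_px=500, sensor_px=(640, 480))
print(visible)  # -> [ True False]
```

Sweeping `cam_pos` over candidate mounting locations and re-running the test is the essence of verifying optical coverage before tunnel occupancy begins.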
Research and application on imaging technology of line structure light based on confocal microscopy
NASA Astrophysics Data System (ADS)
Han, Wenfeng; Xiao, Zexin; Wang, Xiaofen
2009-11-01
In 2005, the theory of line structured light confocal microscopy was first put forward in China by Xingyu Gao and Zexin Xiao at the Institute of Opt-mechatronics of Guilin University of Electronic Technology. Although the lateral resolution of line confocal microscopy can only reach or approach that of traditional dot confocal microscopy, it has two advantages over the traditional approach. First, by substituting line scanning for dot scanning, plane imaging requires only one-dimensional scanning, which greatly improves imaging speed and simplifies the scanning mechanism. Second, the light throughput is greatly improved by substituting a detection hairline for the detection pinhole, so a low-illumination CCD can be used directly to collect images instead of a photoelectric intensifier. In order to apply line confocal microscopy in a practical system, and based on further research on its theory, an imaging technology of line structured light is put forward under the condition that confocal microscopy is implemented. Its validity and reliability are verified by experiments.
Twin imaging phenomenon of integral imaging.
Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi
2018-05-14
The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are only suited to integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused illumination reflective integral imaging system. Interactive twin images, including a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems with 80 μm thickness have also been fabricated to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED is moved, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.
Illuminant-adaptive color reproduction for mobile display
NASA Astrophysics Data System (ADS)
Kim, Jong-Man; Park, Kee-Hyon; Kwon, Oh-Seol; Cho, Yang-Ho; Ha, Yeong-Ho
2006-01-01
This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for a mobile display. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena decrease the color gamut of a mobile display by increasing the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux-sensor, then the flare is calculated based on the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear to the input luminance, and it also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated according to the physically reduced chroma resulting from flare phenomena. The reduced chroma value is calculated according to the flare for each intensity. The chroma compensation method to maintain the original image's chroma is applied differently for each hue plane, as the flare affects each hue plane differently. The enhanced chroma also respects the gamut boundary. Based on experimental observations, the outdoor luminance intensity generally ranges from 1,000 lux to 30,000 lux. Thus, in the case of an outdoor environment, i.e., greater than 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively to an outdoor environment.
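The two compensation stages can be sketched numerically. Both formulas below are illustrative assumptions: the flare model is a plain ambient-times-reflectance scaling, and the lightness stage uses an ambient-dependent gamma in place of the paper's inverse cone response curve.

```python
import numpy as np

def flare_luminance(ambient_lux, reflection_ratio):
    """Flare added to the display, modelled as ambient illuminance scattered
    by the panel surface (illustrative scaling; units folded into the ratio)."""
    return ambient_lux * reflection_ratio

def compensate_lightness(y, ambient_lux, y_max=1.0, strength=2e-5):
    """Boost displayed luminance so the perceived response stays roughly
    linear as the eye adapts to brighter surroundings. Gamma-style sketch;
    the exponent law is an assumption, not the paper's cone model."""
    gamma = 1.0 / (1.0 + strength * ambient_lux)   # brighter ambient -> lower gamma
    return y_max * (np.clip(y / y_max, 0, 1) ** gamma)

y = np.array([0.1, 0.4, 0.8])
print(compensate_lightness(y, 20000) >= y)  # -> [ True  True  True]
```

The qualitative behaviour matches the abstract: the brighter the surroundings, the more mid-tone luminance is lifted, and the flare term grows linearly with ambient lux.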
Light-emitting diode-based multiwavelength diffuse optical tomography system guided by ultrasound
Yuan, Guangqian; Alqasemi, Umar; Chen, Aaron; Yang, Yi; Zhu, Quing
2014-01-01
Laser diodes are widely used in diffuse optical tomography (DOT) systems but are typically expensive and fragile, while light-emitting diodes (LEDs) are cheaper and are also available in the near-infrared (NIR) range with adequate output power for imaging deeply seated targets. In this study, we introduce a new low-cost DOT system using LEDs of four wavelengths in the NIR spectrum as light sources. The LEDs were modulated at 20 kHz to avoid ambient light. The LEDs were distributed on a hand-held probe and a printed circuit board was mounted at the back of the probe to separately provide switching and driving current to each LED. Ten optical fibers were used to couple the reflected light to 10 parallel photomultiplier tube detectors. A commercial ultrasound system provided simultaneous images of target location and size to guide the image reconstruction. A frequency-domain (FD) laser-diode-based system with ultrasound guidance was also used to compare the results obtained from those of the LED-based system. Results of absorbers embedded in intralipid and inhomogeneous tissue phantoms have demonstrated that the LED-based system provides a comparable quantification accuracy of targets to the FD system and has the potential to image deep targets such as breast lesions. PMID:25473884
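Modulating the LEDs at 20 kHz lets the detector reject unmodulated ambient light by demodulating at the carrier frequency. A digital lock-in sketch of that rejection (synthetic trace; the actual system's demodulation electronics are not described in the abstract):

```python
import numpy as np

def lockin_amplitude(signal, fs, f_mod):
    """Amplitude of the f_mod-modulated component of a detector trace.
    Multiplying by quadrature references and averaging cancels DC ambient
    light and any component not at the modulation frequency."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_mod * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_mod * t))
    return 2.0 * np.hypot(i, q)

fs, f_mod, n = 1_000_000, 20_000, 100_000
t = np.arange(n) / fs
trace = 0.3 * np.cos(2 * np.pi * f_mod * t) + 5.0   # modulated light + ambient DC
print(round(lockin_amplitude(trace, fs, f_mod), 3))  # -> 0.3
```

The large DC ambient term (5.0) averages out exactly over whole modulation cycles, leaving only the 0.3 amplitude of the LED signal.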
NASA Astrophysics Data System (ADS)
Skotheim, Øystein; Schumann-Olsen, Henrik; Thorstensen, Jostein; Kim, Anna N.; Lacolle, Matthieu; Haugholt, Karl-Henrik; Bakke, Thor
2015-03-01
Structured light is a robust and accurate method for 3D range imaging in which one or more light patterns are projected onto the scene and observed with an off-axis camera. Commercial sensors typically utilize DMD- or LCD-based LED projectors, which produce good results but have a number of drawbacks, e.g. limited speed, limited depth of focus, large sensitivity to ambient light and somewhat low light efficiency. We present a 3D imaging system based on a laser light source and a novel tip-tilt-piston micro-mirror. Optical interference is utilized to create sinusoidal fringe patterns. The setup allows fast and easy control of both the frequency and the phase of the fringe patterns by altering the axes of the micro-mirror. For 3D reconstruction we have adapted a Dual Frequency Phase Shifting method which gives robust range measurements with sub-millimeter accuracy. The use of interference for generating sine patterns provides high light efficiency and good focusing properties. The use of a laser and a bandpass filter allows easy removal of ambient light. The fast response of the micro-mirror in combination with a high-speed camera and real-time processing on the GPU allows highly accurate 3D range image acquisition at video rates.
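The Dual Frequency Phase Shifting reconstruction can be sketched in two steps: a standard four-step wrapped-phase computation, then fringe-order selection using a coarse low-frequency phase. The synthetic fringes and the frequency ratio of 8 are assumptions for illustration; the paper's exact variant is not specified in the abstract.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(i3 - i1, i0 - i2)

def dual_frequency_unwrap(phi_high, phi_low, ratio):
    """Use the coarse (low-frequency) phase to pick the fringe order k of
    the fine (high-frequency) phase: the dual-frequency idea."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic fringes over a known absolute phase exceeding one 2*pi period.
true_phase = np.linspace(0, 6 * np.pi, 50)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
imgs = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]
wrapped = four_step_phase(*imgs)
coarse = true_phase / 8.0                      # low-frequency phase, ratio 8
recovered = dual_frequency_unwrap(wrapped, coarse, ratio=8)
print(np.allclose(recovered, true_phase))  # -> True
```

The high-frequency fringes supply the sub-millimetre precision; the low-frequency phase only needs to be accurate to within half a fine fringe to select the correct order.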
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving image quality in low-light shooting is one of users' main needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving the image quality during low light shooting.
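The selective-transfer idea can be sketched with a plain local-dissimilarity gate: mono detail is added to the colour luminance only where the two views agree, so parallax or occlusion mismatches do not produce fusion artifacts. The box-filter reliability test below is a simplified stand-in for the paper's BJND-based selection.

```python
import numpy as np

def box_mean(img, r=1):
    """Local mean over a (2r+1)x(2r+1) window with edge padding."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + img.shape[0], r + dx : r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def selective_detail_transfer(lum, mono, max_dissim=0.1):
    """Add the mono camera's high-frequency detail to the colour luminance
    only where the two views agree locally (simplified reliability gate)."""
    detail = mono - box_mean(mono)              # high-pass of the mono image
    dissim = np.abs(box_mean(lum) - box_mean(mono))
    reliable = dissim < max_dissim
    return np.where(reliable, lum + detail, lum), reliable

lum = np.full((8, 8), 0.5); lum[:, 4:] = 0.6
mono = lum.copy(); mono[0:3, 0:3] = 0.9        # parallax/occlusion mismatch
fused, ok = selective_detail_transfer(lum, mono)
print(bool(ok[6, 6]), bool(ok[1, 1]))  # -> True False
```

In the mismatched corner the colour pixel passes through untouched, which is exactly the behaviour that prevents ghosting in the fused result.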
System and method for optical fiber based image acquisition suitable for use in turbine engines
Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin
2017-05-16
A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis and a set of estimated image signals are generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion-video of a region of interest within a turbine engine.
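The sampled-to-estimated step described above is a change of basis: signals sampled in a sensing basis are solved back to image signals, which can then be expressed in a representation basis. A least-squares sketch with toy stand-in bases (the patent's optical modulation device defines the real sensing basis):

```python
import numpy as np

def to_representation_basis(samples, sensing, representation):
    """Estimate image signals from measurements in a sensing basis, then
    express them in a representation basis (least-squares sketch of the
    sampled -> estimated transformation)."""
    # samples = sensing @ image  =>  image_hat solves the system in least squares
    image_hat, *_ = np.linalg.lstsq(sensing, samples, rcond=None)
    coeffs = representation.T @ image_hat        # coordinates in the new basis
    return image_hat, coeffs

rng = np.random.default_rng(0)
image = rng.standard_normal(16)
sensing = rng.standard_normal((24, 16))          # overdetermined measurements
representation = np.linalg.qr(rng.standard_normal((16, 16)))[0]  # orthonormal
est, coeffs = to_representation_basis(sensing @ image, sensing, representation)
print(np.allclose(est, image), np.allclose(representation @ coeffs, est))  # -> True True
```

With fewer measurements than unknowns, the same step would need a sparsity prior (compressed-sensing style) instead of plain least squares; the overdetermined case keeps the sketch exact.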
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not particularly aimed at large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing effects. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
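The shift-invariance advantage of SWT over DWT comes from never down-sampling: every image contributes detail coefficients at full resolution, and fusion keeps the largest-magnitude detail per pixel. A single-level, box-kernel sketch of that scheme (the paper's wavelet kernel and level count are not specified in the abstract):

```python
import numpy as np

def smooth(img):
    """Undecimated 3x3 low-pass: one level of a stationary-transform-style
    decomposition; detail = image - smooth(image)."""
    p = np.pad(img, 1, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            acc += p[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return acc / 9.0

def fuse(images):
    """Max-absolute-detail fusion of many refocused images: per pixel, keep
    the detail coefficient with the largest magnitude. No down-sampling,
    so no shift-variance ringing."""
    lows = np.array([smooth(im) for im in images])
    highs = np.array([im - lo for im, lo in zip(images, lows)])
    pick = np.abs(highs).argmax(axis=0)
    idx = np.indices(pick.shape)
    return lows.mean(axis=0) + highs[pick, idx[0], idx[1]]

rng = np.random.default_rng(1)
im = rng.random((8, 8))
print(np.allclose(fuse([im, im]), im))  # -> True
```

With genuinely differently-focused inputs, the max-abs rule selects each pixel's detail from whichever image is sharpest there, which is what yields the all-in-focus result.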
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focused plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
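The blur-difference de-cluttering step can be sketched with a per-pixel sharpness cue: suppress pixels whose local Laplacian energy is low, since defocused background is smooth. This is an illustrative stand-in for the paper's confocal pipeline, not its actual algorithm.

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian with edge padding."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def declutter(img, thresh):
    """Zero pixels whose sharpness (|Laplacian|) falls below a threshold,
    keeping only in-focus structure; smooth defocused clutter is removed."""
    sharpness = np.abs(laplacian(img))
    return np.where(sharpness > thresh, img, 0.0)

img = np.zeros((8, 8)); img[:, :4] = 0.5               # smooth (defocused) clutter
img[2:6, 5:7] = np.array([[0.2, 0.9], [0.9, 0.2]] * 2) # sharp in-focus texture
out = declutter(img, thresh=0.3)
print(bool(out[4, 1] == 0.0), bool(out[2, 6] == 0.9))  # -> True True
```

A practical version would aggregate the sharpness cue over regions rather than single pixels, so that the flat interior of an in-focus object is retained along with its sharp edges.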
HUBBLE FINDS A BARE BLACK HOLE POURING OUT LIGHT
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Hubble Space Telescope has provided a never-before-seen view of a warped disk flooded with a torrent of ultraviolet light from hot gas trapped around a suspected massive black hole. [Right] This composite image of the core of the galaxy was constructed by combining a visible light image taken with Hubble's Wide Field Planetary Camera 2 (WFPC2), with a separate image taken in ultraviolet light with the Faint Object Camera (FOC). While the visible light image shows a dark dust disk, the ultraviolet image (color-coded blue) shows a bright feature along one side of the disk. Because Hubble sees ultraviolet light reflected from only one side of the disk, astronomers conclude the disk must be warped like the brim of a hat. The bright white spot at the image's center is light from the vicinity of the black hole which is illuminating the disk. [Left] A ground-based telescopic view of the core of the elliptical galaxy NGC 6251. The inset box shows Hubble Space Telescope's field of view. The galaxy is 300 million light-years away in the constellation Ursa Minor. Photo Credit: Philippe Crane (European Southern Observatory), and NASA
Canny edge-based deformable image registration
NASA Astrophysics Data System (ADS)
Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping
2017-02-01
This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
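The sparse-to-dense structure of such a method can be sketched in two parts: find strong-edge pixels, then interpolate the displacements measured there into a dense deformation field. Both substitutions below are labelled stand-ins: a Sobel magnitude threshold replaces the Canny detector, and inverse-distance weighting replaces the paper's sparse interpolation and stability criterion.

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude from simple central differences (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    gx = p[1:-1, 2:] - p[1:-1, :-2]
    gy = p[2:, 1:-1] - p[:-2, 1:-1]
    return np.hypot(gx, gy)

def dense_field(points, disps, shape, eps=1e-6):
    """Sparse-to-dense interpolation of displacements measured at edge
    pixels, via inverse-distance weighting."""
    yy, xx = np.indices(shape)
    field = np.zeros(shape + (2,))
    weights = np.zeros(shape)
    for (py, px), d in zip(points, disps):
        w = 1.0 / (np.hypot(yy - py, xx - px) ** 2 + eps)
        field += w[..., None] * np.asarray(d, float)
        weights += w
    return field / weights[..., None]

img = np.zeros((6, 6)); img[:, 3:] = 1.0
edges = np.argwhere(sobel_mag(img) > 0.5)           # pixels flanking the step
f = dense_field(edges, [(0.0, 1.0)] * len(edges), img.shape)
print(np.allclose(f, [0.0, 1.0]))  # -> True
```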
Welding studs detection based on line structured light
NASA Astrophysics Data System (ADS)
Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao
2018-01-01
The quality of welding studs is significant for the installation and localization of car components in the process of automobile general assembly. A welding stud detection method based on line structured light is proposed. Firstly, an adaptive threshold is designed to calculate the binary images. Then, the light stripes of the image are extracted after skeleton line extraction and morphological filtering. The direction vector of the main light stripe is calculated using the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and along its vertical orientation are computed to obtain gray projection curves, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is lower than 0.1%, making it suitable for automobile manufacturing.
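The projection step can be sketched directly: summing the binarized stripe image along each axis yields two curves, and a stud appears as a localized peak where the stripe bulges. For simplicity the axes below are assumed aligned with the stripe direction; the method itself first estimates that direction from the stripe's length.

```python
import numpy as np

def gray_projections(binary):
    """Projections of a binary stripe image along each axis; a stud shows
    up as a localized peak in the projection curves."""
    return binary.sum(axis=0), binary.sum(axis=1)

stripe = np.zeros((10, 20)); stripe[4:6, :] = 1       # main light stripe
stripe[2:8, 9:11] = 1                                  # bump where a stud sits
along, across = gray_projections(stripe)
print(int(along.argmax()))  # -> 9
```

In the real system the two projections would be taken along the estimated stripe direction and its normal, so the peak localizes the stud regardless of the stripe's orientation in the image.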
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
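The photo-selection optimization reduces to finding weights whose weighted photo sum matches the target. An unconstrained least-squares sketch of that step (the paper's version is guided by interactive user sketching and additionally favours a small set of photos):

```python
import numpy as np

def select_weights(photos, target):
    """Least-squares weights making a weighted sum of single-light photos
    match the user's target image; small weights can then be pruned to
    pick the few photos actually needed."""
    a = np.stack([p.ravel() for p in photos], axis=1)
    w, *_ = np.linalg.lstsq(a, target.ravel(), rcond=None)
    return w

rng = np.random.default_rng(2)
photos = [rng.random((4, 4)) for _ in range(3)]       # photos under single lights
target = 0.7 * photos[0] + 0.3 * photos[2]            # user-desired lighting
w = select_weights(photos, target)
print(np.allclose(w, [0.7, 0.0, 0.3]))  # -> True
```

This works because light transport is additive: the photo of a scene under several lights equals the sum of photos under each light alone, so relighting is linear in the weights.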
The effect of ambient lighting on Laser Doppler Imaging of a standardized cutaneous injury model
Pham, Alan Chuong Q; Hei, Erik La; Harvey, John G; Holland, Andrew JA
2017-01-01
Objective: The aim of this study was to investigate the potential confounding effects of four different types of ambient lighting on the results of Laser Doppler Imaging (LDI) of a standardized cutaneous injury model. Methods: After applying a mechanical stimulus to the anterior forearm of a healthy volunteer and inducing a wheal and arteriolar flare (the Triple response), we used a Laser Doppler Line Scanner (LDLS) to image the forearm under four different types of ambient lighting: light-emitting diode (LED), compact fluorescent lighting (CFL), halogen, and daylight, with darkness as a control. A spectrometer was used to measure the intensity of light energy at 785 nm, the wavelength used by the scanner for measurement, under each type of ambient lighting. Results: Neither the LED nor CFL bulbs emitted detectable light energy at a wavelength of 785 nm. The color-based representation of arbitrary perfusion unit (APU) values of the Triple response measured by the scanner was similar between darkness, LED, and CFL light. Daylight emitted 2 mW at 785 nm, with a slight variation tending more towards lower APU values compared to darkness. Halogen lighting emitted 6 mW of light energy at 785 nm, rendering the color-based representation impossible to interpret. Conclusions: Halogen lighting and daylight have the potential to confound results of LDI of cutaneous injuries, whereas LED and CFL lighting do not. Any potential sources of daylight should be reduced and halogen lighting completely covered or turned off prior to wound imaging. PMID:29348978
Hiding Information Using Different Lighting Color Images
NASA Astrophysics Data System (ADS)
Majead, Ahlam; Awad, Rash; Salman, Salema S.
2018-05-01
The host medium for the secret message is one of the important design choices in steganography. In this study, we examined which kind of color image is best suited to carry a secret image. The steganography approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method makes lossless and unnoticeable changes to the contrast of the carrier color image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim was to study the process of hiding data in colored images with different light intensities. The effect of embedding was examined on images classified by a minimum-distance classifier and on the amount of noise and distortion in the image, using the histogram and statistical characteristics of the cover image. The results showed that images taken at different light intensities can be used efficiently to hide data with least-significant-bit substitution; the method succeeded in concealing textual data without visibly distorting the original (low-light) image. A digital image segmentation technique was used to distinguish small regions affected by embedding. The result is that smooth homogeneous areas are less affected by hiding than brightly lit areas. Dark color images can therefore be used to exchange secret messages between two parties for covert communication with good security.
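A minimal sketch of plain LSB substitution (illustrative only; the paper combines it with an LWT decomposition, which is omitted here):

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write a bit array into the least significant bits of the first
    len(bits) pixels of the cover image."""
    flat = cover.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return stego.ravel()[:n_bits] & 1

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, secret)
recovered = extract_lsb(stego, len(secret))
```

No pixel value changes by more than 1, which is why the embedding is imperceptible, and why it is even less visible in dark, low-contrast regions.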
Qu, Yufu; Zou, Zhaofan
2017-10-16
Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of scattering and attenuation of light caused by suspended particles, and therefore, image dehazing has attracted considerable research attention. The current polarization-based dehazing algorithms strongly rely on the presence of a "sky area", and thus, the selection of model parameters is susceptible to external interference of high-brightness objects and strong light sources. In addition, the noise of the restored image is large. In order to solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are irrelevant, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. Then, the hazy image is subjected to dehazing. Subsequently, a filtering denoising algorithm, which combines the polarization difference information and block-matching and 3D (BM3D) filtering, is designed to filter the image smoothly. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models. 
Moreover, except in specular-object scenarios, the dehazed images are superior to those obtained by the methods of Tarel, Fattal, Ren, and Berman under the criteria of no-reference quality assessment (NRQA), the blind/referenceless image spatial quality evaluator (BRISQUE), the blind anisotropic quality index (AQI), and e.
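The polarization-difference dehazing model that such algorithms build on can be sketched as follows. This is a simplified Schechner-style formulation, not the authors' non-sky parameter estimation; the synthetic values are assumptions for a self-consistency check.

```python
import numpy as np

def dehaze_polarization(i_max, i_min, a_inf, p):
    """Recover object radiance from two polarizer-orientation frames.
    i_max, i_min: frames at maximal/minimal airlight polarization;
    a_inf: airlight at infinite distance; p: degree of polarization
    of the airlight (DPA)."""
    i_tot = i_max + i_min                            # total intensity
    airlight = (i_max - i_min) / p                   # airlight map A
    t = np.clip(1.0 - airlight / a_inf, 0.05, 1.0)   # transmission map
    return (i_tot - airlight) / t                    # object radiance

# synthetic check: direct transmission D = 50, airlight A = 100, p = 0.3,
# so t = 1 - 100/200 = 0.5 and the true object radiance is D / t = 100
i_max = np.full((2, 2), 25.0 + 65.0)   # D/2 + A*(1+p)/2
i_min = np.full((2, 2), 25.0 + 35.0)   # D/2 + A*(1-p)/2
radiance = dehaze_polarization(i_max, i_min, a_inf=200.0, p=0.3)
```

The paper's contribution lies in estimating `a_inf` and `p` without a sky region; once those are known, the inversion above is standard.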
Namikawa, Tsutomu; Fujisawa, Kazune; Munekage, Eri; Iwabu, Jun; Uemura, Sunao; Tsujii, Shigehiro; Maeda, Hiromichi; Kitagawa, Hiroyuki; Fukuhara, Hideo; Inoue, Keiji; Sato, Takayuki; Kobayashi, Michiya; Hanazaki, Kazuhiro
2018-04-04
The natural amino acid 5-aminolevulinic acid (ALA) is a protoporphyrin IX (PpIX) precursor and a new-generation photosensitive substance that accumulates specifically in cancer cells. When indocyanine green (ICG) is irradiated with near-infrared (NIR) light, it shifts to a higher energy state and emits infrared light with a longer wavelength than the irradiated NIR light. Photodynamic diagnosis (PDD) using ALA and ICG-based NIR fluorescence imaging has emerged as a new diagnostic technique. Specifically, in laparoscopic examinations for serosa-invading advanced gastric cancer, peritoneal metastases could be detected by ALA-PDD, but not by conventional visible-light imaging. The HyperEye Medical System (HEMS) can visualize ICG fluorescence as color images simultaneously projected with visible light in real time. This ICG fluorescence method is widely applicable, including for intraoperative identification of sentinel lymph nodes, visualization of blood vessels in organ resection, and blood flow evaluation during surgery. Fluorescence navigation by ALA-PDD and NIR using ICG imaging provides good visualization and detection of the target lesions that is not possible with the naked eye. We propose that this technique should be used in fundamental research on the relationship among cellular dynamics, metabolic enzymes, and tumor tissues, and to evaluate clinical efficacy and safety in multicenter cooperative clinical trials.
Digital micromirror devices in Raman trace detection of explosives
NASA Astrophysics Data System (ADS)
Glimtoft, Martin; Svanqvist, Mattias; Ågren, Matilda; Nordberg, Markus; Östmark, Henric
2016-05-01
Imaging Raman spectroscopy based on tunable filters is an established technique for detecting single explosives particles at stand-off distances. However, large light losses are inherent in the design due to sequential imaging at different wavelengths, leading to effective transmission often well below 1%. The use of digital micromirror devices (DMDs) and compressive sensing (CS) in imaging Raman explosives trace detection can improve light throughput and add significant flexibility compared to existing systems. DMDs are based on mature microelectronics technology; they are compact, scalable, and can be customized for specific tasks, including new functions not available with current technologies. This paper focuses on how a DMD can be used to apply CS-based imaging Raman spectroscopy to stand-off explosives trace detection, and evaluates the performance in terms of light throughput, image reconstruction ability, and potential detection limits. This type of setup also makes it possible to combine imaging Raman with non-spatially-resolved fluorescence suppression techniques, such as Kerr gating. The system used consists of a 2nd-harmonic Nd:YAG laser for sample excitation, collection optics, a DMD, a CMOS camera, and a spectrometer with an ICCD camera for signal gating and detection. Initial results for compressive sensing imaging Raman show a stable reconstruction procedure even at low signals and in the presence of interfering background signal. It is also shown to give increased effective light transmission without sacrificing molecular specificity or area coverage compared to filter-based imaging Raman, while adding flexibility so that the setup can be customized for new functionality.
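The compressive-sensing reconstruction idea can be sketched with a generic iterative soft-thresholding (ISTA) solver on simulated DMD-mask measurements. This is a textbook sketch, not the authors' reconstruction code; the mask model and problem sizes are assumptions.

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min 0.5 * ||y - Phi @ x||^2 + lam * ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x + Phi.T @ (y - Phi @ x) / L                      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                       # 64 scene "pixels", 32 DMD patterns
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement masks
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.0, -0.8, 1.2]  # sparse scene
y = Phi @ x_true                          # one detector reading per mask
x_hat = ista(Phi, y)
```

The point of CS here is that `m < n`: fewer mask exposures than pixels, with each exposure collecting light from roughly half the scene, which is where the throughput gain over sequential filtering comes from.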
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous body-based person recognition studies that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction, to overcome the limitations of traditional hand-designed feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method enhances recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
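The final matching step reduces to a nearest-neighbour search in feature space. A minimal sketch with hypothetical toy features (real features would be the fused visible+thermal CNN outputs):

```python
import numpy as np

def match(probe, gallery):
    """Return the index of the enrolled feature vector closest to the
    probe (Euclidean distance), plus all distances."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists)), dists

# toy 2-D "features", one row per enrolled identity
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, dists = match(np.array([0.1, 0.9]), gallery)
```

For verification rather than identification, the minimum distance would be compared against a threshold instead of simply taking the argmin.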
Camera array based light field microscopy
Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai
2015-01-01
This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
Light-sheet enhanced resolution of light field microscopy for rapid imaging of large volumes
NASA Astrophysics Data System (ADS)
Madrid Wolff, Jorge; Castro, Diego; Arbeláez, Pablo; Forero-Shelton, Manu
2018-02-01
Whole-brain imaging is challenging because it demands microscopes with high temporal and spatial resolution, which are often at odds, especially in the context of large fields of view. We have designed and built a light-sheet microscope with digital micromirror illumination and light-field detection. On the one hand, light sheets provide high resolution optical sectioning on live samples without compromising their viability. On the other hand, light field imaging makes it possible to reconstruct full volumes of relatively large fields of view from a single camera exposure; however, its enhanced temporal resolution comes at the expense of spatial resolution, limiting its applicability. We present an approach to increase the resolution of light field images using DMD-based light sheet illumination. To that end, we develop a method to produce synthetic resolution targets for light field microscopy and a procedure to correct the depth at which planes are refocused with rendering software. We measured the axial resolution as a function of depth and show a three-fold potential improvement with structured illumination, albeit by sacrificing some temporal resolution, also three-fold. This results in an imaging system that may be adjusted to specific needs without having to reassemble and realign it. This approach could be used to image relatively large samples at high rates.
NASA Astrophysics Data System (ADS)
Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit
2017-03-01
Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging. It is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system. A clinical ultrasound imaging system is modified to integrate photoacoustic imaging with the ultrasound imaging. Hence, light delivery has been integrated with the ultrasound probe. The angle of light delivery is optimized in this work with respect to the depth of imaging. Optimization was performed based on Monte Carlo simulation for light transport in tissues. Based on the simulation results, the probe holders were fabricated using 3D printing. Similar results were obtained experimentally using phantoms. Phantoms were developed to mimic sentinel lymph node imaging scenario. Also, in vivo sentinel lymph node imaging was done using the same system with contrast agent methylene blue up to a depth of 1.5 cm. The results validate that one can use Monte Carlo simulation as a tool to optimize the probe holder design depending on the imaging needs. This eliminates a trial and error approach generally used for designing a probe holder.
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-01-01
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
Laboratory Demonstration of Axicon-Lens Coronagraph
NASA Astrophysics Data System (ADS)
Choi, Jaeho; Jea, Geonho
2018-01-01
We present the results of laboratory-based experiments on the proposed axicon-lens coronagraph, used in conjunction with a method of noninterferometric quantitative phase imaging for direct imaging of exoplanets. Light passing through tiny holes drilled in a thin metal plate is used to simulate a star and its companions; the light diffracted at the edges of the holes resembles the light from a bright star. These images are inverted (evaginated) about the optical axis beyond the maximum focal length of the first axicon lens. The evaginated images are then cut off using a motorized iris, which preferentially suppresses the central starlight. Various separations between the holes, representing different angular distances, were examined. The laboratory results show that the axicon-lens coronagraph can achieve an inner working angle (IWA) smaller than λ/D together with high-contrast direct imaging. The laboratory axicon-lens coronagraph imaging supports the symbolic-computation results and has potential for direct imaging in exoplanet searches and various other astrophysical applications. The coronagraph setup is simple to build and durable to operate; moreover, it can deliver the planet images to a broadband spectrometric instrument able to investigate the constituents of the planetary system.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging provides multi-dimensional polarization information in addition to traditional intensity information, improving the probability of target detection and recognition. Fusing polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensities were obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion techniques were then applied to the acquired polarization images; several fusion methods with superior performance for turbid media are discussed, and the processing results and data tables are given. Pixel-level, feature-level, and decision-level fusion algorithms were used to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality deteriorates, while the contrast of the fused image is clearly improved over any single image. Finally, the reasons for the increased image contrast are analyzed.
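The DOLP computation from rotated-polarizer intensities follows the standard Stokes formulas; a minimal sketch (the four-orientation sampling scheme is an assumption, since the abstract does not list the angles used):

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensities behind a polarizer at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    return s0, s1, s2, dolp

_, _, _, dolp_pol = stokes_dolp(1.0, 0.5, 0.0, 0.5)     # fully polarized light
_, _, _, dolp_unpol = stokes_dolp(0.5, 0.5, 0.5, 0.5)   # unpolarized light
```

The same formulas apply per pixel on full images, yielding the DOLP maps that are then fused.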
Femtowatt incoherent image conversion from mid-infrared light to near-infrared light
NASA Astrophysics Data System (ADS)
Huang, Nan; Liu, Hongjun; Wang, Zhaolu; Han, Jing; Zhang, Shuan
2017-03-01
We report the experimental conversion imaging of an incoherent continuous-wave dim source from mid-infrared to near-infrared light with a lowest input power of 31 femtowatts (fW). Incoherent mid-infrared images of light emitted from a heat-lamp bulb with an adjustable power supply, at window wavelengths ranging from 2.9 µm to 3.5 µm, are used for upconversion. Sum-frequency generation is realized in a laser cavity, resonant at 1064 nm and pumped by a laser diode at 806 nm, built around a periodically poled lithium niobate (PPLN) crystal. The converted image at ~785 nm, with a resolution of about 120 × 70, is detected with low noise using a silicon-based camera. By optimizing the system parameters, the upconversion quantum efficiency is predicted to be 28% for correctly polarized, on-axis, phase-matched light.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. 
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
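The near-field criterion quoted above follows from the Fraunhofer distance d = 2D²/λ. A quick check (assuming visible light at λ = 550 nm and AEOS's 3.67-m aperture, neither of which is stated explicitly in the text):

```python
def fraunhofer_distance(aperture_m, wavelength_m=550e-9):
    """Near-field extent d = 2 * D**2 / wavelength of an aperture."""
    return 2.0 * aperture_m ** 2 / wavelength_m

d_1m = fraunhofer_distance(1.0)     # ~3.6e6 m, i.e. roughly 3,600 km
d_aeos = fraunhofer_distance(3.67)  # ~4.9e7 m, i.e. well over 46,000 km
```

Both values are consistent with the figures in the text, and the quadratic dependence on aperture explains why a larger telescope is the natural next step for imaging LEO objects.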
A smartphone-based chip-scale microscope using ambient illumination.
Lee, Seung Ah; Yang, Changhuei
2014-08-21
Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
A smartphone-based chip-scale microscope using ambient illumination
Lee, Seung Ah; Yang, Changhuei
2014-01-01
Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the imaging resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system. PMID:24964209
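The pixel super-resolution step can be sketched as naive shift-and-add onto an upsampled grid. This is an illustration under the assumption of known integer high-res shifts; the paper's reconstruction handles arbitrary sub-pixel shifts estimated from the tilt sequence.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive pixel super-resolution: place each low-res frame onto an
    upsampled grid at its (integer high-res) shift and average the
    contributions at every high-res pixel."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::factor, dx::factor] += frame
        cnt[dy::factor, dx::factor] += 1
    return acc / np.maximum(cnt, 1)

# toy check: four half-pixel-shifted low-res views of a known 4x4 scene
hi = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [hi[dy::2, dx::2] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, 2)
```

With a complete set of shifts the high-res scene is recovered exactly; with the irregular shifts produced by hand tilting, the accumulator simply averages whatever samples land on each grid cell.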
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment, such as the interior of a room. We propose a method for generating arbitrary-viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that luminosity over 360 degrees of the surroundings can be captured in one image. We apply the light field method, a technique of Image-Based Rendering (IBR), to generate the arbitrary-viewpoint images. The light field is a kind of database that records the luminosity information in the object space. Because we employ the omni-directional camera to construct the light field, we can collect images of many view directions in the light field. Our method thus allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera and successfully generated arbitrary-viewpoint images for a virtual tour of the environment.
New method of contour image processing based on the formalism of spiral light beams
NASA Astrophysics Data System (ADS)
Volostnikov, Vladimir G.; Kishkin, S. A.; Kotova, S. P.
2013-07-01
The possibility of applying the mathematical formalism of spiral light beams to the problems of contour image recognition is theoretically studied. The advantages and disadvantages of the proposed approach are evaluated; the results of numerical modelling are presented.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-02-15
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in practical projects. This paper proposes an economical, enhanced automated optical guidance system, based on optimization of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED light target was optimized for several qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing, direction location, angle measurement, deflection detection, and auto-focus algorithms, compiled in MATLAB, automate the image processing for computing and judging deflection. After multiple indoor experiments, the guidance system was applied in a hot-water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
Kamarudin, Nur Diyana; Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori
2017-01-01
In tongue diagnosis, the colour of the tongue body carries valuable information about the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement owing to unstable lighting conditions and the naked eye's limited ability to capture the exact colour distribution on the tongue, especially a tongue with multicoloured substance. To overcome this ambiguity, this paper presents a two-stage multicolour classification of the tongue based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range, for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the classification accuracy of the proposed two-stage method in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is 48 seconds.
Adaptive polarization image fusion based on regional energy dynamic weighted average
NASA Astrophysics Data System (ADS)
Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai
2005-11-01
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain considerable redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detail of the scene, combining these images can remove clutter efficiently while maintaining the detailed information. An adaptive polarization image fusion algorithm based on a regional energy dynamic weighted average is proposed in this paper to combine these images. In an experiment and in simulations, most of the clutter is removed by this algorithm. The fusion method is applied under different lighting conditions in simulation, and the influence of the lighting conditions on the fusion results is analyzed.
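The core of a regional-energy weighted average can be sketched as follows; the window size, normalisation, and function names are illustrative, and the paper's dynamic weighting rule may differ in detail:

```python
import numpy as np

def regional_energy(img, win=3):
    # Local energy: sum of squared intensities over a win x win neighborhood,
    # computed by summing shifted copies of the padded, squared image.
    p = win // 2
    padded = np.pad(img.astype(float) ** 2, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse(a, b, win=3, eps=1e-12):
    # Per-pixel weight favors whichever source image has more local energy,
    # so detailed regions dominate while low-energy clutter is averaged out.
    ea, eb = regional_energy(a, win), regional_energy(b, win)
    w = ea / (ea + eb + eps)
    return w * a + (1.0 - w) * b
```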
Optimal integration of daylighting and electric lighting systems using non-imaging optics
NASA Astrophysics Data System (ADS)
Scartezzini, J.-L.; Linhart, F.; Kaegi-Kolisnychenko, E.
2007-09-01
Electric lighting is responsible for a significant fraction of the electricity consumption of non-residential buildings. Making daylight more available in office and commercial buildings can consequently lead to substantial electricity savings, as well as to improvements in occupants' visual performance and wellbeing. Over the last decades, daylighting technologies have been developed for that purpose, some of which, such as anidolic daylighting systems, have proven to be highly efficient. Based on non-imaging optics, these optical devices were designed to achieve an efficient collection and redistribution of daylight within deep office rooms. However, in order to benefit from the substantial daylight provision obtained through these systems and convert it into effective electricity savings, novel electric lighting strategies are required. An optimal integration of high-efficacy light sources and efficient luminaires based on non-imaging optics with anidolic daylighting systems can lead to such novel strategies. Starting from the experience gained through the development of an Anidolic Integrated Ceiling (AIC), this paper presents an optimal integrated daylighting and electric lighting system. Computer simulations based on ray-tracing techniques were used to achieve the integration of 36 W fluorescent tubes and non-imaging reflectors with an advanced daylighting system. Lighting power densities lower than 4 W/m2 can be achieved in this way within the corresponding office room. On-site monitoring of an integrated daylighting and electric lighting system carried out on an experimental solar building confirmed the energy and visual performance of such a system: it showed that low lighting power densities can be achieved by combining an anidolic daylighting system with very efficient electric light sources and luminaires.
NASA Astrophysics Data System (ADS)
Upputuri, Paul Kumar; Pramanik, Manojit
2018-02-01
Phase shifting white light interferometry (PSWLI) has been widely used for optical metrology applications because of its precision, reliability, and versatility. White light interferometry using a monochrome CCD makes the measurement process slow for metrology applications. WLI integrated with a Red-Green-Blue (RGB) CCD camera is finding imaging applications in the fields of optical metrology and bio-imaging. Wavelength-dependent refractive index profiles of biological samples have been computed from colour white light interferograms. In recent years, whole-field refractive index profiles of red blood cells (RBCs), onion skin, fish cornea, etc. have been measured from RGB interferograms. In this paper, we discuss the bio-imaging applications of colour-CCD-based white light interferometry. For industrial applications, the approach makes the measurement faster, easier, cost-effective, and even dynamic by using single-fringe analysis methods.
NASA Astrophysics Data System (ADS)
Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L.; Kozorovitskiy, Yevgenia
2018-05-01
Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi flexibility and steric access makes it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.
Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L; Kozorovitskiy, Yevgenia
2018-05-14
Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi, /sōpī/) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access makes it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from a local-area singular value decomposition of each source image, serves as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used in fusion experiments, and the results are evaluated subjectively and objectively. Both evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Ferradal, Silvina L; Eggebrecht, Adam T; Hassanpour, Mahlega; Snyder, Abraham Z; Culver, Joseph P
2014-01-15
Diffuse optical imaging (DOI) is increasingly becoming a valuable neuroimaging tool when fMRI is precluded. Recent developments in high-density diffuse optical tomography (HD-DOT) overcome previous limitations of sparse DOI systems, providing improved image quality and brain specificity. These improvements in instrumentation prompt the need for advancements in both i) realistic forward light modeling for accurate HD-DOT image reconstruction, and ii) spatial normalization for voxel-wise comparisons across subjects. Individualized forward light models derived from subject-specific anatomical images provide the optimal inverse solutions, but such modeling may not be feasible in all situations. In the absence of subject-specific anatomical images, atlas-based head models registered to the subject's head using cranial fiducials provide an alternative solution. In addition, a standard atlas is attractive because it defines a common coordinate space in which to compare results across subjects. The question therefore arises as to whether atlas-based forward light modeling ensures adequate HD-DOT image quality at the individual and group level. Herein, we demonstrate the feasibility of using atlas-based forward light modeling and spatial normalization methods. Both techniques are validated using subject-matched HD-DOT and fMRI data sets for visual evoked responses measured in five healthy adult subjects. HD-DOT reconstructions obtained with the registered atlas anatomy (i.e. atlas DOT) had an average localization error of 2.7 mm relative to reconstructions obtained with the subject-specific anatomical images (i.e. subject-MRI DOT), and 6.6 mm relative to fMRI data. At the group level, the localization error of atlas DOT reconstruction was 4.2 mm relative to subject-MRI DOT reconstruction, and 6.1 mm relative to fMRI.
These results show that atlas-based image reconstruction provides a viable approach to individual head modeling for HD-DOT when anatomical imaging is not available.
Simulation-Based Evaluation of Light Posts and Street Signs as 3-D Geolocation Targets in SAR Images
NASA Astrophysics Data System (ADS)
Auer, S.; Balss, U.
2017-05-01
The assignment of phase center positions (in 2D or 3D) derived from SAR data to physical objects is challenging for many man-made structures such as buildings or bridges. In contrast, light poles and traffic signs are promising targets for tasks based on 3-D geolocation, as they often show a prominent and spatially isolated appearance. For a detailed understanding of the nature of both targets, this paper presents results of a dedicated simulation case study based on ray tracing methods (simulator RaySAR). For the first time, the appearance of the targets is analyzed in 2D (image plane) and 3D space (world coordinates of the scene model), and reflecting surfaces are identified for the related dominant image pixels. The case study confirms the crucial impact of spatial resolution for light poles and traffic signs and the suitability of light poles as 3-D geolocation targets when the ground surface beneath them is horizontal.
NASA Astrophysics Data System (ADS)
Chun, Wanhee; Do, Dukho; Gweon, Dae-Gab
2013-01-01
We developed a multimodal microscope based on an optical scanning system in order to obtain diverse optical information from the same area of a sample. Multimodal imaging research has mostly relied on commercial microscope platforms, which are easy to use but restrictive when extending imaging modalities. In this work, the beam scanning optics, notably including a relay lens, was customized to transfer broadband (400-1000 nm) light to a sample without optical error or loss. The customized scanning optics guarantees the best performance of imaging techniques utilizing light within the design wavelengths. Confocal reflection, confocal fluorescence, and two-photon excitation fluorescence images were obtained through the respective implemented imaging channels to demonstrate imaging feasibility for near-UV, visible, and near-IR continuous light, as well as pulsed light, in the scanning optics. The imaging performance in terms of spatial resolution and image contrast was verified experimentally; the results were satisfactory in comparison with theoretical predictions. The advantages of customization, including low cost, outstanding combining ability, and diverse applications, will help vitalize multimodal imaging research.
A review of snapshot multidimensional optical imaging: measuring photon tags in parallel
Gao, Liang; Wang, Lihong V.
2015-01-01
Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition—also dubbed snapshot imaging—has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications. PMID:27134340
Calibration method for video and radiation imagers
Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN
2011-07-05
The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
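The least squares calibration step above can be sketched as a one-dimensional linear fit; the marker coordinates and function names here are invented for illustration:

```python
import numpy as np

def fit_pixel_to_world(pixel_x, world_x):
    # Least squares line world_x = a * pixel_x + b, fitted from calibration
    # markers visible to both the radiation imager and the optical camera.
    a, b = np.polyfit(pixel_x, world_x, deg=1)
    return a, b

def world_to_pixel(wx, a, b):
    # Invert the fitted mapping to locate an imager pixel from a
    # real-world vehicle coordinate, as in the patent's use case.
    return (wx - b) / a
```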
Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong
2017-05-01
One common method to determine the existence of cavitational activity in power ultrasonics systems is to capture images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted by SL or SCL was detected based on the number of photons. Though this method is effective, it cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable the identification of 'active' sonochemical zones. However, these images often provide only qualitative data, as harvesting light intensity data from the images is tedious and requires high-resolution images. In this work, we propose a new image analysis technique using pseudo-coloured images to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones.
Research on the Improved Image Dodging Algorithm Based on Mask Technique
NASA Astrophysics Data System (ADS)
Yao, F.; Hu, H.; Wan, Y.
2012-08-01
The remote sensing image dodging algorithm based on the Mask technique is a good method for removing uneven lightness within a single image. However, the algorithm has some open problems, such as how to set an appropriate filter size. In order to solve these problems, an improved algorithm is proposed. In the improved algorithm, the original image is divided into blocks, and blocks with different levels of detail are smoothed using low-pass filters with different cut-off frequencies to obtain the background image; in the image after subtraction, regions with different lightness are processed using different linear transformation models. The improved algorithm achieves a better dodging result than the original one and makes the contrast of the whole image more consistent.
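The Mask idea of subtracting a low-pass background estimate can be sketched with a crude block-average low-pass; the paper instead uses per-block cut-off frequencies and region-wise linear transforms, so this simplification is mine:

```python
import numpy as np

def dodge(img, block=8):
    # Estimate the uneven background by block-averaging (a crude low-pass
    # filter), subtract it, and restore the global mean brightness.
    h, w = img.shape
    bg = np.zeros_like(img, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            bg[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return img - bg + img.mean()
```

On an image with a smooth lightness gradient, the dodged result is much flatter while keeping the same overall brightness.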
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-06-30
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
Analysis of off-axis holographic system based on improved Jamin interferometer
NASA Astrophysics Data System (ADS)
Li, Baosheng; Dong, Hang; Chen, Lijuan; Zhong, Qi
2018-02-01
In this paper, an improved interferometer based on the traditional Jamin interferometer is introduced to solve the twin-image problem that appears in on-axis holography. By changing the reflector of the system, the angles of the reference light and the object light projected onto the CCD are adjusted to separate the zero diffraction order, the virtual image, and the real image, thereby eliminating the influence of the twin image. The analysis shows that the system is realizable in theory. After building the system, the hologram of a calibration plate was reconstructed, and the result shows the approach to be feasible.
Assessment of illumination conditions in a single-pixel imaging configuration
NASA Astrophysics Data System (ADS)
Garoi, Florin; Udrea, Cristian; Damian, Cristian; Logofǎtu, Petre C.; Colţuc, Daniela
2016-12-01
Single-pixel imaging based on multiplexing is a promising technique, especially in applications where 2D detectors or raster-scanning imaging are not readily applicable. With this method, Hadamard masks are projected onto a spatial light modulator to encode an incident scene, and a signal is recorded at the photodiode detector for each of these masks. Ultimately, the image is reconstructed on the computer by applying the inverse transform matrix. Various algorithms have been optimized and several spatial light modulators characterized for this task. This work analyses the imaging quality of such a single-pixel arrangement under various illumination conditions. More precisely, the main comparison is made between coherent and incoherent ("white light") illumination and between two multiplexing methods, namely Hadamard and scanning. The quality of the images is assessed by calculating their SNR using two relations. The results show that better images are obtained with "white light" illumination for the first method and with coherent illumination for the second.
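The Hadamard measure-then-invert pipeline described above can be sketched numerically for a noise-free, ideal detector; the function names are mine:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_measure(scene, H):
    # Each row of H is one projected mask; the photodiode records a single
    # dot product of mask and scene per projection.
    return H @ scene.ravel()

def reconstruct(y, H, shape):
    # Hadamard matrices satisfy H H^T = n I, so the inverse transform
    # is just a scaled transpose.
    n = H.shape[0]
    return (H.T @ y / n).reshape(shape)
```

In practice the ±1 entries are realised as pairs of binary masks and the measurements carry noise; the multiplexing advantage then shows up as improved SNR rather than exact recovery.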
Ambient lighting: setting international standards for the viewing of softcopy chest images
NASA Astrophysics Data System (ADS)
McEntee, Mark F.; Ryan, John; Evanoff, Micheal G.; Keeling, Aoife; Chakraborty, Dev; Manning, David; Brennan, Patrick C.
2007-03-01
Clinical radiological judgments are increasingly being made on softcopy LCD monitors. These monitors are found throughout the hospital environment in radiological reading rooms, outpatient clinics, and wards, which means that the ambient lighting where clinical judgments are made from images can vary widely. Inappropriate ambient lighting has several deleterious effects: monitor reflections reduce contrast; veiling glare adds brightness; and the dynamic range and detectability of low contrast objects are limited. Radiological images displayed on LCDs are more sensitive to the impact of inappropriate ambient lighting, and with these devices the problems described above are often more evident. The current work aims to provide data on optimum ambient lighting, based on lesions within chest images; the data provided may be used for the establishment of workable ambient lighting standards. Ambient lighting at 30 cm from the monitor was set at 480 lux (office lighting), 100 lux (WHO recommendation), 40 lux, and <10 lux. All monitors were calibrated to the DICOM part 14 GSDF. Sixty radiologists were presented with 30 chest images, 15 of which contained simulated nodular lesions of varying subtlety and size. Lesions were positioned in accordance with typical clinical presentation and were validated radiologically. Each image was presented for 30 seconds, and viewers were asked to identify any visualized lesion and score it from 1-4 to indicate their confidence level of detection. At the end of the session, sensitivity and specificity were calculated. Analysis of the data suggests that visualization of chest lesions is affected by inappropriate lighting, with chest radiologists demonstrating greater ambient lighting dependency. JAFROC analyses are currently being performed.
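The endpoint statistics of such a reading study reduce to the standard definitions; a sketch with invented per-image detection outcomes (not the study's data):

```python
def sensitivity_specificity(has_lesion, detected):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    # computed over per-image ground truth and reader decisions.
    tp = sum(t and d for t, d in zip(has_lesion, detected))
    tn = sum(not t and not d for t, d in zip(has_lesion, detected))
    fn = sum(t and not d for t, d in zip(has_lesion, detected))
    fp = sum(not t and d for t, d in zip(has_lesion, detected))
    return tp / (tp + fn), tn / (tn + fp)
```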
3D widefield light microscope image reconstruction without dyes
NASA Astrophysics Data System (ADS)
Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.
2015-03-01
3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
Wide-field high spatial frequency domain imaging of tissue microstructure
NASA Astrophysics Data System (ADS)
Lin, Weihao; Zeng, Bixin; Cao, Zili; Zhu, Danfeng; Xu, M.
2018-02-01
Wide-field tissue imaging is usually not capable of resolving tissue microstructure. We present High Spatial Frequency Domain Imaging (HSFDI), a noncontact imaging modality that spatially maps the microscopic scattering structures of tissue over a large field of view. Based on an analytical reflectance model of sub-diffusive light from forward-peaked, highly scattering media, HSFDI quantifies the spatially-resolved parameters of the light scattering phase function from the reflectance of structured light modulated at high spatial frequencies. We demonstrate on ex vivo cancerous tissue that HSFDI yields significant contrast and differentiation of the microstructural parameters between different tissue types and disease states.
A New Concept of Coronagraph using Axicon Lenses
NASA Astrophysics Data System (ADS)
Choi, Jae Ho
2017-06-01
High-contrast direct imaging of faint objects near bright stars is essential for investigating planetary systems. The goal of such efforts is to find and characterize planets similar to Earth, a challenging task because it requires high angular resolution and high dynamic range simultaneously. A coronagraph, which suppresses the bright starlight or active galactic nucleus during direct detection of astrophysical activity, has become one of the essential instruments for imaging exoplanets. In this presentation, a novel concept of a coronagraph using axicon lenses is presented, in conjunction with a method of non-interferometric quantitative phase imaging, for direct imaging of exoplanets. The essential scheme of the axicon-lens coronagraph is apodization, carried out by excluding evaginated images of the planetary system with a pair of axicon lenses. Laboratory coronagraph imaging is carried out with a setup that includes the axicon-lens optics and a phase-contrast imaging unit. A simulated star and its companion are produced by illuminating light through small holes drilled in a thin metal plate; the light diffracted at the edges of the holes resembles the light from a bright star. The images are evaginated about the optical axis by the first axicon lens, and the outer area of the evaginated beams is cut off by an iris, preferentially suppressing the central light of the bright star. A symbolic calculation is also carried out to verify the scheme using a symbolic computation program. The simulation results show that the axicon-lens coronagraph can achieve an inner working angle (IWA) smaller than λ/D. The laboratory imaging and simulation results support its potential for direct imaging of exoplanets and various astrophysical activities.
Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light
NASA Astrophysics Data System (ADS)
Cai, Jianjun; Shen, Xueju; Lei, Ming
2017-11-01
We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and the two masks are then encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light beam. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented with commonly used optical elements, and it combines a diffusion process with a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.
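The first step, separating a real-valued image into two phase-only masks, has a simple closed form once the image is normalised to [0, 2]: any amplitude A in that range can be written as exp(i p1) + exp(i p2) with p1,2 = ±arccos(A/2). The normalisation convention and function names below are mine, not necessarily the paper's:

```python
import numpy as np

def split_into_phase_masks(img):
    # Phase-only decomposition of a real image normalised to [0, 2]:
    # exp(i d) + exp(-i d) = 2 cos(d) = A when d = arccos(A / 2).
    a = np.clip(img, 0.0, 2.0)
    d = np.arccos(a / 2.0)
    return d, -d

def combine(p1, p2):
    # Superpose the two unit-amplitude (phase-only) fields.
    return np.exp(1j * p1) + np.exp(1j * p2)
```

The sine components of the two fields cancel exactly, so recombining the masks returns the original image with no imaginary residue.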
Coherent imaging with incoherent light in digital holographic microscopy
NASA Astrophysics Data System (ADS)
Chmelik, Radim
2012-01-01
A digital holographic microscope (DHM) allows imaging with quantitative phase contrast. It thereby becomes an important instrument: a completely non-invasive tool for contrast-rich intravital observation of living cells and for measuring the cell dry-mass density distribution. A serious drawback of current DHMs is the highly coherent illumination, which degrades the lateral resolution and impairs image quality through coherence noise and parasitic interference. An uncompromising solution to this problem can be found in the Leith concept of incoherent holography. An off-axis hologram can be formed with an arbitrary degree of light coherence in systems equipped with an achromatic interferometer; thus the resolution and image quality typical of incoherent-light wide-field microscopy can be achieved. In addition, advanced imaging modes based on limited coherence can be utilized. A typical example is the coherence-gating effect, which provides a finite axial resolution and makes the DHM image similar to that of a confocal microscope. These possibilities were described theoretically using the formalism of three-dimensional coherent transfer functions and proved experimentally with the coherence-controlled holographic microscope, a DHM based on the Leith achromatic interferometer. Quantitative-phase-contrast imaging with incoherent light is demonstrated by observing living cancer cells and evaluating their motility. The coherence-gating effect is demonstrated by imaging model samples through a scattering layer and living cells inside an opalescent medium.
NASA Astrophysics Data System (ADS)
Masada, Genta
2017-08-01
Two-mode squeezed light is an effective resource for quantum entanglement and shows a non-classical correlation between the two optical modes. We are developing a two-mode squeezed light source to explore the possibility of quantum radar based on quantum illumination theory, in which the error probability for discriminating target presence or absence is expected to improve even in a lossy and noisy environment. We also expect to apply the two-mode squeezed light source to quantum imaging. In this work, we generated two-mode squeezed light and verified its quantum entanglement properties with a view towards quantum radar and imaging. First, we generated two independent single-mode squeezed light beams using two sub-threshold optical parametric oscillators containing periodically-poled potassium titanyl phosphate crystals for the second-order nonlinear interaction. The two single-mode squeezed beams are combined on a half mirror with a relative optical phase of 90° between the optical fields, generating entangled two-mode squeezed light beams. We observe the correlation variances between the quadrature phase amplitudes of the entangled two-mode fields by balanced homodyne measurement. Finally, we verified the quantum entanglement of the two-mode squeezed light source based on the Duan and Simon inseparability criteria.
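The Duan-type verification can be sketched numerically. The conventions below are mine (vacuum variance 1/2 per quadrature, so the separability bound is 2), and the beamsplitter arrangement follows the description above: a p-squeezed beam and an x-squeezed beam mixed on a 50/50 splitter.

```python
import numpy as np

def duan_sum(x1, p1, x2, p2):
    # Duan inseparability combination: Var(x1 - x2) + Var(p1 + p2).
    # A value below 2 (vacuum variance 1/2 per quadrature) certifies
    # entanglement of the two output modes.
    return np.var(x1 - x2) + np.var(p1 + p2)

def two_mode_squeezed_samples(r, n, seed=0):
    # Gaussian quadrature samples: beam A squeezed in p, beam B in x,
    # combined on a 50/50 beamsplitter; r is the squeezing parameter.
    rng = np.random.default_rng(seed)
    xa = rng.normal(0.0, np.sqrt(np.exp(+2 * r) / 2), n)
    pa = rng.normal(0.0, np.sqrt(np.exp(-2 * r) / 2), n)
    xb = rng.normal(0.0, np.sqrt(np.exp(-2 * r) / 2), n)
    pb = rng.normal(0.0, np.sqrt(np.exp(+2 * r) / 2), n)
    x1, x2 = (xa + xb) / np.sqrt(2), (xa - xb) / np.sqrt(2)
    p1, p2 = (pa + pb) / np.sqrt(2), (pa - pb) / np.sqrt(2)
    return x1, p1, x2, p2
```

With this arrangement x1 - x2 = √2·xb and p1 + p2 = √2·pa, so the criterion evaluates to 2e^(-2r), well below 2 for any r > 0.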
Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging
NASA Astrophysics Data System (ADS)
Pian, Qi; Yao, Ruoyang; Sinsuebphon, Nattawut; Intes, Xavier
2017-07-01
Spectrally resolved fluorescence lifetime imaging and spatial multiplexing have offered information-content and collection-efficiency boosts in microscopy, but efficient implementations for macroscopic applications are still lacking. An imaging platform based on time-resolved structured light and hyperspectral single-pixel detection has been developed to perform quantitative macroscopic fluorescence lifetime imaging (MFLI) over a large field of view (FOV) and multiple spectral bands simultaneously. The system makes use of three digital micromirror device (DMD)-based spatial light modulators (SLMs) to generate spatial optical bases and reconstruct N × N images over 16 spectral channels with a time-resolved capability (∼40 ps temporal resolution) using fewer than N² optical measurements. We demonstrate the potential of this new imaging platform by quantitatively imaging near-infrared (NIR) Förster resonance energy transfer (FRET) both in vitro and in vivo. The technique is well suited for quantitative hyperspectral lifetime imaging with high sensitivity and paves the way for many important biomedical applications.
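The single-pixel reconstruction principle behind such a system can be sketched in a few lines. The sketch below is a hypothetical toy (full Hadamard basis, no compression, no time or spectral resolution), not the authors' reconstruction pipeline; in the compressive case, only a subset of the patterns would be measured and the image recovered with a sparsity prior.

```python
import numpy as np

# Toy single-pixel imaging sketch: project Hadamard patterns, record one
# bucket value per pattern, and reconstruct by the inverse (transpose)
# transform. Scene and sizes are invented for illustration.

def hadamard(n):
    """Sylvester Hadamard matrix of size n x n (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8                                       # image is N x N
x = np.zeros((N, N)); x[2:6, 3:5] = 1.0     # hypothetical scene
H = hadamard(N * N)                         # each row = one pattern
y = H @ x.ravel()                           # one bucket value per pattern
x_rec = (H.T @ y) / (N * N)                 # orthogonality: H.T @ H = N**2 * I
assert np.allclose(x_rec.reshape(N, N), x)
```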
Enhancement method for rendered images of home decoration based on SLIC superpixels
NASA Astrophysics Data System (ADS)
Dai, Yutong; Jiang, Xiaotong
2018-04-01
Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images of home decoration designs depend heavily on the renderer's parameters and the scene lighting, most rendered images in this industry require further optimization afterwards. To reduce this workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC-superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, these selected areas are merged with the entire image. Experimental results show that the proposed method enhances rendered images more effectively than some existing algorithms, and the proposed strategy proves adaptable, especially to images with pronounced bright regions.
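The enhance-bright-areas-separately idea can be illustrated with a minimal sketch. Here a plain luminance threshold stands in for SLIC superpixel selection, and the gamma values are illustrative, not the paper's learned parameters.

```python
import numpy as np

# Minimal sketch of "segment bright areas, enhance them separately,
# then merge". A brightness threshold replaces SLIC superpixels.

def enhance_rendered(img, bright_thresh=0.8):
    """img: float array in [0, 1]. Returns an enhanced copy."""
    out = img.copy()
    bright = img > bright_thresh           # stand-in for bright superpixels
    out[~bright] = img[~bright] ** 0.8     # brighten mid/dark regions
    out[bright] = img[bright] ** 1.5       # compress highlights independently
    return np.clip(out, 0.0, 1.0)

img = np.linspace(0.0, 1.0, 5)
out = enhance_rendered(img)
print(out)
```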
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, finger-vein images are often captured in poor quality, which makes finger-vein feature representation unreliable and impairs the accuracy of finger-vein recognition. In this paper, we first analyze the intrinsic factors causing finger-vein image degradation and then propose a simple but effective image restoration method based on scattering removal. To properly describe finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principles of light propagation in biological tissues. Based on the BOM, the light-scattering component is reliably estimated and removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing finger-vein image contrast and in improving finger-vein image matching accuracy. PMID:22737028
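The restoration step can be caricatured as follows. The paper's biological optical model is not reproduced here; this sketch simply approximates the scattering component as a heavy local mean and subtracts it, with `alpha` an assumed removal strength.

```python
import numpy as np

# Hedged sketch of scattering removal: estimate the scattered-light veil
# as a box-blurred copy of the raw image, subtract most of it, then
# contrast-stretch the remaining (direct) component.

def box_blur(img, k):
    """Separable k x k box blur with edge padding (a crude stand-in
    for the scattering point-spread estimate)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def descatter(img, k=7, alpha=0.9):
    scatter = box_blur(img, k)             # estimated scattered light
    direct = img - alpha * scatter         # remove most of the veil
    direct -= direct.min()                 # contrast stretch to [0, 1]
    return direct / max(float(np.ptp(direct)), 1e-9)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
out = descatter(img)
assert out.min() >= 0.0 and out.max() <= 1.0
```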
NASA Astrophysics Data System (ADS)
Beaudette, Kathy; Lo, William; Villiger, Martin; Shishkov, Milen; Godbout, Nicolas; Bouma, Brett E.; Boudoux, Caroline
2016-03-01
There is a strong clinical need for an optical coherence tomography (OCT) system capable of delivering concurrent coagulation light, enabling image-guided dynamic laser marking for targeted collection of biopsies, as opposed to random sampling, to reduce false-negative findings. Here, we present a system based on double-clad fiber (DCF) capable of delivering pulsed laser light through the inner cladding while performing OCT through the core. A previously clinically validated commercial OCT system (NVisionVLE, Ninepoint Medical) was adapted to enable in vivo esophageal image-guided dynamic laser marking. An optimized DCF coupler was implemented into the system to couple both modalities into the DCF. A DCF-based rotary joint was used to couple light to the spinning DCF-based catheter for helical scanning. DCF-based OCT catheters, providing a beam waist diameter of 62 μm at a working distance of 9.3 mm, for use with a 17-mm diameter balloon sheath, were used for ex vivo imaging of a swine esophagus. Imaging results using the DCF-based clinical system show image quality comparable with a conventional system, with minimal crosstalk-induced artifacts. To further optimize the DCF catheter optical design in order to achieve single-pulse marking, a Zemax model of the DCF output and its validation are presented.
NASA Astrophysics Data System (ADS)
Huh, Jae-Won; Yu, Byeong-Hun; Shin, Dong-Myung; Yoon, Tae-Hoon
2015-03-01
Recently, transparent displays have attracted much attention as next-generation display devices, and studies on transparent displays using organic light-emitting diodes (OLEDs) are especially active. However, since it is not possible to obtain black color with a transparent OLED, such displays suffer from poor visibility. This inevitable problem can be solved by using a light shutter. Light shutter technology can be divided into two types: light absorption and light scattering. However, a light shutter based on light absorption cannot block the background image perfectly, and a light shutter based on light scattering cannot provide black color. In this work we demonstrate a light shutter using two liquid crystal (LC) layers: a light absorption layer and a light scattering layer. To realize the light absorption layer and the light scattering layer, we use the planar state of a dye-doped chiral nematic LC (CNLC) cell and the focal-conic state of a long-pitch CNLC cell, respectively. The proposed light shutter device can block the background image perfectly and show black color. We expect that the proposed light shutter can increase the visibility of transparent displays.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
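The differential-photometry core that AIJ streamlines can be sketched with synthetic fluxes; the numbers, transit depth, and comparison stars below are invented for illustration.

```python
import numpy as np

# Time-series differential photometry sketch: divide the target star's
# aperture flux by the summed flux of comparison stars, frame by frame,
# to cancel shared atmospheric/instrumental variations.

n_frames = 100
t = np.arange(n_frames, dtype=float)
transparency = 1.0 + 0.05 * np.sin(t / 7.0)              # shared trend
target_true = np.where((t > 40) & (t < 60), 0.99, 1.0)   # fake 1% transit
target = 5000.0 * target_true * transparency
comps = np.stack([3000.0 * transparency, 4000.0 * transparency])

rel_flux = target / comps.sum(axis=0)     # trend cancels in the ratio
rel_flux /= np.median(rel_flux)           # normalize out-of-transit to 1

in_transit = rel_flux[(t > 40) & (t < 60)].mean()
print(round(in_transit, 3))               # → 0.99
```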
Hubble Identifies Source of Ultraviolet Light in an Old Galaxy
NASA Technical Reports Server (NTRS)
2000-01-01
This videotape comprises four segments: (1) a video zoom-in on galaxy M32 using ground-based images, (2) Hubble images of galaxy M32, (3) a ground-based color image of galaxies M31 and M32, and (4) black-and-white ground-based images of galaxy M32.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field adjustment. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
Photoacoustic design parameter optimization for deep tissue imaging by numerical simulation
NASA Astrophysics Data System (ADS)
Wang, Zhaohui; Ha, Seunghan; Kim, Kang
2012-02-01
A new light-illumination scheme for deep-tissue photoacoustic (PA) imaging, a light catcher, is proposed and evaluated by in silico simulation. A finite element (FE)-based numerical simulation model was developed for PA imaging in soft tissues. In this in silico simulation, built with a commercially available FE package (COMSOL Multiphysics, COMSOL Inc., USA), a short-pulsed laser point source (pulse length of 5 ns) was placed in water on the tissue surface. Overall, four sets of simulation models were integrated to describe the physical principles of PA imaging. Light energy transmission through background tissues from the laser source to the target tissue or contrast agent was described by the diffusion equation. The absorption of light energy and its conversion to heat by the target tissue or contrast agent was modeled using the bio-heat equation. The heat then induces stress and strain changes, and the resulting displacement of the target surface produces acoustic pressure. The created wide-band acoustic pressure propagates through the background tissues to the ultrasound detector, governed by the acoustic wave equation. Both optical and acoustic parameters of soft tissues, such as scattering, absorption, and attenuation, are incorporated in the tissue models. PA imaging performance with different design parameters of the laser source and energy delivery scheme was investigated. Laser light illumination into deep tissues can be significantly improved, with up to a 134.8% increase in fluence rate, by introducing a compact light catcher with a highly reflecting inner surface surrounding the light source. The parameters optimized through this simulation will guide the design of PA systems for deep-tissue imaging and help to form base protocols for experimental evaluations in vitro and in vivo.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution: the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image threefold horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
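The registration-then-fusion idea can be reduced to a toy shift-and-add sketch, in which the four low-resolution views are the exact 2x decimation phases of one high-resolution image, so the registration (the per-view shift) is known exactly; real sub-aperture images require the depth-based registration the paper describes.

```python
import numpy as np

# Toy shift-and-add super-resolution: interleave four half-pixel-offset
# low-resolution views back onto the fine grid.

hi = np.arange(64, dtype=float).reshape(8, 8)    # hypothetical scene
views = {(dy, dx): hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

sr = np.zeros_like(hi)
for (dy, dx), lo in views.items():   # registration = known offset (dy, dx)
    sr[dy::2, dx::2] = lo            # place each view on the fine grid

assert np.array_equal(sr, hi)        # exact here; real data only approximate
```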
Shedding Light on Nanomedicine
Tong, Rong
2012-01-01
Light is electromagnetic radiation that can convert its energy into different forms (e.g., heat, chemical energy, and acoustic waves). This property has been exploited in phototherapy (e.g., photothermal therapy and photodynamic therapy) and optical imaging (e.g., fluorescence imaging) for therapeutic and diagnostic purposes. Light-controlled therapies can provide minimally or non-invasive spatiotemporal control as well as deep tissue penetration. Nanotechnology provides numerous advantages, including selective targeting of tissues, prolongation of therapeutic effect, protection of active payloads, and improved therapeutic indices. This review explores the advances that nanotechnology can bring to light-based therapies and diagnostics, and vice versa, including photo-triggered systems, nanoparticles containing photoactive molecules, and nanoparticles that are themselves photoactive. Limitations of light-based therapies, such as photic injury and phototoxicity, will be discussed. PMID:22887840
Real-time single image dehazing based on dark channel prior theory and guided filtering
NASA Astrophysics Data System (ADS)
Zhang, Zan
2017-10-01
Images and videos captured outdoors on foggy days are seriously degraded. To restore degraded images taken in fog, and to overcome the traditional dark channel prior algorithm's problem of residual fog at edges, we propose a new dehazing method. We first find the fog region in the dark channel map using a quadtree to obtain an estimate of the transmittance. We then regard the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image downsampling are also used to improve the processing speed. Finally, the atmospheric light scattering model is used to restore the image. Extensive experiments show that the algorithm is effective, efficient, and widely applicable.
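A minimal dark-channel-prior dehazing sketch follows He et al.'s widely used formulation; the paper's quadtree and guided-filter refinements are omitted, and `omega`, `t0` are conventional illustrative values.

```python
import numpy as np

# Dark-channel-prior dehazing sketch: estimate the dark channel, pick
# the atmospheric light from the haziest pixels, estimate transmission,
# and invert the atmospheric scattering model I = J*t + A*(1 - t).

def dark_channel(img, k=3):
    """Per-pixel min over color channels and a k x k window."""
    mins = img.min(axis=2)
    pad = k // 2
    p = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    return np.min(
        [p[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)],
        axis=0,
    )

def dehaze(img, omega=0.95, t0=0.1):
    dark = dark_channel(img)
    # atmospheric light: mean color of the brightest dark-channel pixels
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 100):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A)   # transmission estimate
    t = np.maximum(t, t0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.full((8, 8, 3), 0.7); hazy[2:6, 2:6] = 0.3
out = dehaze(hazy)
assert out.shape == hazy.shape and out.min() >= 0.0 and out.max() <= 1.0
```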
Webster, Christie Ann; Koprinarov, Ivaylo; Germann, Stephen; Rowlands, J A
2008-03-01
New x-ray radiographic systems based on large-area flat-panel technology have revolutionized our capability to produce digital x-ray images. However, these imagers are extraordinarily expensive compared to the systems they are replacing. Hence, there is a need for a low-cost digital imaging system for general applications in radiology. A novel, potentially low-cost radiographic imaging system based on established technologies is proposed: the X-Ray Light Valve (XLV). This is a potentially high-quality digital x-ray detector made of a photoconducting layer and a liquid-crystal cell, physically coupled in a sandwich structure. Upon exposure to x rays, charge is collected on the surface of the photoconductor. This causes a change in the optical properties of the liquid-crystal cell, and a visible image is generated. Subsequently, it is digitized by a scanned optical imager. The image formation is based on controlled modulation of light from an external source. The operation and practical implementation of the XLV system are described. The potential performance of the complete system and issues related to sensitivity, spatial resolution, noise, and speed are discussed. The feasibility of clinical use of an XLV device based on amorphous selenium (a-Se) as the photoconductor and a reflective electrically controlled birefringence cell is analyzed. The results of our analysis indicate that the XLV can potentially be adapted to a wide variety of radiographic tasks.
Scene-based Shack-Hartmann wavefront sensor for light-sheet microscopy
NASA Astrophysics Data System (ADS)
Lawrence, Keelan; Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter
2018-02-01
Light-sheet microscopy is an ideal imaging modality for long-term live imaging in model organisms. However, significant optical aberrations can be present when imaging into an organism that is hundreds of microns or greater in size. To measure and correct optical aberrations, an adaptive optics system must be incorporated into the microscope. Many biological samples lack point sources that can be used as guide stars with conventional Shack-Hartmann wavefront sensors. We have developed a scene-based Shack-Hartmann wavefront sensor for measuring the optical aberrations in a light-sheet microscopy system that does not require a point-source and can measure the aberrations for different parts of the image. The sensor has 280 lenslets inside the pupil, creates an image from each lenslet with a 500 micron field of view and a resolution of 8 microns, and has a resolution for the wavefront gradient of 75 milliradians per lenslet. We demonstrate the system on both fluorescent bead samples and zebrafish embryos.
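The scene-based measurement reduces, per lenslet, to estimating the shift between a subimage and a reference subimage; a minimal integer-pixel FFT cross-correlation version (not the instrument's actual algorithm) looks like this:

```python
import numpy as np

# Estimate the image shift between two lenslet subimages via the peak
# of the FFT-based circular cross-correlation. The local wavefront
# slope is proportional to this shift.

def shift_between(ref, img):
    """Return the (dy, dx) circular shift mapping ref onto img."""
    xc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    wrap = lambda d, n: int(d - n) if d > n // 2 else int(d)
    return wrap(dy, ref.shape[0]), wrap(dx, ref.shape[1])

rng = np.random.default_rng(1)
ref = rng.random((32, 32))                   # reference lenslet subimage
img = np.roll(ref, (3, -2), axis=(0, 1))     # neighbouring subimage, shifted
print(shift_between(ref, img))               # → (3, -2)
```

Sub-pixel accuracy, as a real wavefront sensor needs, would require interpolating around the correlation peak.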
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography
NASA Astrophysics Data System (ADS)
Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori
2014-02-01
In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.
A light field microscope imaging spectrometer based on the microlens array
NASA Astrophysics Data System (ADS)
Yao, Yu-jia; Xu, Feng; Xia, Yin-xiang
2017-10-01
A new light-field microscope imaging spectrometer, composed of a microscope objective, a microlens array, and a spectrometry system, is designed in this paper. 5-D information (a 4-D light field and a 1-D spectrum) of the sample can be captured by the snapshot system in a single exposure, avoiding the motion blur and aberration caused by the scanning process of traditional imaging spectrometry. The microscope objective is used as the front group, while the microlens array serves as the rear group. The optical design of the system was simulated in Zemax, and the parameter-matching condition between the microscope objective and the microlens array was analyzed in detail during the simulation. The result simulated at the image plane is analyzed and discussed.
NASA Astrophysics Data System (ADS)
Shen, Xia; Bai, Yan-Feng; Qin, Tao; Han, Shen-Sheng
2008-11-01
Factors influencing the quality of lensless ghost imaging are investigated. According to the experimental results, we find that the imaging quality is determined by the number of independent sub-light-sources on the imaging plane of the reference arm. A qualitative picture based on advanced wave optics is presented to explain the physics behind the experimental phenomena. The present results will be helpful in providing a basis for improving the quality of ghost imaging systems in future work.
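For context, the standard ghost-image reconstruction correlates the reference-arm intensity patterns with the single-pixel bucket signal; a toy computational sketch (independent random patterns standing in for speckle, object and sizes invented) is:

```python
import numpy as np

# Computational ghost imaging sketch: recover the object from the
# covariance G(x) = <I(x) S> - <I(x)><S> between reference patterns
# I(x) and the bucket signal S.

rng = np.random.default_rng(2)
obj = np.zeros((8, 8)); obj[2:6, 3:5] = 1.0   # hypothetical transmissive object

n_patterns = 20000
patterns = rng.random((n_patterns, 8, 8))     # independent "speckle" frames
bucket = (patterns * obj).sum(axis=(1, 2))    # total light through the object

G = (patterns * bucket[:, None, None]).mean(axis=0) \
    - patterns.mean(axis=0) * bucket.mean()

# The covariance image peaks on object pixels.
assert G[obj == 1].mean() > G[obj == 0].mean()
```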
Kim, Dokyoon; Lee, Nohyun; Park, Yong Il; Hyeon, Taeghwan
2017-01-18
Several types of nanoparticle-based imaging probes have been developed to replace conventional luminescent probes. For luminescence imaging, near-infrared (NIR) probes are useful in that they allow deep tissue penetration and high spatial resolution as a result of reduced light absorption/scattering and negligible autofluorescence in biological media. They rely on either an anti-Stokes or a Stokes shift process to generate luminescence. For example, transition metal-doped semiconductor nanoparticles and lanthanide-doped inorganic nanoparticles have been demonstrated as anti-Stokes shift-based agents that absorb NIR light through two- or three-photon absorption process and upconversion process, respectively. On the other hand, quantum dots (QDs) and lanthanide-doped nanoparticles that emit in NIR-II range (∼1000 to ∼1350 nm) were suggested as promising Stokes shift-based imaging agents. In this topical review, we summarize and discuss the recent progress in the development of inorganic nanoparticle-based luminescence imaging probes working in NIR range.
Confocal non-line-of-sight imaging based on the light-cone transform
NASA Astrophysics Data System (ADS)
O’Toole, Matthew; Lindell, David B.; Wetzstein, Gordon
2018-03-01
How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
Confocal non-line-of-sight imaging based on the light-cone transform.
O'Toole, Matthew; Lindell, David B; Wetzstein, Gordon
2018-03-15
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
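In outline, the light-cone transform rests on the confocal image-formation model; loosely following the paper's setup (the symbols below are a hedged paraphrase, not a verbatim reproduction):

```latex
% Confocal NLOS measurement at scan point (x', y') and time t,
% for hidden albedo \rho over the volume \Omega:
\tau(x', y', t) = \iiint_{\Omega} \frac{\rho(x, y, z)}{r^{4}}\,
  \delta\!\left(2\sqrt{(x - x')^{2} + (y - y')^{2} + z^{2}} - tc\right)
  \,dx\,dy\,dz,
\qquad r = \sqrt{(x - x')^{2} + (y - y')^{2} + z^{2}} .
```

The change of variables z = √u, v = (tc/2)² turns this relation into a 3D convolution of a resampled measurement volume with a shift-invariant kernel, which can then be inverted with a single Wiener-filtered deconvolution; this is what makes the method so much cheaper than earlier back-projection reconstructions.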
Neuroradiology Using Secure Mobile Device Review.
Randhawa, Privia A; Morrish, William; Lysack, John T; Hu, William; Goyal, Mayank; Hill, Michael D
2016-04-05
Image review on computer-based workstations has made film-based review outdated. Despite advances in technology, the lack of portability of digital workstations creates an inherent disadvantage. As such, we sought to determine if the quality of image review on a handheld device is adequate for routine clinical use. Six CT/CTA cases and six MR/MRA cases were independently reviewed by three neuroradiologists in varying environments: high and low ambient light using a handheld device and on a traditional imaging workstation in ideal conditions. On first review (using a handheld device in high ambient light), a preliminary diagnosis for each case was made. Upon changes in review conditions, neuroradiologists were asked if any additional features were seen that changed their initial diagnoses. Reviewers were also asked to comment on overall clinical quality and if the handheld display was of acceptable quality for image review. After the initial CT review in high ambient light, additional findings were reported in 2 of 18 instances on subsequent reviews. Similarly, additional findings were identified in 4 of 18 instances after the initial MR review in high ambient lighting. Only one of these six additional findings contributed to the diagnosis made on the initial preliminary review. Use of a handheld device for image review is of adequate diagnostic quality based on image contrast, sharpness of structures, visible artefacts and overall display quality. Although reviewers were comfortable with using this technology, a handheld device with a larger screen may be diagnostically superior.
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan
2013-03-01
During clinical interventions, objective and quantitative information on tissue perfusion, oxygenation or temperature can be useful for the surgical strategy. Local (point) measurements give limited information and affected areas can easily be missed, so imaging large areas is required. In this study a LED-based multispectral imaging system (MSI, 17 different wavelengths, 370 nm-880 nm) and a thermal camera were applied during clinical interventions: tissue flap transplantations (ENT), a local anesthetic block, and open brain surgery (epileptic seizure). The images covered an area of 20 x 20 cm. Measurements in an (operating) room turned out to be more complicated than laboratory experiments due to light fluctuations, patient movement, and a limited angle of view. By constantly measuring the background light and using a white reference, light fluctuations and movement were corrected for. Oxygenation concentration images could be calculated and combined with the thermal images. The effectiveness of local anesthesia of a hand could be predicted at an early stage using the thermal camera, and the reperfusion of a transplanted skin flap could be imaged. During brain surgery, a temporarily hyper-perfused area was observed, probably related to an epileptic attack. A LED-based multispectral imaging system combined with thermal imaging provides complementary information on perfusion and oxygenation changes; these are promising techniques for real-time diagnostics during clinical interventions.
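The oxygenation maps rest on inverting a modified Beer-Lambert model per pixel; a two-wavelength sketch with invented extinction coefficients (not tabulated hemoglobin constants) shows the linear algebra involved:

```python
import numpy as np

# Two-wavelength oximetry sketch: attenuation changes at two wavelengths
# are inverted through the extinction coefficients of HbO2 and Hb
# (modified Beer-Lambert; path length absorbed into arbitrary units).

# rows: wavelengths; columns: [HbO2, Hb] extinction (illustrative values)
E = np.array([[1.0, 3.0],      # red-like band: Hb absorbs more
              [2.0, 1.8]])     # NIR-like band: HbO2 absorbs more

c_true = np.array([0.7, 0.3])  # true [HbO2], [Hb] for one pixel
dA = E @ c_true                # simulated attenuation changes
c_est = np.linalg.solve(E, dA) # per-pixel 2x2 inversion

sat = c_est[0] / c_est.sum()   # oxygen saturation estimate
print(round(float(sat), 2))    # → 0.7
```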
Live Cell Imaging and Measurements of Molecular Dynamics
Frigault, M.; Lacoste, J.; Swift, J.; Brown, C.
2010-01-01
Live cell microscopy is becoming widespread across all fields of the life sciences, as well as many areas of the physical sciences. To obtain live cell microscopy data accurately, the live specimens must be properly maintained on the imaging platform. In addition, the fluorescence light path must be optimized for efficient light transmission in order to reduce the intensity of excitation light impacting the living sample. With low incident light intensities, phototoxic effects should not alter the processes under study, allowing long-term visualization of viable living samples. Aspects of maintaining a suitable environment for the living sample, minimizing incident light, and maximizing detection efficiency will be presented for various fluorescence-based live cell instruments. Raster Image Correlation Spectroscopy (RICS) is a technique that uses the intensity fluctuations within laser scanning confocal images, together with the well-characterized scanning dynamics of the laser beam, to extract the dynamics, concentrations and clustering of fluorescent molecules within the cell. In addition, two-color cross-correlation RICS can be used to determine protein-protein interactions in living cells without the many technical difficulties encountered in FRET-based measurements. RICS is an ideal live cell technique for measuring cellular dynamics because the potentially damaging high-intensity laser bursts required for photobleaching recovery measurements are not needed; rather, low laser powers suitable for imaging can be used. The RICS theory will be presented along with examples of live cell applications.
NASA Astrophysics Data System (ADS)
Held, Marcel Philipp; Ley, Peer-Phillip; Lachmayer, Roland
2018-02-01
High-resolution vehicle headlamps represent a future-oriented technology that increases traffic safety and driving comfort in the dark. A further development of current matrix-beam headlamps are LED-based pixel-light systems, which enable additional lighting functions (e.g. the projection of navigation information onto the road) to be activated for given driving scenarios. The image generation is based on spatial light modulators (SLMs) such as digital micromirror devices (DMDs), liquid crystal displays (LCDs), liquid crystal on silicon (LCoS) devices, or LED arrays. For DMD-, LCD- and LCoS-based headlamps, the optical system uses illumination optics to ensure a precise illumination of the corresponding SLM. LED arrays, however, have to use imaging optics to project the LED dies onto an intermediate image plane and thus create the light distribution as a gapless juxtaposition of LED die images. Nevertheless, the lambertian radiation characteristics complicate the design of imaging optics with regard to a high-efficiency setup with maximum resolution and luminous flux. Simplifying the light source model and its emission characteristics makes it possible to determine a balanced setup between these parameters by using the étendue, and to calculate the maximum possible efficacy and luminous flux for each technology at an early design stage. We therefore present a calculation comparison of how simplifying the light source model affects étendue conservation and the setup design for two high-resolution technologies. The approach is evaluated and compared against simulation models to show the resulting deviation and its applicability.
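The étendue bound the authors rely on can be illustrated with a generic calculation. The formula for a flat lambertian emitter is standard; the numerical values below are arbitrary examples, not figures from the paper:

```python
import math

def etendue_lambertian(area_mm2, half_angle_deg, n=1.0):
    """Etendue G = pi * n^2 * A * sin^2(theta) of a flat emitter radiating
    into a cone of half-angle theta (full hemisphere: theta = 90 deg).
    Units: mm^2 * sr."""
    theta = math.radians(half_angle_deg)
    return math.pi * n**2 * area_mm2 * math.sin(theta) ** 2

# A 1 mm^2 LED die radiating into a full hemisphere (lambertian):
G_source = etendue_lambertian(1.0, 90.0)   # = pi mm^2 sr
# Hypothetical projection optics accepting a 14-degree half-angle from a
# 50 mm^2 intermediate image:
G_optics = etendue_lambertian(50.0, 14.0)
# Since etendue is conserved, the transferable luminous flux is limited by
# the smaller of the two values.
```

The design trade-off in the abstract follows directly: shrinking the acceptance angle or the image area shrinks the optics' étendue, and with it the maximum flux that can be projected.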
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization
Chong, Shau Poh; Zhang, Tingwei; Kho, Aaron; Bernucci, Marcel T.; Dubra, Alfredo; Srinivasan, Vivek J.
2018-01-01
Chromatic aberrations are an important design consideration in high resolution, high bandwidth, refractive imaging systems that use visible light. Here, we present a fiber-based spectral/Fourier domain, visible light OCT ophthalmoscope corrected for the average longitudinal chromatic aberration (LCA) of the human eye. Analysis of complex speckles from in vivo retinal images showed that achromatization resulted in a speckle autocorrelation function that was ~20% narrower in the axial direction, but unchanged in the transverse direction. In images from the improved, achromatized system, the separation between Bruch’s membrane (BM), the retinal pigment epithelium (RPE), and the outer segment tips clearly emerged across the entire 6.5 mm field-of-view, enabling segmentation and morphometry of BM and the RPE in a human subject. Finally, cross-sectional images depicted distinct inner retinal layers with high resolution. Thus, with chromatic aberration compensation, visible light OCT can achieve volume resolutions and retinal image quality that matches or exceeds ultrahigh resolution near-infrared OCT systems with no monochromatic aberration compensation. PMID:29675296
Enhanced light element imaging in atomic resolution scanning transmission electron microscopy.
Findlay, S D; Kohno, Y; Cardamone, L A; Ikuhara, Y; Shibata, N
2014-01-01
We show that an imaging mode based on taking the difference between signals recorded from the bright field (forward scattering region) in atomic resolution scanning transmission electron microscopy provides an enhancement of the detectability of light elements over existing techniques. In some instances this is an enhancement of the visibility of the light element columns relative to heavy element columns. In all cases explored it is an enhancement in the signal-to-noise ratio of the image at the light column site. The image formation mechanisms are explained and the technique is compared with earlier approaches. Experimental data, supported by simulation, are presented for imaging the oxygen columns in LaAlO₃. Case studies looking at imaging hydrogen columns in YH₂ and lithium columns in Al₃Li are also explored through simulation, particularly with respect to the dependence on defocus, probe-forming aperture angle and detector collection aperture angles. © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin
2015-07-01
Noninvasive medical imaging techniques have generated great interest and show high potential for the research and development of ocular imaging and follow-up procedures. It is well known that angle closure glaucoma is one of the major ocular diseases/conditions that cause blindness. The identification and treatment of this disease rely primarily on angle assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain images of the angle region of the eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources, configured at the distal end to enable imaging of the iridocorneal region inside the eye. With this proposed dual-modal probe, imaging is performed in light (white visible LED on) and dark (NIR laser light source alone) conditions, and the angle region is discernible in both cases. Imaging with NIR sources is of major significance for anterior chamber imaging, since it avoids pupil constriction due to bright light and thereby the artificial alteration of the anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma detection and diagnosis.
Borrelli, Enrico; Nittala, Muneeswar Gupta; Abdelfattah, Nizar Saleh; Lei, Jianqin; Hariri, Amir H; Shi, Yue; Fan, Wenying; Cozzi, Mariano; Sarao, Valentina; Lanzetta, Paolo; Staurenghi, Giovanni; Sadda, SriniVas R
2018-06-05
To systematically compare the intermodality and inter-reader agreement for two blue-light confocal fundus autofluorescence (FAF) systems. Thirty eyes (21 patients) with a diagnosis of geographic atrophy (GA) were enrolled. Eyes were imaged using two confocal blue-light FAF devices: (1) Spectralis device with a 488 nm excitation wavelength (488-FAF); (2) EIDON device with 450 nm excitation wavelength and the capability for 'colour' FAF imaging including both the individual red and green components of the emission spectrum. Furthermore, a third imaging modality (450-RF image) isolating and highlighting the red emission fluorescence component (REFC) was obtained and graded. Each image was graded by two readers to assess inter-reader variability and a single image for each modality was used to assess the intermodality variability. The 95% coefficient of repeatability (1.35 mm² for the 488-FAF-based grading, 8.13 mm² for the 450-FAF-based grading and 1.08 mm² for the 450-RF-based grading), the coefficient of variation (1.11 for 488-FAF, 2.05 for 450-FAF, 0.92 for 450-RF) and the intraclass correlation coefficient (0.994 for 488-FAF, 0.711 for 450-FAF, 0.997 for 450-RF) indicated that 450-FAF-based and 450-RF-based grading have the lowest and highest inter-reader agreements, respectively. The GA area was larger for 488-FAF images (median (IQR) 2.1 mm² (0.8-6.4 mm²)) than for 450-FAF images (median (IQR) 1.0 mm² (0.3-4.3 mm²); p<0.0001). There was no significant difference in lesion area measurement between 488-FAF-based and 450-RF-based grading (median (IQR) 2.6 mm² (0.8-6.8 mm²); p=1.0). The isolation of the REFC from the 450-FAF images allowed for a reproducible quantification of GA. This assessment had good comparability with that obtained with 488-FAF images. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Gas temperature and density measurements based on spectrally resolved Rayleigh-Brillouin scattering
NASA Technical Reports Server (NTRS)
Seasholtz, Richard G.; Lock, James A.
1992-01-01
The use of molecular Rayleigh scattering for measurements of gas density and temperature is evaluated. The technique used is based on the measurement of the spectrum of the scattered light, where both temperature and density are determined from the spectral shape. Planar imaging of Rayleigh scattering from air using a laser light sheet is evaluated for ambient conditions. The Cramer-Rao lower bounds for the shot-noise limited density and temperature measurement uncertainties are calculated for an ideal optical spectrum analyzer and for a planar mirror Fabry-Perot interferometer used in a static, imaging mode. With this technique, a single image of the Rayleigh scattered light can be analyzed to obtain density (or pressure) and temperature. Experimental results are presented for planar measurements taken in a heated air stream.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Yijia; Xu, Shuping; Xu, Weiqing, E-mail: xuwq@jlu.edu.cn
An integrated and portable Raman analyzer featuring an inverted probe fixed on a motor-driven adjustable optical module was designed for combination with a microfluidic system. It possesses a micro-imaging function. The inverted configuration is advantageous for locating and focusing on microfluidic channels. Unlike commercial micro-imaging Raman spectrometers that use a manually switchable light path, this analyzer adopts a dichroic beam splitter for both the imaging and signal collection light paths, which avoids movable parts and improves the integration and stability of the optics. Combined with the surface-enhanced Raman scattering technique, this portable Raman micro-analyzer is promising as a powerful tool for microfluidic analytics.
NASA Astrophysics Data System (ADS)
Gao, Shengkui; Mondal, Suman B.; Zhu, Nan; Liang, RongGuang; Achilefu, Samuel; Gruev, Viktor
2015-01-01
Near infrared (NIR) fluorescence imaging has shown great potential for various clinical procedures, including intraoperative image guidance. However, existing NIR fluorescence imaging systems either have a large footprint or are handheld, which limits their usage in intraoperative applications. We present a compact NIR fluorescence imaging system (NFIS) with an image overlay solution based on threshold detection, which can be easily integrated with a goggle display system for intraoperative guidance. The proposed NFIS achieves compactness, light weight, hands-free operation, high-precision superimposition, and a real-time frame rate. In addition, the miniature and ultra-lightweight light-emitting diode tracking pod is easy to incorporate with NIR fluorescence imaging. Based on experimental evaluation, the proposed NFIS solution has a lower detection limit of 25 nM of indocyanine green at 27 fps and realizes a highly precise image overlay of NIR and visible images of mice in vivo. The overlay error is limited within a 2-mm scale at a 65-cm working distance, which is highly reliable for clinical study and surgical use.
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the case of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
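As a concrete sketch of the LBP texture features used for both modalities, here is a minimal 8-neighbor variant; the paper's partitioned histograms over face sub-blocks are reduced to a single whole-image histogram for brevity:

```python
import numpy as np

def lbp_8neighbors(image):
    """Basic 8-neighbor Local Binary Pattern: each interior pixel is encoded
    as an 8-bit number whose bits mark which neighbors are >= the center."""
    img = np.asarray(image, dtype=float)
    c = img[1:-1, 1:-1]  # center pixels
    # neighbor offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(image, bins=256):
    """Normalized LBP code histogram, usable as a texture feature vector."""
    h, _ = np.histogram(lbp_8neighbors(image), bins=bins, range=(0, bins))
    return h / h.sum()

demo_codes = lbp_8neighbors(np.ones((5, 5)))   # flat patch: every bit set
```

In a full pipeline such histograms, computed per sub-block and concatenated, would be the feature vectors fed to the per-modality classifiers before decision-level fusion.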
Devices, systems, and methods for imaging
Appleby, David; Fraser, Iain; Watson, Scott
2008-04-15
Certain exemplary embodiments comprise a system, which can comprise an imaging plate. The imaging plate can be exposable by an x-ray source. The imaging plate can be configured to be used in digital radiographic imaging. The imaging plate can comprise a phosphor-based image storage device configured to convert an image stored therein into light.
Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei
2016-08-16
The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light sheet fluorescence microscopy have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light sheet fluorescent microscopy with the advantage of the simultaneous imaging of multiple axial planes and reduced number of scans required to image the whole sample volume.
NASA Astrophysics Data System (ADS)
Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Sa-ngiamsak, Chiranut
2013-06-01
Based on our previous work on light-penetration-based silkworm gender identification, we find that unwanted optical noise scattered from the area surrounding the silkworm pupa and the transparent support is sometimes analyzed and misinterpreted, leading to incorrect silkworm gender identification. To alleviate this issue, we place a small rectangular hole in a transparent support so that it not only helps the user precisely place the silkworm pupa but also functions as a region of interest (ROI) for blocking unwanted optical noise and for roughly locating the abdomen region in the image for ease of image processing. Apart from the external ROI, we also assign a smaller ROI inside the image in order to remove strong scattered light from the edges of the external ROI and at the same time speed up our image processing operations. With only the external ROI in place, our experiment shows a measured 86% total accuracy in identifying the gender of 120 silkworm pupae, with a measured average processing time of 38 ms. Combining the external ROI and the image ROI improves the total accuracy of silkworm gender identification to 95%, with a faster measured processing time of 18 ms.
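The double-ROI idea (an external ROI given by the physical hole, plus a smaller internal image ROI that trims the external ROI's bright edges) amounts to a crop followed by a margin crop. A minimal sketch with hypothetical coordinates, not the paper's actual geometry:

```python
import numpy as np

def apply_double_roi(image, outer, margin):
    """Crop to an outer ROI (y0, y1, x0, x1), then shrink by `margin` pixels
    on every side to form the inner image ROI that discards edge scattering.
    `margin` must be >= 1."""
    y0, y1, x0, x1 = outer
    cropped = image[y0:y1, x0:x1]      # external ROI from the physical hole
    return cropped[margin:-margin, margin:-margin]  # inner image ROI

img = np.arange(100).reshape(10, 10)   # stand-in for a captured frame
inner = apply_double_roi(img, outer=(1, 9, 1, 9), margin=2)
```

Besides suppressing edge scattering, the smaller array also explains the reported speedup: subsequent processing touches fewer pixels.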
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2015-03-01
Quantification of the optical properties of tissues and blood by noninvasive photoacoustic (PA) imaging may provide useful information for the screening and early diagnosis of diseases. A linearized 2D image reconstruction algorithm based on the PA wave equation and the photon diffusion equation (PDE) can reconstruct the image at a computational cost smaller than that of a method based on the 3D radiative transfer equation. However, the reconstructed image is affected by the differences between the actual and assumed light propagation. In this study, the quantitative capability of a linearized 2D image reconstruction was investigated and discussed through numerical simulations and a phantom experiment. The numerical simulations used a 3D Monte Carlo (MC) simulation and a 2D finite element calculation of the PDE. In the phantom experiment, the PA pressures were acquired by a probe that had an optical fiber for illumination and a ring-shaped P(VDF-TrFE) ultrasound transducer. The measured object was made of Intralipid and indocyanine green. The numerical simulations showed that the linearized image reconstruction method recovered the absorption coefficients while alleviating the dependency of the PA amplitude on the depth of the photon absorber. The linearized image reconstruction method worked effectively under the light propagation calculated by the 3D MC simulation, although some errors occurred. The phantom experiments validated the results of the numerical simulations.
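A linearized reconstruction of the kind described ultimately inverts a sensitivity (Jacobian) matrix relating absorption perturbations to measured PA pressures. A generic Tikhonov-regularized least-squares sketch; the paper's actual forward model derived from the PDE is not reproduced here:

```python
import numpy as np

def linearized_reconstruction(J, y, alpha):
    """Solve the linearized inverse problem y = J x for x with Tikhonov
    regularization: x = (J^T J + alpha I)^-1 J^T y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ y)

# Synthetic check: a well-conditioned model with negligible regularization
# should recover the true coefficients.
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 5))      # hypothetical sensitivity matrix
x_true = np.arange(1.0, 6.0)      # hypothetical absorption perturbations
y = J @ x_true                    # noise-free "measurements"
x_hat = linearized_reconstruction(J, y, alpha=1e-8)
```

In practice the regularization weight alpha trades noise amplification against resolution, and the quality of J (the assumed light propagation) is exactly what the paper's MC-vs-PDE comparison probes.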
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we report on a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak light conditions. To compare the two methods, we discuss image quality based on experimental results and numerical analysis. To detect images with the HT method, we illuminate Hadamard-pattern masks and recover the image by an orthogonal transform. The GI method, on the other hand, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the HT method processes images faster than GI, the GI method has an advantage for detection under weak light conditions. The essential difference between the HT and GI methods is discussed in terms of the reconstruction process. Finally, we also show a typical application of single-pixel imaging to hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detect hyperspectral images in the range from 1545 to 1555 nm at 0.01 nm resolution.
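The HT reconstruction step can be sketched with a noise-free simulation: each row of a Hadamard matrix acts as one illumination pattern, the bucket detector records one scalar per pattern, and the orthogonality H H^T = n I inverts the measurement. (Real systems shift the +/-1 patterns to 0/1 intensities; that detail is omitted here.)

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_hadamard(scene):
    """Simulate single-pixel imaging: project each Hadamard pattern onto the
    scene, record one bucket-detector value per pattern, then invert."""
    x = np.asarray(scene, dtype=float).ravel()
    n = x.size
    H = hadamard(n)                   # each row is one illumination pattern
    y = H @ x                         # one scalar measurement per pattern
    return (H.T @ y / n).reshape(np.asarray(scene).shape)  # H^-1 = H^T / n

scene = np.arange(16, dtype=float).reshape(4, 4)
recon = single_pixel_hadamard(scene)
```

The orthogonal inverse is a deterministic transform, which is why HT reconstruction is fast, whereas GI averages correlations over random patterns and trades speed for robustness at low light.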
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant to a wide-angle camera. The telephoto camera can capture a high-accuracy image of an object of interest in the field of view of the wide-angle camera, providing enough information for recognition when the resolution of the traffic sign in the wide-angle image is low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle camera and the telephoto camera. Moreover, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-accuracy image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented based on the proposed system. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
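The paper's specific color transformation is not reproduced in the abstract; the classic textbook example of the same idea is normalized RGB (chromaticity), which cancels any common illumination scale factor:

```python
import numpy as np

def normalized_rgb(image):
    """Chromaticity coordinates r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B).
    Scaling all three channels by a common brightness factor leaves the
    result unchanged, making it invariant to global illumination changes."""
    img = np.asarray(image, dtype=float)
    s = img.sum(axis=-1, keepdims=True)
    return np.divide(img, s, out=np.zeros_like(img), where=s > 0)

rng = np.random.default_rng(3)
scene = rng.uniform(1.0, 255.0, size=(8, 8, 3))   # hypothetical RGB frame
dimmed = 0.4 * scene                              # global brightness change
```

Any transform with this cancellation property lets a detector trained in one lighting condition generalize to brighter or darker scenes, which is the role the proposed transformation plays in the cascade detector.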
Data-nonintrusive photonics-based credit card verifier with a low false rejection rate.
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2010-02-10
We propose and experimentally demonstrate a noninvasive credit card verifier with a low false rejection rate (FRR). Our key idea is based on the use of three broadband light sources in our data-nonintrusive photonics-based credit card verifier structure, where spectral components of the embossed hologram images are registered as red, green, and blue. In this case, nine distinguishable variables are generated for a feed-forward neural network (FFNN). In addition, we investigate the center of mass of the image histogram projected onto the x axis (I(color)), making our system more tolerant of intensity fluctuations of the light source. We also reduce the unwanted signals in each hologram image by simply dividing the hologram image into three zones and then calculating their corresponding I(color) values for the red, green, and blue bands. With our proposed concepts, we implement a field test prototype in which three broadband white-light light-emitting diodes (LEDs), a two-dimensional digital color camera, and a four-layer FFNN are used. Based on 249 genuine credit cards and 258 counterfeit credit cards, we find that the average difference in I(color) values between genuine and counterfeit credit cards is improved by 1.5 times and up to 13.7 times. In this case, we can effectively verify credit cards with a very low FRR of 0.79%.
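The I(color) descriptor, the center of mass of an intensity histogram projected onto the intensity axis, is a one-line statistic per color band. A minimal sketch (the zone splitting and FFNN stages of the paper are omitted):

```python
import numpy as np

def histogram_center_of_mass(channel, bins=256):
    """Center of mass of a channel's intensity histogram along the intensity
    axis: sum(center_i * h_i) / sum(h_i)."""
    h, edges = np.histogram(channel, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2.0
    return float((centers * h).sum() / h.sum())

# One descriptor per color band of a hypothetical RGB hologram image:
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(32, 32, 3))
i_color = [histogram_center_of_mass(rgb[..., k]) for k in range(3)]
```

Because the statistic depends on where the histogram mass sits rather than on its absolute height, moderate fluctuations in source intensity shift it far less than raw pixel sums, which is the tolerance claimed in the abstract.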
NASA Astrophysics Data System (ADS)
Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui
2018-05-01
A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.
Fast photoacoustic imaging system based on 320-element linear transducer array.
Yin, Bangzheng; Xing, Da; Wang, Yi; Zeng, Yaguang; Tan, Yi; Chen, Qun
2004-04-07
A fast photoacoustic (PA) imaging system based on a 320-element linear transducer array was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorbing targets. Signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing target. By utilizing the multi-element linear transducer array and a phase-controlled imaging algorithm, we can thus acquire PA tomography more efficiently than with other existing technology and algorithms. The methodology and equipment provide a rapid and reliable approach to PA imaging with potential applications in noninvasive imaging and clinical diagnosis.
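Phase-controlled summation over array elements is, in its simplest form, the delay-and-sum beamformer: each element's trace is read out at the acoustic time of flight from the image pixel and the samples are summed. A generic single-pixel sketch, not the authors' 320-element implementation:

```python
import numpy as np

def delay_and_sum(signals, element_x, fs, c, pixel):
    """Delay-and-sum beamforming for one image pixel.

    signals   : (n_elements, n_samples) recorded PA time traces
    element_x : (n_elements,) lateral element positions [m] (array at z = 0)
    fs        : sampling rate [Hz];  c : speed of sound [m/s]
    pixel     : (x, z) pixel position [m]
    """
    px, pz = pixel
    value = 0.0
    for trace, ex in zip(signals, element_x):
        t = np.hypot(px - ex, pz) / c      # time of flight to this element
        idx = int(round(t * fs))           # nearest recorded sample
        if 0 <= idx < trace.size:
            value += trace[idx]            # coherent sum across elements
    return value
```

Signals from a true source position add coherently across elements, while other pixels accumulate incoherent contributions; sweeping the pixel over a grid yields the absorption map.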
Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten
2015-09-24
A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. Copyright © 2015 Elsevier B.V. All rights reserved.
LED-based endoscopic light source for spectral imaging
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.
2016-03-01
Colorectal cancer is the 3rd leading cancer in death rates in the United States [1]. The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods are being tested to replace WLE, for example narrow-band imaging and autofluorescence imaging [2]. However, these methods do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for spectral imaging in real time while increasing sensitivity and specificity. To do so, an Olympus CLK-4 light source was taken and its lamp and electronics were replaced with 16 LEDs and new circuitry, which allows control of the power and intensity of the LEDs. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source, and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features. LED testing demonstrated the ability to control each wavelength's intensity, and the intensity measured over the voltage range provides the information needed to couple the camera for imaging. Overall the project was successful; the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future work will connect the camera and test the imaging process.
Simulation of a fast diffuse optical tomography system based on radiative transfer equation
NASA Astrophysics Data System (ADS)
Motevalli, S. M.; Payani, A.
2016-12-01
Studies show that near-infrared (NIR) light (light with wavelength between 700 nm and 1300 nm) undergoes two interactions, absorption and scattering, when it penetrates a tissue. Since scattering is the predominant interaction, the calculation of the light distribution in the tissue and the image reconstruction of the absorption and scattering coefficients are very complicated. Analytical and numerical methods, such as the radiative transfer equation and the Monte Carlo method, have been used to simulate light penetration in tissue. Recently, several groups have worked to develop diffuse optical tomography systems. In these systems, NIR light penetrates and passes through the tissue, and the light exiting the tissue is measured by NIR detectors placed around it. These data are collected from all the detectors and transferred to the computational parts (including hardware and software), which reconstruct a cross-sectional image of the tissue. In this paper, the results of the simulation of a diffuse optical tomography system are presented. The simulation involves two stages: (a) simulation of the forward problem (light penetration in the tissue), performed by solving the diffusion approximation equation in the stationary state using FEM; and (b) simulation of the inverse problem (image reconstruction), performed with the Broyden quasi-Newton optimization algorithm. This method of image reconstruction is faster than other Newton-based optimization algorithms, such as the Levenberg-Marquardt method.
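The inverse-problem solver named here, Broyden's quasi-Newton method, avoids recomputing the Jacobian at every iteration by applying rank-one updates, which is what makes it cheaper per step than Levenberg-Marquardt. A minimal sketch on a toy nonlinear system, not on the tomographic forward model:

```python
import numpy as np

def broyden_solve(f, x0, max_iter=50, tol=1e-10):
    """Broyden's 'good' method: solve f(x) = 0, replacing the exact Jacobian
    with an approximation refined by a rank-one update at each step."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                      # initial Jacobian approximation
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)         # quasi-Newton step
        x = x + s
        fx_new = f(x)
        y = fx_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one update
        fx = fx_new
    return x

def circle_system(v):
    # Toy system: the unit circle intersected with the line x0 = x1.
    return np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])

root = broyden_solve(circle_system, np.array([0.8, 0.6]))
```

In a DOT reconstruction, f would be the mismatch between the FEM forward solution and the detector readings, and x the grid of optical coefficients; each Jacobian evaluation saved is one fewer expensive forward solve.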
Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2014-05-01
An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an entropy-based contrast evaluation model proposed specifically for surgical applications. Compared with the neutral-white-LED-based and traditional algorithm-based image enhancing methods, the illumination-based enhancing method shows better contrast-enhancement performance, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative to expensive visual-facility medical instruments.
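The entropy-based contrast score referred to here is, in its simplest form, the Shannon entropy of the gray-level histogram; the paper's exact evaluation model may include additional terms. A minimal sketch:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy of the gray-level histogram, H = -sum(p * log2(p)).
    A well-spread histogram (rich gray-level content) scores high; a constant
    image scores zero."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
flat = np.full((64, 64), 128)                 # constant image
noisy = rng.integers(0, 256, size=(64, 64))   # near-uniform histogram
e_flat = image_entropy(flat)
e_noisy = image_entropy(noisy)
```

Under such a score, an illumination SPD that spreads the sample's gray levels over more of the dynamic range is rated as higher contrast, which is how candidate SPDs can be ranked automatically.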
LED light design method for high contrast and uniform illumination imaging in machine vision.
Wu, Xiaojun; Gao, Guangming
2018-03-01
In machine vision, illumination is critical in determining the complexity of inspection algorithms. Proper lighting yields clear, sharp images with high contrast and low noise between the object of interest and the background, which helps the target be located, measured, or inspected. In contrast to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast optimization modeling and a uniform illumination technology for non-normal incidence (UINI). The contrast optimization model is built on surface reflection characteristics (e.g., roughness, refractive index, and light direction) to maximize the contrast between the features of interest and the background. The UINI preserves the uniformity of the lighting optimized by the contrast model. Simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with high contrast and uniformity, which is instructive for the design of LED illumination systems in machine vision.
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
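The deep residual CNN the abstract relies on is built from identity-shortcut blocks. A minimal sketch of the residual idea only (the paper's actual architecture is not reproduced; the toy "layer" is hypothetical):

```python
def residual_block(x, transform):
    """Identity-shortcut residual unit y = F(x) + x. When the learned
    transform F outputs near-zero values, the block defaults to the
    identity, which is what lets very deep residual stacks train without
    degrading - useful for eye images with varied lighting and resolution."""
    fx = transform(x)
    return [f + xi for f, xi in zip(fx, x)]

x = [0.5, -1.0, 2.0]
identity_out = residual_block(x, lambda v: [0.0] * len(v))  # F(x) = 0 -> y = x
shifted_out  = residual_block(x, lambda v: [1.0] * len(v))  # F(x) = 1 -> y = x + 1
```

In a real network `transform` would be a stack of convolutions, batch normalization, and ReLU; the shortcut addition is the only structural element shown here.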
Design Through Integration of On-Board Calibration Device with Imaging Spectroscopy Instruments
NASA Technical Reports Server (NTRS)
Stange, Michael
2012-01-01
The main purpose of the Airborne Visible and Infrared Imaging Spectroscopy (AVIRIS) project is to "identify, measure, and monitor constituents of the Earth's surface and atmosphere based on molecular absorption and particle scattering signatures." The project designs, builds, and tests imaging spectroscopy instruments that use On-Board Calibration devices (OBCs) to check the accuracy of the data collected by the spectrometers. The imaging instrument records the spectral signatures of light collected during flight. To verify that the data are correct, the OBC shines light that is collected by the imaging spectrometer and compared against previous calibration data to track spectral response changes in the instrument. Calibration based on the OBC readings is then applied to the spectral data to ensure accuracy.
Pc-based car license plate reading
NASA Astrophysics Data System (ADS)
Tanabe, Katsuyoshi; Marubayashi, Eisaku; Kawashima, Harumi; Nakanishi, Tadashi; Shio, Akio
1994-03-01
A PC-based car license plate recognition system has been developed. The system recognizes Chinese characters and Japanese phonetic hiragana characters as well as the six digits on Japanese license plates. The system consists of a CCD camera, vehicle sensors, a strobe unit, a monitoring center, and an i486-based PC. The PC includes in its extension slots a vehicle detector board, a strobe emitter board, and an image grabber board. When a passing vehicle is detected by the vehicle sensors, the strobe emits a pulse of light, synchronized with the moment the vehicle image is frozen on the image grabber board. The recognition process is composed of three steps: image thresholding, character region extraction, and matching-based character recognition. The recognition software can handle obscured characters. Experimental results for hundreds of outdoor images showed high recognition performance with relatively short processing times, confirming that the system is applicable to a wide variety of applications such as automatic vehicle identification and travel time measurement.
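The three recognition steps can be sketched on a toy binary "plate"; the glyphs and templates below are illustrative stand-ins, not the system's Japanese character set, and real systems add skew correction and the obscured-character handling the abstract mentions:

```python
def hamming(a, b):
    """Pixel-wise mismatch count between two equal-size binary glyphs."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def recognize(image, templates, thresh=128):
    """Sketch of the pipeline's three steps: (1) image thresholding,
    (2) character region extraction via a vertical projection,
    (3) matching-based recognition against glyph templates."""
    # 1) Threshold: dark strokes on a light plate become 1s.
    binary = [[1 if px < thresh else 0 for px in row] for row in image]
    # 2) Split characters where the column projection drops to zero.
    width = len(binary[0])
    cols = [sum(row[c] for row in binary) for c in range(width)]
    regions, start = [], None
    for c, v in enumerate(cols + [0]):   # sentinel flushes the last region
        if v and start is None:
            start = c
        elif not v and start is not None:
            regions.append((start, c))
            start = None
    # 3) Nearest template by Hamming distance.
    result = []
    for a, b in regions:
        glyph = tuple(tuple(row[a:b]) for row in binary)
        result.append(min(templates, key=lambda k: hamming(glyph, templates[k])))
    return "".join(result)

TEMPLATES = {  # hypothetical 3x3 glyphs
    "H": ((1, 0, 1), (1, 1, 1), (1, 0, 1)),
    "C": ((1, 1, 1), (1, 0, 0), (1, 1, 1)),
}
plate = [  # "H", one blank column, then "C" (0 = dark stroke, 255 = background)
    [0, 255, 0, 255, 0, 0, 0],
    [0, 0, 0, 255, 0, 255, 255],
    [0, 255, 0, 255, 0, 0, 0],
]
```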
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
Improving the uniformity of luminous system in radial imaging capsule endoscope system
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De
2013-02-01
This study concerns the illumination system in a radial imaging capsule endoscope (RICE). Uniformly illuminating the object is difficult because the intensity of the light from the light-emitting diodes (LEDs) varies with angular displacement. Light emitted from the surface of an LED first encounters the cone mirror, from which it is reflected before passing directly through the lenses to the complementary metal oxide semiconductor (CMOS) sensor. Light that is strongly reflected from the transparent view window (TVW) propagates back to the cone mirror, to be reflected again through the lenses to the CMOS sensor. These two phenomena cause overblooming on the image plane, which makes the illumination nonuniform and consequently reduces image quality. In this work, optical design software was used to construct a photometric model for the optimal design of the LED illumination system. Based on the original RICE model, this paper proposes an optimal design that increases the illumination uniformity in the RICE from its original value of 0.128 to 0.69, greatly improving light uniformity.
NASA Astrophysics Data System (ADS)
Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela
2017-05-01
Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We review two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). Like the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale its chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR impractical, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.
NASA Astrophysics Data System (ADS)
Ulianova, Onega; Subbotina, Irina; Filonova, Nadezhda; Zaitsev, Sergey; Saltykov, Yury; Polyanina, Tatiana; Lyapina, Anna; Ulyanov, Sergey; Larionova, Olga; Feodorova, Valentina
2018-04-01
The t-LASCA and s-LASCA imaging methods have been adapted, for the first time, to the problem of monitoring blood microcirculation in a chicken embryo model. A set-up for LASCA imaging of chicken embryos was assembled. Disorders of blood microcirculation in embryonated chicken eggs infected by Chlamydia trachomatis were detected. The speckle-imaging technique is compared with white-light ovoscopy and a new method of laser ovoscopy based on the scattering of coherent light, and the advantages of LASCA imaging for early detection of the development of the chlamydial agent are demonstrated.
Anti-glare LED lamps with adjustable illumination light field.
Chen, Yung-Sheng; Lin, Chung-Yi; Yeh, Chun-Ming; Kuo, Chie-Tong; Hsu, Chih-Wei; Wang, Hsiang-Chen
2014-03-10
We introduce a type of LED light-gauge steel frame lamp with an adjustable illumination light field that does not require a diffusion plate. Based on the Monte Carlo ray-tracing method, this lamp achieves a good glare rating (GR) of 17.5 at 3050 lm. Compared with the traditional LED light-gauge steel frame lamp (without a diffusion plate), the new type has a low GR. The adjustability of the illumination light field can also mitigate the zebra effect caused by an inadequate illumination light field. Meanwhile, we adopt retinal image analysis to discuss the influence of GR on vision: a high GR introduces stray light into the retinal image, which reduces visual clarity and hastens eye fatigue.
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
Using compressive measurement to obtain images at ultra low-light-level
NASA Astrophysics Data System (ADS)
Ke, Jun; Wei, Ping
2013-08-01
In this paper, a compressive imaging architecture is used for ultra-low-light-level imaging. In such a system, features, instead of object pixels, are imaged onto a photocathode and then magnified by an image intensifier. This increases the system's measurement SNR significantly, so the new system can image objects at ultra-low light levels where a conventional system has difficulty. PCA projection is used to collect feature measurements in this work. A linear Wiener operator and a nonlinear method based on the FoE model are used to reconstruct objects. Root mean square error (RMSE) is used to quantify reconstruction quality.
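The linear Wiener reconstruction from a feature measurement can be sketched for the single-feature case, where the matrix inverse reduces to a scalar; the PCA vector, covariance, and test object below are illustrative values, not from the paper:

```python
def wiener_reconstruct(P, Rx, noise_var, y):
    """Linear Wiener (LMMSE) estimate x_hat = Rx P^T (P Rx P^T + s)^(-1) y
    for one compressive feature measurement y = P.x + n, with prior object
    covariance Rx and noise variance s. The paper pairs this operator with
    PCA projection features."""
    n = len(Rx)
    RxPt = [sum(Rx[i][j] * P[j] for j in range(n)) for i in range(n)]  # Rx P^T
    s = sum(P[i] * RxPt[i] for i in range(n)) + noise_var              # P Rx P^T + s
    return [v * y / s for v in RxPt]

P = [0.6, 0.8]                    # a unit-norm PCA projection vector (assumed)
Rx = [[1.0, 0.0], [0.0, 1.0]]     # identity prior covariance (assumed)
y = 0.6 * 3.0 + 0.8 * 4.0         # noiseless feature measurement of x = [3, 4]
x_hat = wiener_reconstruct(P, Rx, 0.0, y)
```

Because the toy object lies exactly along the PCA direction and the measurement is noiseless, the estimate recovers it; with noise, the `noise_var` term shrinks the estimate toward the prior mean.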
Development of a diffraction imaging flow cytometer
Jacobs, Kenneth M.; Lu, Jun Q.
2013-01-01
Diffraction images record the angle-resolved distribution of light scattered from a particle excited by coherent light and can correlate highly with the particle's 3D morphology. We present a jet-in-fluid flow chamber design for acquiring clear diffraction images in a laminar flow. Diffraction images of polystyrene spheres of different diameters were acquired and found to correlate highly with images calculated from Mie theory. Fast Fourier transform analysis indicated that the measured images can be used to extract sphere diameter values. These results demonstrate the significant potential of high-throughput diffraction imaging flow cytometry for extracting 3D morphological features of cells. PMID:19794790
Multi-spectral wide-field imaging for PplX PDT dosimetry of skin (Conference Presentation)
NASA Astrophysics Data System (ADS)
LaRochelle, Ethan; Chun, Hayden H.; Hasan, Tayyaba; Pogue, Brian W.; Maytin, Edward V.; Chapman, Michael S.; Davis, Scott C.
2016-03-01
Actinic keratoses (AK) are common pre-cancerous lesions associated with sun-damaged skin. While generally benign, the condition can progress to squamous cell carcinoma (SCC) and is a particular concern for immunosuppressed patients, who are susceptible to uncontrolled AK and SCC. Among the FDA-approved treatment options for AK, ALA-based photodynamic therapy is unique in that it is non-scarring and can be repeated on the same area. However, response rates vary widely due to variations in drug and light delivery, PpIX production, and tissue oxygenation. Thus, developing modalities to predict response is critical to enable patient-specific, treatment-enhancing interventions. To that end, we have developed a wide-field, spectrally resolved fluorescence imaging system capable of red and blue light excitation. While blue light excites PpIX efficiently, poor photon penetration limits the image content to superficial layers of skin. Red light excitation, on the other hand, can reveal fluorescence originating deeper in tissue, which may provide relevant information about PpIX distribution. Our instrument illuminates the skin via a fiber-based ring illuminator, into which a white light source and blue and red laser diodes are sequentially coupled. Light emitted from the tissue passes through a high-speed filter wheel with filters selected to resolve the PpIX emission spectrum. This configuration enables spectral fitting to decouple PpIX fluorescence from background signal, improving sensitivity to low concentrations of PpIX. Images of tissue-simulating phantoms and animal models confirm a linear response to PpIX and the ability to image, with red excitation, sub-surface PpIX that is inaccessible with blue light.
SU-G-IeP4-06: Feasibility of External Beam Treatment Field Verification Using Cherenkov Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, P; Na, Y; Wuu, C
2016-06-15
Purpose: Cherenkov light emission has been shown to correlate with ionizing radiation (IR) dose delivery in solid tissue. In order to properly correlate Cherenkov light images with real-time dose delivery in a patient, we must account for geometric and intensity distortions arising from the observation angle, as well as the effect of monitor units (MU) and field size on Cherenkov light emission. To test the feasibility of treatment field verification, we first focused on Cherenkov light emission efficiency as a function of MU and known field size (FS). Methods: Cherenkov light emission was captured using a PI-MAX4 intensified charge-coupled device (ICCD) system (Princeton Instruments), positioned at a fixed angle of 40° relative to the beam central axis. A Varian TrueBeam linear accelerator (linac) was operated at 6 MV and 600 MU/min to deliver an anterior-posterior beam to a 5 cm thick block phantom positioned at 100 cm source-to-surface distance (SSD). FS of 10×10, 5×5, and 2×2 cm² were used. Before beam delivery, projected light-field images were acquired, ensuring that geometric distortions were consistent when measuring Cherenkov field discrepancies. Cherenkov image acquisition was triggered by the linac target current. 500 frames were acquired for each FS. Composite images were created through summation of frames and background subtraction. MU per image was calculated based on the linac pulse delay of 2.8 ms. Cherenkov and projected light FS were evaluated using ImageJ software. Results: Mean Cherenkov FS discrepancies compared to the light field were <0.5 cm for 5.6, 2.8, and 8.6 MU for the 10×10, 5×5, and 2×2 cm² FS, respectively. Discrepancies were reduced with increasing field size and MU. We predict that a minimum of 100 frames is needed for reliable confirmation of the delivered FS.
Conclusion: Current discrepancies in Cherenkov field sizes are within a usable range to confirm treatment delivery in standard and respiratory-gated clinical scenarios at MU levels appropriate to standard MLC position segments.
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup, comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface, has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of each detector port separately. The light intensity produced by the source needs to be calibrated over the FOV for proper imaging by the highly sensitive CCD detector. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is intensity feedback correction, in which the current through the lamp is controlled in a closed loop; this is generally used where the light source is a point source. A third method is to control the exposure time inversely to the lamp variations when the lamp intensity itself cannot be controlled; the light intensity at the start of each line is sampled and a correction factor is applied to the full line. A fourth method is correction through a look-up table, in which the responses of all the detectors are normalized through a digital transfer function. A fifth method is a light-line arrangement, in which light from a single source is carried by multiple fiber-optic cables arranged in a line; this is applicable and economical only for narrow swaths. In our application, a new method is used: an inverse multi-density filter is designed to provide effective calibration over the full swath even at low light intensities.
The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes these novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products, especially when handling high-resolution data.
Multistage morphological segmentation of bright-field and fluorescent microscopy images
NASA Astrophysics Data System (ADS)
Korzyńska, A.; Iwanowski, M.
2012-06-01
This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour using sequences of two types of microscopic images: bright-field images and/or fluorescent images. It relies on two types of information: the cell texture, coming from the bright-field images, and the intensity of light emitted by fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology supported by other image processing techniques. It detects cells in an image independently of their degree of flattening and of the structures that produce the texture, using synergistic information from the fluorescent emission image as support. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. To validate the method, two types of errors have been considered, using artificial images as the "gold standard": the error of cell area detection and the error of cell position.
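The mathematical-morphology core of such a method can be illustrated with a binary opening (erosion followed by dilation), which removes specks smaller than the structuring element while keeping larger cell bodies; MSMA itself combines many more stages, so this is only the basic building block:

```python
def erode(img):
    """Binary erosion with a 3x3 cross structuring element (zero-padded)."""
    h, w = len(img), len(img[0])
    get = lambda r, c: img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[min(get(r, c), get(r - 1, c), get(r + 1, c),
                 get(r, c - 1), get(r, c + 1)) for c in range(w)]
            for r in range(h)]

def dilate(img):
    """Binary dilation with the same 3x3 cross element."""
    h, w = len(img), len(img[0])
    get = lambda r, c: img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[max(get(r, c), get(r - 1, c), get(r + 1, c),
                 get(r, c - 1), get(r, c + 1)) for c in range(w)]
            for r in range(h)]

def opening(img):
    # Erosion then dilation: isolated noise pixels vanish while the
    # interior of a larger blob (a cell body) survives.
    return dilate(erode(img))

noisy = [  # a 3x3 blob plus an isolated speck at the top-right corner
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
cleaned = opening(noisy)
```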
A novel optical system design of light field camera
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-01-01
The structure of main lens - Micro Lens Array (MLA) - imaging sensor is usually adopted in the optical system of a light field camera, and the MLA is the most important part: it collects and records the amplitude and phase information of the light field. In this paper, a novel optical system structure is proposed. The novel system is based on the 4f optical structure, and a micro-aperture array (MAA) is used instead of the MLA to acquire the 4D light-field information. We analyze the principle by which the novel optical system acquires the light-field information. A simple MAA, a line-grating optical system, is also designed with the ZEMAX software. The novel optical system is simulated with this line-grating system, and multiple images are obtained in the image plane. The imaging quality of the novel optical system is analyzed.
NASA Technical Reports Server (NTRS)
Giveona, Amir; Shaklan, Stuart; Kern, Brian; Noecker, Charley; Kendrick, Steve; Wallace, Kent
2012-01-01
In a setup similar to the self coherent camera, we have added a set of pinholes in the diffraction ring of the Lyot plane in a high-contrast stellar Lyot coronagraph. We describe a novel complex electric field reconstruction from image plane intensity measurements consisting of light in the coronagraph's dark hole interfering with light from the pinholes. The image plane field is modified by letting light through one pinhole at a time. In addition to estimation of the field at the science camera, this method allows for self-calibration of the probes by letting light through the pinholes in various permutations while blocking the main Lyot opening. We present results of estimation and calibration from the High Contrast Imaging Testbed along with a comparison to the pair-wise deformable mirror diversity based estimation technique. Tests are carried out in narrow-band light and over a composite 10% bandpass.
Improved real-time imaging spectrometer
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)
1993-01-01
An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.
Hartmann, Sébastien; Elsäßer, Wolfgang
2017-01-01
Initially, ghost imaging (GI) was demonstrated with entangled light from parametric down-conversion. Later, classical light sources were introduced with the development of thermal-light GI concepts. State-of-the-art classical GI light sources rely either on complex combinations of coherent light with spatially randomizing optical elements or on incoherent lamps with monochromating optics, though the latter suffer strong losses of efficiency and directionality. Here, a broad-area superluminescent diode is proposed as a new light source for classical ghost imaging. The coherence behavior of this spectrally broadband opto-electronic light source is investigated in detail. An interferometric two-photon detection technique is exploited to resolve the ultra-short correlation timescales, quantifying the coherence time, the photon statistics, and the number of spatial modes, and revealing completely incoherent light behavior. With a one-dimensional proof-of-principle GI experiment, we introduce these compact emitters to the field; they could be beneficial for high-speed GI systems as well as for long-range GI sensing in future applications. PMID:28150737
Optimization of the excitation light sheet in selective plane illumination microscopy
Gao, Liang
2015-01-01
Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging on biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effect. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects in SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimate the SPIM performance using other excitation light sheets. PMID:25798312
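The thin-versus-large tradeoff that governs light-sheet selection follows the textbook Gaussian-beam relation between waist and Rayleigh range; this is a standard approximation for orientation, not the paper's full comparison of sheet types:

```python
import math

def sheet_geometry(w0_um, wavelength_um=0.488):
    """Gaussian light-sheet tradeoff: Rayleigh range z_R = pi * w0^2 / lambda,
    with usable sheet length (field of view) ~ 2 * z_R. A thinner waist w0
    gives better optical sectioning but a quadratically shorter uniform
    region, which is the central compromise in light-sheet design.
    The 488 nm default wavelength is an illustrative assumption."""
    z_r = math.pi * w0_um ** 2 / wavelength_um
    return z_r, 2.0 * z_r

# Halving the waist from 2 um to 1 um quarters the usable field of view.
zr_thick, fov_thick = sheet_geometry(2.0)
zr_thin, fov_thin = sheet_geometry(1.0)
```

This quadratic penalty is why specialized beams (e.g., propagation-invariant ones) are attractive despite their other weaknesses.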
NASA Astrophysics Data System (ADS)
Yang, Le; Sang, Xinzhu; Yu, Xunbo; Liu, Boyang; Liu, Li; Yang, Shenwu; Yan, Binbin; Du, Jingyan; Gao, Chao
2018-05-01
A 54-inch horizontal-parallax-only light-field display based on a light-emitting diode (LED) panel and a micro-pinhole unit array (MPUA) is demonstrated. Normally, the perceived effect of a three-dimensional (3D) display, with smooth motion parallax and abundant light-field information, can be enhanced by increasing the density of viewpoints. However, in conventional integral imaging the density of viewpoints is inversely proportional to the spatial display resolution. Here, a special MPUA is designed and fabricated, and the 3D scene constructed by the proposed horizontal light-field display is presented. Compared with conventional integral imaging, both the density of horizontal viewpoints and the spatial display resolution are significantly improved. In the experiment, a 54-inch horizontal light-field display with a 42.8° viewing angle is realized, based on an LED panel with a resolution of 1280 × 720 and the MPUA, providing a natural, high-quality 3D visual effect to observers.
Light-triggered thermoelectric conversion based on a carbon nanotube-polymer hybrid gel.
Miyako, Eijiro; Nagata, Hideya; Funahashi, Ryoji; Hirano, Ken; Hirotsu, Takahiro
2009-01-01
Lights? Nanotubes? Action! A hydrogel comprising lysozymes, poly(ethylene glycol), phospholipids, and functionalized single-walled carbon nanotubes is employed for light-driven thermoelectric conversion. A photoinduced thermoelectric conversion module based on the hydrogel functions as a novel electric power generator (see image). This concept may find application in various industries, such as robotics and aerospace engineering.
Time-lapse contact microscopy of cell cultures based on non-coherent illumination
NASA Astrophysics Data System (ADS)
Gabriel, Marion; Balle, Dorothée; Bigault, Stéphanie; Pornin, Cyrille; Gétin, Stéphane; Perraut, François; Block, Marc R.; Chatelain, François; Picollet-D'Hahan, Nathalie; Gidrol, Xavier; Haguet, Vincent
2015-10-01
Video microscopy offers outstanding capabilities to investigate the dynamics of biological and pathological mechanisms in optimal culture conditions. Contact imaging is one of the simplest imaging architectures to digitally record images of cells due to the absence of any objective between the sample and the image sensor. However, in the framework of in-line holography, other optical components, e.g., an optical filter or a pinhole, are placed underneath the light source in order to illuminate the cells with a coherent or quasi-coherent incident light. In this study, we demonstrate that contact imaging with an incident light of both limited temporal and spatial coherences can be achieved with sufficiently high quality for most applications in cell biology, including monitoring of cell sedimentation, rolling, adhesion, spreading, proliferation, motility, death and detachment. Patterns of cells were recorded at various distances between 0 and 1000 μm from the pixel array of the image sensors. Cells in suspension, just deposited or at mitosis focalise light into photonic nanojets which can be visualised by contact imaging. Light refraction by cells significantly varies during the adhesion process, the cell cycle and among the cell population in connection with every modification in the tridimensional morphology of a cell.
A Low-Cost and Portable System for 3D Reconstruction of Texture-Less Objects
NASA Astrophysics Data System (ADS)
Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.
2015-12-01
Optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. The system includes a rotating table built from a stepper motor and a very light rotation plate, together with eight laser light sources whose dense, strong beams project a suitable pattern onto texture-less objects. Images are taken semi-automatically by a camera according to the stepper motor's steps and can be used in Structure from Motion procedures implemented in the Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and using a GOM laser scanner. The objects were then placed on the proposed turntable, and several convergent images were taken of each object while the laser light sources projected the pattern onto it. Afterward, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to those generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.
Light field measurement based on the single-lens coherent diffraction imaging
NASA Astrophysics Data System (ADS)
Shen, Cheng; Tan, Jiubin; Liu, Zhengjun
2018-01-01
Plenoptic cameras and holography are popular light-field measurement techniques; however, low resolution or complex apparatus hinders their widespread application. In this paper, we put forward a new light-field measurement scheme: a lens is introduced into coherent diffraction imaging to perform an optical transform, the extended fractional Fourier transform. Combined with a multi-image phase retrieval algorithm, the scheme is shown to hold several advantages: it removes the support requirement and is much easier to implement, while keeping high resolution by making full use of the detector plane. We also verify that our scheme outperforms direct lens-focused imaging in amplitude measurement accuracy and phase retrieval ability.
Underwater image enhancement through depth estimation based on random forest
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
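The per-pixel features the abstract lists (RGB values, luminance, color difference, dark channel) can be assembled as in the sketch below. Only the feature stage is shown; the blurriness feature is omitted, the `patch` window size is an assumed parameter, and the random forest itself (e.g. scikit-learn's `RandomForestRegressor`) would consume these features downstream.

```python
import numpy as np

def pixel_features(rgb, patch=7):
    """Per-pixel features for a transmission regressor (illustrative).

    Returns an (H, W, 6) array: R, G, B, luminance, color difference,
    and the dark channel (channel-wise minimum followed by a local
    window minimum).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b          # BT.601 weights
    color_diff = rgb.max(axis=-1) - rgb.min(axis=-1)
    # dark channel: minimum over channels, then over a local window
    min_ch = rgb.min(axis=-1)
    pad = patch // 2
    padded = np.pad(min_ch, pad, mode='edge')
    h, w = min_ch.shape
    dark = np.empty_like(min_ch)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return np.stack([r, g, b, luminance, color_diff, dark], axis=-1)
```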
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
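The basic ranging principle behind a time-of-flight LIDAR such as the one described is simple to state in code: a pulse travels out and back, so the one-way range is c·t/2. The sketch below illustrates only this principle, not the SPAD array electronics.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_seconds):
    """One-way range from a measured round-trip pulse time.

    The laser pulse covers the target distance twice, hence the
    division by two. SPAD-specific details (photon statistics,
    histogramming) are not modeled.
    """
    return C * round_trip_seconds / 2.0
```

For example, a round trip of about 66.7 ns corresponds to a target roughly 10 m away.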
Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera
NASA Astrophysics Data System (ADS)
Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor
2015-09-01
Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.
Study of coherent reflectometer for imaging internal structures of highly scattering media
NASA Astrophysics Data System (ADS)
Poupardin, Mathieu; Dolfi, Agnes
1996-01-01
Optical reflectometers are potentially useful tools for imaging the internal structures of turbid media, particularly biological media. To obtain a point-by-point image, an active imaging system has to distinguish light scattered from a sample volume from light scattered at other locations in the medium. With reflectometers based on coherence, this discrimination of light can be realized in two ways: through a geometric selection or a temporal selection. In this paper we present both methods, showing in each case the influence of the different parameters on the size of the sample volume under the assumption of single scattering. We also study the influence on the detection efficiency of the coherence loss of the incident light resulting from multiple scattering. We adapt a model, first developed for atmospheric lidar in a turbulent atmosphere, to obtain an analytical expression of this detection efficiency as a function of the optical coefficients of the medium.
NASA Astrophysics Data System (ADS)
Turko, Nir A.; Isbach, Michael; Ketelhut, Steffi; Greve, Burkhard; Schnekenburger, Jürgen; Shaked, Natan T.; Kemper, Björn
2017-02-01
We explored photothermal quantitative phase imaging (PTQPI) of living cells with functionalized nanoparticles (NPs) utilizing a cost-efficient setup based on a cell culture microscope. The excitation light was modulated by a mechanical chopper wheel with low frequencies. Quantitative phase imaging (QPI) was performed with Michelson interferometer-based off-axis digital holographic microscopy and a standard industrial camera. We present results from PTQPI observations on breast cancer cells that were incubated with functionalized gold NPs binding to the epidermal growth factor receptor. Moreover, QPI was used to quantify the impact of the NPs and the low frequency light excitation on cell morphology and viability.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economical and enhanced automated optical guidance system, based on optimization of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED target was optimized for several qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing, feature extraction, angle measurement, deflection detection, and auto-focus algorithms, compiled in MATLAB, automate the image processing for computing and judging deflection. After multiple indoor experiments, the guidance system was applied in a hot water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system. PMID:29462855
NASA Astrophysics Data System (ADS)
Tsunoi, Yasuyuki; Sato, Shunichi; Kawauchi, Satoko; Akutsu, Yusuke; Miyagawa, Yoshihiro; Araki, Koji; Shiotani, Akihiro; Terakawa, Mitsuhiro
2015-11-01
For efficient and side-effect-free pharmacological treatment, we propose a theranostic system that enables transvascular drug delivery by photomechanical waves (PMWs) and photoacoustic (PA) imaging of the drug distribution; both functions are based on nanosecond laser pulses and can therefore be integrated into one system. Through optical fibers arranged around an ultrasound sensor, low-energy and high-energy nanosecond light pulses were transmitted for PA imaging and PMW-based drug delivery, respectively, by temporal switching. With this system, we delivered a test drug (Evans blue) to tumors in mice and visualized the distributions of both the blood vessels and the drug in the tissue in vivo, demonstrating the validity of the system.
Structured Light-Based Hazard Detection For Planetary Surface Navigation
NASA Technical Reports Server (NTRS)
Nefian, Ara; Wong, Uland Y.; Dille, Michael; Bouyssounouse, Xavier; Edwards, Laurence; To, Vinh; Deans, Matthew; Fong, Terry
2017-01-01
This paper describes a structured light-based sensor for hazard avoidance in planetary environments. The system presented here can also be used in terrestrial applications constrained by reduced onboard power, limited computational capacity, and low illumination conditions. The sensor consists of a calibrated camera and laser dot projector system. The onboard hazard avoidance system determines the position of the projected dots in the image and, through a triangulation process, detects potential hazards. The paper presents the design parameters for this sensor and describes the image-based solution for hazard avoidance. The system was tested extensively in day and night conditions in Lunar analogue environments, achieving over a 97% detection rate with 1.7% false alarms over 2000 images.
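The triangulation step for a projected dot can be sketched with the standard pinhole relation: with a known camera-projector baseline and focal length in pixels, the dot's image offset yields its depth, and a deviation from the expected ground plane flags a hazard. All function names, the tolerance value, and the flat-ground hazard test are assumptions for illustration, not the paper's actual algorithm.

```python
def dot_depth(pixel_offset, focal_px, baseline_m):
    """Depth of a projected laser dot by triangulation.

    Similar triangles give Z = f * B / offset, where `offset` is the
    dot's image displacement (pixels) from its position at infinity,
    `focal_px` the calibrated focal length in pixels, and `baseline_m`
    the camera-projector baseline in meters.
    """
    if pixel_offset <= 0:
        raise ValueError("offset must be positive (dot at finite range)")
    return focal_px * baseline_m / pixel_offset

def is_hazard(expected_ground_z, measured_z, tol=0.10):
    """Flag a hazard when a dot's depth deviates from the expected flat
    ground plane by more than a tolerance (tolerance value assumed)."""
    return abs(measured_z - expected_ground_z) > tol
```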
Yasuda, Mitsuru; Akimoto, Takuo
2015-01-01
High-contrast fluorescence imaging using an optical interference mirror (OIM) slide that enhances the fluorescence from a fluorophore located on top of the OIM surface is reported. To enhance the fluorescence and reduce the background light of the OIM, transverse-electric-polarized excitation light was used as incident light, and the transverse-magnetic-polarized fluorescence signal was detected. As a result, an approximate 100-fold improvement in the signal-to-noise ratio was achieved through a 13-fold enhancement of the fluorescence signal and an 8-fold reduction of the background light.
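The reported numbers are internally consistent: enhancing the signal 13-fold while reducing the background 8-fold compounds multiplicatively to 13 × 8 = 104, in line with the quoted approximately 100-fold SNR improvement. A one-line check:

```python
def snr_improvement(signal_gain, background_reduction):
    """Combined SNR improvement when the signal is enhanced and the
    background suppressed independently: the product of the two factors."""
    return signal_gain * background_reduction
```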
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
Retinex based low-light image enhancement using guided filtering and variational framework
NASA Astrophysics Data System (ADS)
Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong
2018-03-01
A new image enhancement algorithm based on Retinex theory is proposed to solve the problem of the poor visual quality of images captured in low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, illuminations are estimated on the V channel by guided filtering and by a variational framework, respectively, and combined into a new illumination by average gradient. The new reflectance is calculated using the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method has better subjective and objective quality than existing methods.
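A heavily simplified single-scale sketch of this pipeline on the V channel is shown below. A box filter stands in for the guided-filter/variational illumination pair, CLAHE and the HSV round trip are omitted, and the window size `k`, regularizer `eps`, and the 0.5 gamma used to brighten the illumination are all assumptions.

```python
import numpy as np

def enhance_v_channel(v, k=15, eps=1e-3):
    """Retinex-style low-light enhancement sketch on an HSV V channel in [0, 1].

    Estimate illumination with a local mean (box filter), take
    reflectance = V / L, then recombine with a gamma-brightened
    illumination.
    """
    pad = k // 2
    padded = np.pad(v, pad, mode='edge')
    h, w = v.shape
    illum = np.empty_like(v)
    for i in range(h):
        for j in range(w):
            illum[i, j] = padded[i:i + k, j:j + k].mean()
    reflectance = v / (illum + eps)      # detail layer
    new_illum = illum ** 0.5             # brighten dark illumination
    return np.clip(reflectance * new_illum, 0.0, 1.0)
```

On a uniformly dark image the estimated illumination is lifted by the gamma while the reflectance stays near one, so the output is noticeably brighter.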
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve this, accurate fusion between the 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterative closest point (ICP) algorithm is presented. It consists of an Octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
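The core step of ICP, once correspondences are fixed, is the closed-form least-squares rigid alignment (the Kabsch/SVD solution), sketched below. The full ICP loop alternates this with nearest-neighbor matching, which is omitted here; the function name is invented for this illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping corresponded
    3D point sets src -> dst (Kabsch algorithm via SVD).

    src, dst: (N, 3) arrays of matched points.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given noiseless correspondences related by a rigid motion, this recovers the exact rotation and translation.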
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Sei, Kiguna; Hirasawa, Takeshi; Irisawa, Kaku; Hirota, Kazuhiro; Wada, Takatsugu; Kushibiki, Toshihiro; Furuya, Kenichi; Ishihara, Miya
2017-03-01
For the diagnosis of cervical cancer, screening by colposcope and subsequent biopsy are usually carried out. The colposcope, a mesoscope, is used to examine the surface of the cervix and to find precancerous lesions grossly. However, the accuracy of colposcopy depends on the skill of the examiner and is therefore inconsistent; the colposcope also lacks depth information. It is known that microvessel density and blood flow in cervical lesions increase in association with angiogenesis. Therefore, photoacoustic imaging (PAI) to detect angiogenesis in cervical lesions has been studied. PAI can diagnose cervical lesions sensitively and provide depth information. The authors have been investigating the efficacy of PAI in the diagnosis of cervical lesions and cancer using the combined PAI and ultrasonography system with a transvaginal probe developed by Fujifilm Corporation. For quantitative diagnosis with PAI, the light propagation in the biological medium must be taken into account. In this study, image reconstruction of the absorption coefficient from PA images of the cervix, using a finite element simulation of light propagation, was attempted. Numerical simulation, a phantom experiment and in vivo imaging were carried out.
Impact of multi-focused images on recognition of soft biometric traits
NASA Astrophysics Data System (ADS)
Chiesa, V.; Dugelay, J. L.
2016-09-01
In video surveillance, the estimation of semantic traits such as gender and age has long been a debated topic because of the uncontrolled environment: while light and pose variations have been largely studied, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras has made it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras record not only RGB values but also information related to the direction of light rays: the additional data make it possible to render the image at different focal planes after acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocus on gender recognition and age estimation. Evaluations are computed with up-to-date, competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition, and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images, with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.
Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-10-21
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
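The shift-and-add fusion at the heart of the reconstruction can be sketched as follows for known integer pixel shifts: undo each frame's shift and average. Sub-pixel registration and the correction for the fiber array's non-uniform sampling are not modeled, and the function name is invented for this sketch.

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Fuse multiple frames of the same scene by undoing each frame's
    known (dy, dx) integer pixel shift and averaging.

    frames: list of equally sized 2D arrays; shifts: list of (dy, dx)
    offsets each frame was displaced by relative to the reference.
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # roll by the negative shift to re-register the frame
        acc += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return acc / len(frames)
```

With noiseless frames that are pure translations of one reference image, the fused result reproduces the reference exactly; with noisy frames, averaging additionally suppresses noise.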
Physically-based in silico light sheet microscopy for visualizing fluorescent brain models
2015-01-01
Background: We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods that produce merely visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results: We demonstrate first results of our visualization pipeline on a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and the characteristic emission spectra of different fluorescent dyes. PMID:26329404
Luminance-based specular gloss characterization.
Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter
2011-06-01
Gloss is a feature of visual appearance that arises from the directionally selective reflection of light incident on a surface. Especially when a distinct reflected image is perceptible, the luminance distribution of the illumination scene above the sample can strongly influence gloss perception. For this reason, industrial glossmeters do not provide a satisfactory gloss estimation of high-gloss surfaces. In this study, the influence of the conditions of illumination on specular gloss perception was examined through a magnitude estimation experiment in which 10 observers took part. A light booth with two light sources was utilized, with the mirror image of only one source visible in reflection to the observer. The luminance of both the reflected image and the adjacent sample surface could be varied independently by separate adjustment of the intensity of the two light sources. A psychophysical scaling function was derived, relating the visual gloss estimates to the measured luminance of both the reflected image and the off-specular sample background. The generalization error of the model was estimated through a validation experiment performed by 10 other observers. As a result, a metric including both surface and illumination properties is provided. Based on this metric, improved gloss evaluation methods and instruments could be developed.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.
2017-06-01
A superresolution image reconstruction method based on the structured illumination microscopy (SIM) principle with a reduced and simplified pattern set is presented. The method needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 required by previously known methods. It is based on estimating redundant frequency components in the acquired set of modulated images, and the digital processing consists of linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern per orientation, complemented by a single non-modulated image shared by all orientations. For the case of two spatial orientations, the total input image set is thereby reduced to 3 images, providing up to a 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. With the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is that the thickness of the object layer that scatters or emits light must be sufficiently small relative to the lens depth of field. Numerical simulations and experimental results are presented; the experimental results were obtained on a SIM setup with a spatial light modulator based on a 1920x1080 digital micromirror device.
A method of detection to the grinding wheel layer thickness based on computer vision
NASA Astrophysics Data System (ADS)
Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong
2018-01-01
This paper proposes a method of detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circumference. Forward lighting and back lighting are used to enable clear images to be acquired. Image processing is then executed on the captured images, consisting of preprocessing, binarization and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with the methods usually used to detect grinding wheel wear, the method in this paper can obtain the thickness information directly and quickly. The eccentricity error and the pixel-equivalent error are also discussed.
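One way a chord measurement can yield a ring thickness, offered purely as an assumed geometric model to illustrate the chord-based idea (the paper's actual geometry is not specified in the abstract): if the located chord of the outer circle is tangent to the inner hub circle, then R_out² = r_in² + (c/2)², so the layer thickness is R_out − r_in.

```python
import math

def layer_thickness(chord_len, inner_radius):
    """Grinding layer thickness from a chord of the outer circle that is
    tangent to the inner (hub) circle - an assumed geometry.

    R_out = sqrt(r_in^2 + (c/2)^2); thickness = R_out - r_in.
    """
    half = chord_len / 2.0
    return math.hypot(inner_radius, half) - inner_radius
```

For instance, a chord of length 8 tangent to an inner circle of radius 3 implies an outer radius of 5 and hence a layer thickness of 2 (a 3-4-5 triangle).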
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel with a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanoliters per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, with simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
Multispectral imaging of the ocular fundus using light emitting diode illumination
NASA Astrophysics Data System (ADS)
Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.
2010-09-01
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
Conjugation of fiber-coupled wide-band light sources and acousto-optical spectral elements
NASA Astrophysics Data System (ADS)
Machikhin, Alexander; Batshev, Vladislav; Polschikova, Olga; Khokhlov, Demid; Pozhar, Vitold; Gorevoy, Alexey
2017-12-01
Endoscopic instrumentation is widely used for diagnostics and surgery. The imaging systems, which provide the hyperspectral information of the tissues accessible by endoscopes, are particularly interesting and promising for in vivo photoluminescence diagnostics and therapy of tumour and inflammatory diseases. To add the spectral imaging feature to standard video endoscopes, we propose to implement acousto-optical (AO) filtration of wide-band illumination of incandescent-lamp-based light sources. To collect maximum light and direct it to the fiber-optic light guide inside the endoscopic probe, we have developed and tested the optical system for coupling the light source, the acousto-optical tunable filter (AOTF) and the light guide. The system is compact and compatible with the standard endoscopic components.
Simon, Amy A; Rowe, Jason F; Gaulme, Patrick; Hammel, Heidi B; Casewell, Sarah L; Fortney, Jonathan J; Gizis, John E; Lissauer, Jack J; Morales-Juberias, Raul; Orton, Glenn S; Wong, Michael H; Marley, Mark S
2016-02-01
Observations of Neptune with the Kepler Space Telescope yield a 49 day light curve with 98% coverage at a 1 minute cadence. A significant signature in the light curve comes from discrete cloud features. We compare results extracted from the light curve data with contemporaneous disk-resolved imaging of Neptune from the Keck 10-m telescope at 1.65 microns and Hubble Space Telescope visible imaging acquired nine months later. This direct comparison validates the feature latitudes assigned to the K2 light curve periods based on Neptune's zonal wind profile, and confirms observed cloud feature variability. Although Neptune's clouds vary in location and intensity on short and long timescales, a single large discrete storm seen in Keck imaging dominates the K2 and Hubble light curves; smaller or fainter clouds likely contribute to short-term brightness variability. The K2 Neptune light curve, in conjunction with our imaging data, provides context for the interpretation of current and future brown dwarf and extrasolar planet variability measurements. In particular we suggest that the balance between large, relatively stable, atmospheric features and smaller, more transient, clouds controls the character of substellar atmospheric variability. Atmospheres dominated by a few large spots may show inherently greater light curve stability than those which exhibit a greater number of smaller features.
Generation of light-sheet at the end of multimode fibre (Conference Presentation)
NASA Astrophysics Data System (ADS)
Plöschner, Martin; Kollárová, Věra; Dostál, Zbyněk; Nylk, Jonathan; Barton-Owen, Thomas; Ferrier, David E. K.; Chmelík, Radim; Dholakia, Kishan; Čižmár, Tomáš
2017-02-01
Light-sheet fluorescence microscopy is quickly becoming one of the cornerstone imaging techniques in biology, as it provides rapid, three-dimensional sectioning of specimens at minimal levels of phototoxicity. It is very appealing to bring this unique combination of imaging properties into an endoscopic setting and be able to perform optical sectioning deep in tissues. Current endoscopic approaches for delivery of light-sheet illumination are based on a single-mode optical fibre terminated by a cylindrical gradient-index lens. Such a configuration generates a light-sheet plane that is axially fixed, and mechanical movement of either the sample or the endoscope is required to acquire three-dimensional information about the sample. Furthermore, the axial resolution of this technique is limited to 5 µm. Delivery of the light-sheet through a multimode fibre provides better axial resolution, limited only by its numerical aperture; the light-sheet is scanned holographically without any mechanical movement, and multiple advanced light-sheet imaging modalities, such as Bessel and structured-illumination Bessel beams, are intrinsically supported by the system due to the cylindrical symmetry of the fibre. We discuss the holographic techniques for generating multiple light-sheet types and demonstrate imaging on a sample of fluorescent beads fixed in agarose gel, as well as on a biological sample of Spirobranchus lamarcki.
A new product for photon-limited imaging
NASA Astrophysics Data System (ADS)
Gonsiorowski, Thomas
1986-01-01
A new commercial low-light imaging detector, the Photon Digitizing Camera (PDC), is based on the PAPA detector developed at Harvard University. The PDC generates (x, y, t)-coordinate data of each detected photoevent. Because the positional address computation is performed optically, very high counting rates are achieved even at full spatial resolution. Careful optomechanical and electronic design results in a compact, rugged detector with superb performance. The PDC will be used for speckle imaging of astronomical sources and other astronomical and low-light applications.
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Novel ray tracing method for stray light suppression from ocean remote sensing measurements.
Oh, Eunsong; Hong, Jinsuk; Kim, Sug-Whan; Park, Young-Je; Cho, Seong-Ick
2016-05-16
We developed a new integrated ray tracing (IRT) technique to analyze the stray light effect in remotely sensed images. Images acquired with the Geostationary Ocean Color Imager show a radiance level discrepancy at the slot boundary, which is suspected to be a stray light effect. To determine its cause, we developed a novel in-orbit stray light analysis method consisting of three simulated phases (source, target, and instrument). Each phase simulation was performed so that ray information generated from the Sun and reaching the instrument detector plane was used efficiently. This simulation scheme enabled reconstruction of the real environment from the remote sensing data, with a focus on realistic phenomena. In the results, even in a cloud-free environment, a background stray light pattern was identified at the bottom of each slot. Variations in the stray light effect and its pattern according to bright target movement were simulated, with a maximum stray light ratio of 8.5841% in band 2 images. To verify the proposed method and simulation results, we compared them with the real acquired remotely sensed image. In addition, after correcting for abnormal phenomena in specific cases, we confirmed that the stray light ratio decreased from 2.38% to 1.02% in a band 6 case, and from 1.09% to 0.35% in a band 8 case. IRT-based stray light analysis enabled clear determination of the stray light path and its candidates under in-orbit conditions, and the correction process aided recovery of the radiometric discrepancy.
A Dying Star in a Different Light
2010-11-17
This image composite shows two views of a puffy, dying star, or planetary nebula, known as NGC 1514. At left is a view from a ground-based, visible-light telescope; the view at right shows the object in infrared light from NASA's WISE telescope.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
Adaptive wavefront sensor based on the Talbot phenomenon.
Podanchuk, Dmytro V; Goloborodko, Andrey A; Kotov, Myhailo M; Kovalenko, Andrey V; Kurashov, Vitalij N; Dan'ko, Volodymyr P
2016-04-20
A new adaptive method of wavefront sensing is proposed and demonstrated. The method is based on the Talbot self-imaging effect, which is observed in an illuminating light beam with strong second-order aberration. Compensation of defocus and astigmatism is achieved with an appropriate choice of the size of the rectangular unit cell of the diffraction grating, which is performed iteratively; a liquid-crystal spatial light modulator is used for this purpose. Self-imaging of a rectangular grating in the astigmatic light beam is demonstrated experimentally. High-order aberrations are detected with respect to the compensated second-order aberration. Comparative results of wavefront sensing with a Shack-Hartmann sensor and the proposed sensor are presented.
Flexible biodegradable citrate-based polymeric step-index optical fiber.
Shan, Dingying; Zhang, Chenji; Kalaba, Surge; Mehta, Nikhil; Kim, Gloria B; Liu, Zhiwen; Yang, Jian
2017-10-01
Implanting fiber-optic waveguides into tissues or organs for light delivery and collection is among the most effective ways to overcome tissue turbidity, a long-standing obstacle for biomedical optical technologies. Here, we report a citrate-based material platform with engineerable opto-mechano-biological properties and demonstrate a new type of biodegradable, biocompatible, and low-loss step-index optical fiber for organ-scale light delivery and collection. By leveraging the rich designability and processibility of citrate-based biodegradable polymers, two exemplary biodegradable elastomers with a fine refractive index difference yet matched mechanical properties and biodegradation profiles were developed. Furthermore, we developed a two-step fabrication method to produce flexible and low-loss (0.4 dB/cm) optical fibers, and performed systematic characterizations of their optical, spectroscopic, mechanical, and biodegradation properties. In addition, we demonstrated proof-of-concept image transmission through the citrate-based polymeric optical fibers and conducted in vivo deep-tissue light delivery and fluorescence sensing in a Sprague-Dawley (SD) rat, laying the groundwork for future implantable devices for long-term implantation where deep-tissue light delivery, sensing and imaging are desired, such as cell, tissue, and scaffold imaging in regenerative medicine and in vivo optogenetic stimulation.
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Qianqian
2008-12-01
When a laser ranger is transported or used in field operations, the transmitting axis, receiving axis and aiming axis may not be parallel. Nonparallelism of the three light axes will degrade the range-measuring ability or prevent the laser ranger from being operated accurately, so testing and adjusting three-light-axis parallelism during production and maintenance is important to ensure the laser ranger can be used reliably. After comparing some common measurement methods for three-light-axis parallelism, the paper proposes a new measurement method using digital image processing. It uses a large-aperture off-axis paraboloid reflector to obtain images of the laser spot and a white-light cross line, and then processes the images on the LabVIEW platform. The center of the white-light cross line is found by a matching algorithm in a LabVIEW DLL, and the center of the laser spot is found by gray-level transformation, binarization and area filtering in turn. The software system can configure the CCD, detect the off-axis paraboloid reflector, measure the parallelism of the transmitting axis and aiming axis, and control the attenuation device. The hardware system uses the SAA7111A, a programmable video decoding chip, to perform A/D conversion; a FIFO (first-in, first-out) buffer is used, and a USB bus transmits data to the PC. The three-light-axis parallelism can be obtained from the position bias between the two centers. A device based on this method is already in use, and its application shows that the method offers high precision, speed and automation.
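The binarization-then-centroid pipeline for locating the laser spot can be sketched as follows (a minimal version that omits the area-filter stage; the threshold and test frame are assumed values, not from the paper):

```python
def spot_center(image, threshold=128):
    """Binarize a grayscale image (given as a list of rows) and return
    the centroid (row, col) of the above-threshold pixels, as used to
    locate a laser spot after gray-level transformation."""
    rows = cols = count = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value >= threshold:   # binarization
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None                  # no spot found
    return (rows / count, cols / count)

# 5x5 frame with a bright 2x2 spot at rows 1-2, cols 2-3
frame = [[0] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 255
print(spot_center(frame))  # → (1.5, 2.5)
```

The axis parallelism would then follow from the offset between this centroid and the matched center of the white-light cross line.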
Infrared imaging-based combat casualty care system
NASA Astrophysics Data System (ADS)
Davidson, James E., Sr.
1997-08-01
A Small Business Innovative Research (SBIR) contract was recently awarded to a start-up company for the development of an infrared (IR) image-based combat casualty care system. The company, Medical Thermal Diagnostics, or MTD, is developing a lightweight, hands-free, energy-efficient uncooled IR imaging system based upon a Texas Instruments design, which will allow emergency medical treatment of wounded soldiers in complete darkness without any type of light enhancement equipment. The principal investigator for this effort, Dr. Gene Luther, DVM, Ph.D., Professor Emeritus, LSU School of Veterinary Medicine, will conduct the development and testing of this system with support from Thermalscan, Inc., a nondestructive testing company experienced in IR thermography applications. Initial research has been done with surgery on a cat to establish the feasibility of the concept, as well as forensic research on pigs, whose physiology closely resembles that of humans, to determine time of death. Further such studies, as well as trauma studies, will be done later. IR images of trauma injuries will be acquired by imaging emergency room patients to create an archive of emergency medical situations seen with an infrared imaging camera. This archived data will then be used to develop training material for medical personnel using the system. The system has potential beyond military applications: firefighters and emergency medical technicians could directly benefit from the capability to triage and administer medical care to trauma victims in low- or no-light conditions.
Comparison of scientific CMOS camera and webcam for monitoring cardiac pulse after exercise
NASA Astrophysics Data System (ADS)
Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung
2011-09-01
In light of its capacity for remote physiological assessment over a wide range of anatomical locations, imaging photoplethysmography has become an attractive research area in the biomedical and clinical communities. Among recent studies, two separate research directions have emerged: scientific-camera-based imaging PPG (iPPG) and webcam-based imaging PPG (wPPG). Little is known about the difference between these two techniques. To address this issue, a dual-channel imaging PPG system (iPPG and wPPG) using ambient light as the illumination source is introduced in this study. The performance of the two imaging PPG techniques was evaluated through the measurement of the cardiac pulse acquired from the faces of 10 male subjects before and after 10 min of cycling exercise. A time-frequency representation method was used to visualize the time-dependent behaviour of the heart rate. In comparison to the gold-standard contact PPG, both imaging PPG techniques exhibit comparable functional characteristics in the context of cardiac pulse assessment. Moreover, the synchronized ambient light intensity recordings in the present study provide additional information for appraising the performance of the imaging PPG systems. This feasibility study thereby opens a new route for non-contact monitoring of vital signs, with clear applications in triage and homecare.
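Heart-rate extraction in such PPG analyses amounts to finding the dominant frequency of the pulse waveform. A minimal sketch, using a naive DFT restricted to a physiologically plausible 0.5-4 Hz band (the sampling rate and synthetic signal are illustrative assumptions, not the paper's data):

```python
import math

def heart_rate_bpm(signal, fs):
    """Estimate heart rate as the dominant frequency of a PPG
    waveform via a naive DFT over the 0.5-4 Hz (30-240 bpm) band."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]         # remove DC component
    best_f, best_p = 0.0, -1.0
    k = 1
    while k * fs / n <= 4.0:                      # scan bins up to 4 Hz
        f = k * fs / n
        if f >= 0.5:                              # ignore sub-cardiac drift
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power = re * re + im * im
            if power > best_p:
                best_f, best_p = f, power
        k += 1
    return best_f * 60.0                          # Hz -> beats per minute

# synthetic 1.2 Hz (72 bpm) pulse sampled at 20 Hz for 10 s
fs = 20.0
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(200)]
print(heart_rate_bpm(sig, fs))  # → 72.0
```

A time-frequency representation, as used in the study, would repeat this estimate over sliding windows to track heart-rate changes after exercise.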
Direct imaging of slow, stored and stationary EIT polaritons
NASA Astrophysics Data System (ADS)
Campbell, Geoff T.; Cho, Young-Wook; Su, Jian; Everett, Jesse; Robins, Nicholas; Lam, Ping Koy; Buchler, Ben
2017-09-01
Stationary and slow light effects are of great interest for quantum information applications. Using laser-cooled Rb87 atoms, we performed side imaging of our atomic ensemble under slow and stationary light conditions, which allows direct comparison with numerical models. The polaritons were generated using electromagnetically induced transparency (EIT), with stationary light generated using counter-propagating control fields. By controlling the power ratio of the two control fields, we show fine control of the group velocity of the stationary light. We also compare the dynamics of stationary light using monochromatic and bichromatic control fields. Our results show negligible difference between the two situations, in contrast to previous work in EIT-based systems.
Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B
2000-04-10
We have developed an approach to study both cell migration and transcriptional activation in single living epithelial cells, the latter evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope connected to an ultralow-light-level photon-counting camera and an image-acquisition card associated with specialized image analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain was optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed to observe luminescence. This setup allows us to measure by image analysis the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software for quantifying the kinetics of luminescence changes in migrating cells.
Tomaszewski, Michał; Ruszczak, Bogdan; Michalski, Paweł
2018-06-01
Electrical insulators are elements of power lines that require periodic diagnostics. Due to their location on the components of high-voltage power lines, imaging them can be cumbersome and time-consuming, especially under varying lighting conditions. Insulator diagnostics using visual methods may require localizing insulators in the scene. Studies focused on insulator localization apply a number of methods, including texture analysis, MRF (Markov Random Field), Gabor filters and GLCM (Gray Level Co-Occurrence Matrix) [1], [2]. Some methods, e.g. those which localize insulators based on colour analysis [3], depend on object and scene illumination, which is why the images in the dataset were taken under varying lighting conditions. The dataset may also be used to compare the effectiveness of different methods of localizing insulators in images. This article presents high-resolution images depicting a long-rod electrical insulator under varying lighting conditions and against different backgrounds: crops, forest and grass. The dataset contains images with visible laser spots (generated by a device emitting light at a wavelength of 532 nm) and images without such spots, as well as complementary data concerning the illumination level and insulator position in the scene, the number of registered laser spots, and their coordinates in the image. The laser spots may be used to support object-localizing algorithms, while the images without spots may serve as a source of information for algorithms which do not need spots to localize an insulator.
Drusen Characterization with Multimodal Imaging
Spaide, Richard F.; Curcio, Christine A.
2010-01-01
Summary: Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics.
Purpose: To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging, and to create an explanatory model that accounts for these observations.
Methods: Reported color, fluorescein angiographic, autofluorescence, and spectral-domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert law of light absorption was generated to fit these observations.
Results: Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter-wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short-wavelength attenuation and are therefore more prominent when viewed with blue light. The location and morphology of extracellular material in relation to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be the primary determinants of druse appearance in different imaging modalities.
Conclusion: Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable by multimodal imaging due to differences in location, morphology, and optical filtering effects by drusenoid material and the RPE. PMID:20924263
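The Beer-Lambert attenuation invoked by such a model can be written in a generic form (the symbols here are illustrative, not taken from the paper):

```latex
% Attenuation of light of wavelength \lambda after a double pass
% through an absorbing pigment layer (e.g. the RPE) of thickness d:
\[ I(\lambda) = I_0(\lambda)\, e^{-2\,\mu_a(\lambda)\, d} \]
% Because the absorption coefficient \mu_a(\lambda) of RPE melanin is
% larger at short wavelengths, blue light is attenuated more than red,
% which is consistent with sub-RPE deposits appearing yellow.
```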
Feature selection from hyperspectral imaging for guava fruit defects detection
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Tan, Sou Ching
2017-06-01
Advances in technology have made hyperspectral imaging commonly used for defect detection. In this research, a hyperspectral imaging system was set up in the lab for guava fruit defect detection. Guava was selected because, to our knowledge, few attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent the uncontrolled lighting conditions in the lab, and analysis was carried out in a specific wavelength range due to the inefficiency of this particular light source. Based on the data, the reflectance intensity of this setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the discriminant training dataset was also split into two parts, corresponding to the brighter and dimmer areas, to cater for the uncontrolled lighting conditions. Four configurations were evaluated: LD with the common training method, QD with the common training method, LD with the two-part training method and QD with the two-part training method. They were compared using the F1-score over a total of 48 defected areas. Experiments showed that the F1-score of the linear discriminant with the compensated method reached 0.8, the highest score among all.
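The F1-score used to compare the four configurations is the harmonic mean of precision and recall; a minimal sketch (the true/false-positive counts below are invented for illustration, not the paper's results):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)   # fraction of detections that are real
    recall = tp / (tp + fn)      # fraction of real defects detected
    return 2 * precision * recall / (precision + recall)

# e.g. 40 of 48 defected areas found, with 12 false alarms and 8 misses
print(round(f1_score(tp=40, fp=12, fn=8), 2))  # → 0.8
```

Because F1 balances precision against recall, it penalizes both the false alarms caused by bright-area reflections and the misses in dimmer regions.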
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by an ICCD suffer from low spatial resolution and contrast, and target details can hardly be recognized, so super-resolution (SR) reconstruction of ICCD images is a challenging issue. Dispersion in the double-proximity-focused image intensifier is the main factor that reduces image resolution and contrast. We divide the integration time into subintervals short enough to obtain photon images, so the overlapping and overstacking effects of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under regularity constraints. The accurate positions of the incident photons in the reconstructed SR image are obtained by a weighted centroid calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
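The weighted centroid calculation used to localize each photon to sub-pixel accuracy can be sketched as follows (the pixel cluster below is invented for illustration):

```python
def weighted_centroid(pixels):
    """Sub-pixel photon position as the intensity-weighted centroid
    of a small neighbourhood of (x, y, intensity) samples."""
    total = sum(w for _, _, w in pixels)
    x = sum(px * w for px, _, w in pixels) / total
    y = sum(py * w for _, py, w in pixels) / total
    return x, y

# a single photon event spread over a 3-pixel cluster by dispersion
event = [(10, 5, 50), (11, 5, 100), (12, 5, 50)]
print(weighted_centroid(event))  # → (11.0, 5.0)
```

Summing many such localized events over the short subintervals yields the reconstructed SR image.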
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion has been a hot topic in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require image registration before fusion because two separate cameras are used, and the performance of registration techniques still has room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light incident through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and quality evaluation indices are used to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo.
Xia, Jun; Chatni, Muhammad R; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V
2012-05-01
We report a novel small-animal whole-body imaging system called ring-shaped confocal photoacoustic computed tomography (RC-PACT). RC-PACT is based on a confocal design of free-space ring-shaped light illumination and 512-element full-ring ultrasonic array signal detection. The free-space light illumination maximizes the light delivery efficiency, and the full-ring signal detection ensures a full two-dimensional view aperture for accurate image reconstruction. Using cylindrically focused array elements, RC-PACT can image a thin cross section with 0.10 to 0.25 mm in-plane resolutions and 1.6 s/frame acquisition time. By translating the mouse along the elevational direction, RC-PACT provides a series of cross-sectional images of the brain, liver, kidneys, and bladder.
Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo
NASA Astrophysics Data System (ADS)
Xia, Jun; Chatni, Muhammad R.; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V.
2012-05-01
We report a novel small-animal whole-body imaging system called ring-shaped confocal photoacoustic computed tomography (RC-PACT). RC-PACT is based on a confocal design of free-space ring-shaped light illumination and 512-element full-ring ultrasonic array signal detection. The free-space light illumination maximizes the light delivery efficiency, and the full-ring signal detection ensures a full two-dimensional view aperture for accurate image reconstruction. Using cylindrically focused array elements, RC-PACT can image a thin cross section with 0.10 to 0.25 mm in-plane resolutions and 1.6 s/frame acquisition time. By translating the mouse along the elevational direction, RC-PACT provides a series of cross-sectional images of the brain, liver, kidneys, and bladder.
Using DMSP/OLS nighttime imagery to estimate carbon dioxide emission
NASA Astrophysics Data System (ADS)
Desheng, B.; Letu, H.; Bao, Y.; Naizhuo, Z.; Hara, M.; Nishio, F.
2012-12-01
This study presents a method for estimating CO2 emissions from electric power plants using the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) stable light image product for 1999. CO2 emissions from power plants account for a high percentage of CO2 emissions from fossil fuel consumption. Thermal power plants generate electricity by burning fossil fuels, so they emit CO2 directly. In many Asian countries, such as China, Japan, India, and South Korea, thermal power accounted for over 58% of total electric power generation in 1999. To date, CO2 emission figures have been obtained mainly by traditional statistical methods. Moreover, such statistical data are summarized by administrative region, making it difficult to examine spatial distributions across non-administrative divisions, and in some countries the reliability of the CO2 emission data is relatively low. Satellite remote sensing, however, can observe the earth's surface without the limitations of administrative regions, so it is valuable to estimate CO2 emissions from satellite data. In this study, we estimated the CO2 emissions from fossil fuel consumption by electric power plants in Japan using the stable light image of the DMSP/OLS satellite data for 1999, after correcting for the saturation effect. Digital number (DN) values of the stable light images in city centers are saturated due to the large nighttime light intensities and the characteristics of the OLS sensor. To estimate the CO2 emissions more accurately from the stable light images, a saturation correction method was developed using the DMSP radiance calibration image, which contains no saturated pixels. A regression equation was developed from the relationship between DN values of non-saturated pixels in the stable light image and those in the radiance calibration image, and this equation was used to adjust the DNs of the radiance calibration image.
Then, saturated DNs of the stable light image were corrected using the adjusted radiance calibration image. After that, regression analyses were performed among the cumulative DNs of the corrected stable light image, electric power consumption, electric power generation, and CO2 emissions from fossil fuel consumption by electric power plants. Results indicated good relationships (R² > 0.90) between the DNs of the corrected stable light image and the other parameters. Based on these results, we estimated the CO2 emissions from electric power plants using the corrected stable light image. Keywords: DMSP/OLS, stable light, saturation light correction method, regression analysis. Acknowledgment: The research was financially supported by the Sasakawa Scientific Research Grant from the Japan Science Society.
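The saturation correction described above can be sketched as follows: fit a linear regression between the two images on non-saturated pixels, then predict replacement DNs for the saturated ones. The linear model, the variable names, and the 6-bit saturation level of 63 are assumptions for illustration, not the paper's exact procedure.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def correct_saturation(stable_dn, radcal_dn, sat_level=63):
    """Replace saturated stable-light DNs with values predicted
    from the (saturation-free) radiance calibration image."""
    # Fit the regression on non-saturated pixels only.
    pairs = [(r, s) for s, r in zip(stable_dn, radcal_dn) if s < sat_level]
    a, b = fit_linear([r for r, _ in pairs], [s for _, s in pairs])
    # Keep unsaturated DNs; predict the saturated ones.
    return [s if s < sat_level else a * r + b
            for s, r in zip(stable_dn, radcal_dn)]
```

In practice the fit would be restricted to spatially matched pixels after the radiance image has itself been adjusted, as the abstract describes.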
High contrast imaging through adaptive transmittance control in the focal plane
NASA Astrophysics Data System (ADS)
Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake
2016-05-01
High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications, ranging from the daily chore of driving into a sun-drenched scene to in vivo biomedical imaging in various types of keyhole surgery. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems, where the light sources illuminating the object are unavoidably strong and typically mask the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal plane transmittance, to selectively remove object-domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly oriented, small specular surfaces. The latter is more representative of real-world practical imaging systems. Thus, the background signal comprises groups of oblique rays corresponding to distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. Thus, the transmittance of a spatial light modulator, positioned in the focal plane, can be adaptively controlled to block a particular source of background light. Consequently, the image plane intensity is entirely due to the object features. Experimental image data are presented to verify the efficacy of the methodology.
Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung
2012-03-01
Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions, involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the faces of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold-standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment, with clear applications in several domains, including telemedicine and homecare. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Technical Reports Server (NTRS)
2002-01-01
With the backing of NASA, researchers at Michigan State University, the University of Minnesota, and the University of Wisconsin have begun using satellite data to measure the water quality and clarity of the lakes in the Upper Midwest. This false-color IKONOS image displays the water clarity of the lakes in Eagan, Minnesota. Scientists measure lake quality in satellite data by observing the ratio of blue to red light. When the amount of blue light reflected off a lake is high and the red light is low, the lake generally has high water quality. Lakes loaded with algae and sediments, on the other hand, reflect less blue light and more red light. In this image, scientists used false coloring to depict the level of clarity of the water. Clear lakes are blue, moderately clear lakes are green and yellow, and murky lakes are orange and red. Using images such as these, along with data from the Landsat satellites and NASA's Terra satellite, the scientists plan to create a comprehensive water quality map for the entire Great Lakes region in the next few years. For more information, read: Testing the Waters (Image courtesy Upper Great Lakes Regional Earth Science Applications Center, based on data copyright Space Imaging)
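The blue-to-red ratio rule described above can be sketched in a few lines. The two threshold values here are illustrative assumptions, not the researchers' calibrated values.

```python
def clarity_class(blue, red):
    """Classify lake clarity from mean blue- and red-band reflectance.
    Higher blue:red ratios indicate clearer water; thresholds are
    placeholders for illustration."""
    ratio = blue / red
    if ratio > 2.0:
        return "clear"
    elif ratio > 1.0:
        return "moderate"
    return "murky"

# Example: a lake reflecting much more blue than red classifies as clear.
label = clarity_class(blue=0.6, red=0.2)
```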
Polarimetric infrared imaging simulation of a synthetic sea surface with Mie scattering.
He, Si; Wang, Xia; Xia, Runqiu; Jin, Weiqi; Liang, Jian'an
2018-03-01
A novel method to simulate the polarimetric infrared imaging of a synthetic sea surface with atmospheric Mie scattering effects is presented. The infrared emission, multiple reflections, and infrared polarization of the sea surface and the Mie scattering of aerosols are all included for the first time. At first, a new approach to retrieving the radiative characteristics of a wind-roughened sea surface is introduced. A two-scale method of sea surface realization and the inverse ray tracing of light transfer calculation are combined and executed simultaneously, decreasing the consumption of time and memory dramatically. Then the scattering process that the infrared light emits from the sea surface and propagates in the aerosol particles is simulated with a polarized light Monte Carlo model. Transformations of the polarization state of the light are calculated with the Mie theory. Finally, the polarimetric infrared images of the sea surface of different environmental conditions and detection parameters are generated based on the scattered light detected by the infrared imaging polarimeter. The results of simulation examples show that our polarimetric infrared imaging simulation can be applied to predict the infrared polarization characteristics of the sea surface, model the oceanic scene, and guide the detection in the oceanic environment.
NASA Astrophysics Data System (ADS)
Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung
2012-03-01
Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions, involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the faces of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold-standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment, with clear applications in several domains, including telemedicine and homecare.
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images, thereby generating a dataset for projector calibration. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
Light activated microbubbles for imaging and microsurgery
NASA Astrophysics Data System (ADS)
Cavigli, Lucia; Micheletti, Filippo; Tortoli, Paolo; Centi, Sonia; Lai, Sarah; Borri, Claudia; Rossi, Francesca; Ratto, Fulvio; Pini, Roberto
2017-03-01
Imaging and microsurgery procedures based on the photoacoustic effect have recently attracted much attention for cancer treatment. Light absorption in the nanosecond regime triggers thermoelastic processes that induce ultrasound emission and even cavitation. The ultrasound waves may be detected to reconstruct images, while cavitation may be exploited to kill malignant cells. The potential of gold nanorods as contrast agents for photoacoustic imaging has been extensively investigated, but still little is known about their use to trigger cavitation. Here, we investigated the influence of environmental thermal properties on the ability of gold nanorods to trigger cavitation by probing the photoacoustic emission as a function of the excitation fluence. We are confident that these results will provide useful directions for the development of new strategies for therapies based on the photoacoustic effect.
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2017-12-01
Quantitative photoacoustic tomography (QPAT) employing a light propagation model will play an important role in medical diagnoses by quantifying the concentration of hemoglobin or a contrast agent. However, QPAT with a light propagation model based on the three-dimensional (3D) radiative transfer equation (RTE) requires a huge computational load in the iterative forward calculations involved in the updating process to reconstruct the absorption coefficient. Approximations of the light propagation improve the efficiency of the image reconstruction for QPAT. In this study, we compared the 3D/two-dimensional (2D) photon diffusion equation (PDE) approximating the 3D RTE with a Monte Carlo simulation based on the 3D RTE. Then, the errors in a 2D PDE-based linearized image reconstruction caused by the approximations were quantitatively demonstrated and discussed in numerical simulations. It was clearly observed that the approximations affected the reconstructed absorption coefficient. The 2D PDE-based linearized algorithm succeeded in the image reconstruction of the region with a large absorption coefficient in the 3D phantom. The value reconstructed in the phantom experiment agreed with that in the numerical simulation, validating that the numerical simulation of the image reconstruction predicts the relationship between the true absorption coefficient of the target in the 3D medium and the value reconstructed with the 2D PDE-based linearized algorithm. Moreover, the true absorption coefficient in the 3D medium was estimated from the 2D reconstructed image on the basis of the prediction by the numerical simulation. The estimation was successful in the phantom experiment, although some limitations were revealed.
Extremely simple holographic projection of color images
NASA Astrophysics Data System (ADS)
Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej
2012-03-01
A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Optical fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at optimized positions against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence, color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints, etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).
AOSLO: from benchtop to clinic
NASA Astrophysics Data System (ADS)
Zhang, Yuhua; Poonja, Siddharth; Roorda, Austin
2006-08-01
We present a clinically deployable adaptive optics scanning laser ophthalmoscope (AOSLO) that features micro-electro-mechanical systems (MEMS) deformable mirror (DM) based adaptive optics (AO) and low-coherence light sources. With the miniaturized optical aperture of a μDMS-Multi MEMS DM (Boston Micromachines Corporation, Watertown, MA), we were able to develop a compact and robust AOSLO optical system that occupies a 50 cm × 50 cm area on a mobile optical table. We introduced low-coherence light sources, namely superluminescent diodes (SLDs) at 680 nm with 9 nm bandwidth and 840 nm with 50 nm bandwidth, into confocal scanning ophthalmoscopy to eliminate interference artifacts in the images. We selected a photomultiplier tube (PMT) for photon signal detection and designed low-noise video signal conditioning circuits. We employed an acousto-optic modulator (AOM) to modulate the light beam so that we could avoid unnecessary exposure of the retina or project a specific stimulus pattern onto the retina. The MEMS DM based AO system demonstrated robust performance. The use of low-coherence light sources effectively mitigated the interference artifacts in the images and yielded high-fidelity retinal images of the contiguous cone mosaic. We imaged patients with inherited retinal degenerations, including cone-rod dystrophy (CRD) and retinitis pigmentosa (RP). We have produced high-fidelity, real-time, microscopic views of the living human retina for healthy and diseased eyes.
Optical image encryption method based on incoherent imaging and polarized light encoding
NASA Astrophysics Data System (ADS)
Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.
2018-05-01
We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to resist illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
Measurements of UGR of LED light by a DSLR colorimeter
NASA Astrophysics Data System (ADS)
Hsu, Shau-Wei; Chen, Cheng-Hsien; Jiaan, Yuh-Der
2012-10-01
We have developed an image-based measurement method for the UGR (unified glare rating) of interior lighting environments. A calibrated DSLR (digital single-lens reflex camera) with an ultra-wide-angle lens was used to measure the luminance distribution, from which the corresponding parameters can be automatically calculated. An LED luminaire was placed in a room and measured at various positions and directions to study the properties of UGR. The test results are consistent with visual experience and UGR principles. To further examine the results, a spectroradiometer and an illuminance meter were used to measure the luminance and illuminance, respectively, at the same position and orientation as the DSLR. The calculation of UGR by this image-based method may solve the problem of the non-uniform luminance distribution of LED lighting; segmentation of the luminance map for the calculations was also studied.
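Once the luminance distribution and the per-source parameters have been extracted from the calibrated image, the UGR itself follows the standard CIE formula, UGR = 8 log10[(0.25/Lb) Σ L²ω/p²]. A sketch of that final step (the input representation is an assumption; the abstract does not describe the paper's data structures):

```python
import math

def ugr(background_luminance, sources):
    """Unified Glare Rating from the CIE formula.
    background_luminance: Lb in cd/m^2.
    sources: iterable of (L, omega, p) tuples, where L is the source
    luminance [cd/m^2], omega its solid angle [sr], and p the Guth
    position index."""
    glare_sum = sum((L ** 2) * omega / (p ** 2) for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * glare_sum)

# Example: one bright source against a 25 cd/m^2 background.
rating = ugr(25.0, [(50000.0, 0.001, 1.0)])
```

The image-based method's contribution is upstream of this formula: deriving Lb, L, ω, and p per luminaire from the segmented luminance map rather than from point measurements.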
A fast double shutter for CCD-based metrology
NASA Astrophysics Data System (ADS)
Geisler, R.
2017-02-01
Image-based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called `double shutter' mode: one frame is captured with a short exposure time, and in direct succession a second frame with a long exposure time can be recorded. The difference in the exposure times is typically not a problem, since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating during the long second exposure. However, measurements of self-luminous processes (e.g. plasma, combustion) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, high-speed image sensors, multi-sensor systems). Unfortunately, all these methods incorporate different drawbacks, such as reduced resolution, degraded image quality, decreased light sensitivity, or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way, and separated again by software. The two resultant frames are completely congruent; they expose no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at full resolution, analogous to the double shutter technique. Image-based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful e.g. for acceleration measurements or for particle tracking applications.
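The software separation step described above amounts to de-interleaving alternating lines of the read-out frame. A minimal sketch (the even/odd interleave pattern is an assumption for illustration; the actual register layout depends on the sensor timing):

```python
def deinterleave(frame):
    """Split a line-interleaved frame (list of rows) into the two
    congruent short-exposure frames: even rows -> frame A,
    odd rows -> frame B."""
    frame_a = frame[0::2]
    frame_b = frame[1::2]
    return frame_a, frame_b

# Example: a 4-row frame separates into two 2-row frames.
a, b = deinterleave([[1, 1], [2, 2], [3, 3], [4, 4]])
```

Because both frames come from the same pixel grid with no line shift, the separated images remain pixel-congruent, which is what preserves sub-pixel accuracy in the correlation step.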
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging systems are now widely used to automatically cut and polish the uncut lens based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light is proposed to measure the three-dimensional (3D) data of the spectacle frame. First, we focus on a processing approach to the deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposures, and high dynamic range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, a Gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), a specially designed auxiliary lighting system is utilized to help distinguish the object from the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. Through further fringe center extraction and depth information acquisition via a look-up-table method, the 3D shape of the spectacle frame is recovered.
Hyperspectral retinal imaging with a spectrally tunable light source
NASA Astrophysics Data System (ADS)
Francis, Robert P.; Zuzak, Karel J.; Ufret-Vincenty, Rafael
2011-03-01
Hyperspectral retinal imaging can measure oxygenation and identify areas of ischemia in human patients, but the devices used by current researchers are inflexible in spatial and spectral resolution. We have developed a flexible research prototype consisting of a DLP®-based spectrally tunable light source coupled to a fundus camera to quickly explore the effects of spatial resolution, spectral resolution, and spectral range on hyperspectral imaging of the retina. The goal of this prototype is to (1) identify spectral and spatial regions of interest for early diagnosis of diseases such as glaucoma, age-related macular degeneration (AMD), and diabetic retinopathy (DR); and (2) define required specifications for commercial products. In this paper, we describe the challenges and advantages of using a spectrally tunable light source for hyperspectral retinal imaging, present clinical results of initial imaging sessions, and describe how this research can be leveraged into specifying a commercial product.
Intrinsic melanin and hemoglobin colour components for skin lesion malignancy detection.
Madooei, Ali; Drew, Mark S; Sadeghi, Maryam; Atkins, M Stella
2012-01-01
In this paper we propose a new log-chromaticity 2-D colour space, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: (i) the effects of the particular camera characteristics of the camera system used in forming RGB images; (ii) the colour of the light used in the dermoscope; (iii) shading induced by imaging non-flat skin surfaces; and (iv) light intensity, removing the effect of light-intensity falloff toward the edges of the dermoscopic image. In the context of a blind source separation of the underlying colour, we arrive at intrinsic melanin and hemoglobin images, whose properties are then used in supervised learning to achieve excellent malignant vs. benign skin lesion classification. In addition, we propose using the geometric mean of colour for skin lesion segmentation based on simple grey-level thresholding, with results outperforming the state of the art.
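The geometric-mean segmentation idea mentioned at the end can be sketched directly: compute the geometric mean of the RGB channels per pixel and threshold it. The threshold value here is an illustrative assumption, not the authors' calibrated setting.

```python
def geometric_mean_grey(pixel):
    """Geometric mean of the three colour channels of one pixel."""
    r, g, b = pixel
    return (r * g * b) ** (1.0 / 3.0)

def segment(image, threshold=100.0):
    """Binary lesion mask: True where the geometric-mean grey level
    falls below the threshold (lesions are typically darker than
    surrounding skin)."""
    return [[geometric_mean_grey(p) < threshold for p in row]
            for row in image]

# Example: a bright skin pixel and a dark lesion pixel.
mask = segment([[(200, 200, 200), (50, 50, 50)]])
```

In practice the threshold would be chosen per image (e.g. by Otsu's method) rather than fixed.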
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of the objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on the transformation of the C1C2C3 space and the contribution of NIR bands. The proposed method is tested on Worldview-2 images covering Ankara, Turkey at different times. The new index is used on these 8-band multispectral images with two NIR bands. The method is compared with methods in the literature.
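A minimal sketch of a C1C2C3-based shadow test with an NIR term follows. The C3 component is the standard arctan(B / max(R, G)); the thresholds and the exact combination rule with NIR are illustrative assumptions, not the index proposed in this study.

```python
import math

def c3(r, g, b):
    """C3 component of the C1C2C3 invariant colour space."""
    return math.atan2(b, max(r, g))

def is_shadow(r, g, b, nir, c3_thresh=0.9, nir_thresh=0.2):
    """Shadow pixels tend to have a high C3 value (relatively strong
    blue from scattered skylight) and a low NIR response."""
    return c3(r, g, b) > c3_thresh and nir < nir_thresh

# Example: a blue-dominated, NIR-dark pixel flags as shadow.
flag = is_shadow(0.05, 0.05, 0.2, nir=0.05)
```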
NASA Astrophysics Data System (ADS)
He, Xiao Dong
This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission, and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on Kirchhoff theory, and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realistic image generation can be achieved due to the physically correct treatment of the scattering processes by the reflectance model.
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of the gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradients (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
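The wHOG idea of combining HOG descriptors with region quality scores can be sketched as scaling each block's descriptor by its quality weight before concatenation. The weighting scheme below is an illustrative assumption, not the authors' exact formulation.

```python
def weighted_hog(hog_blocks, quality_weights):
    """Form a weighted-HOG feature vector.
    hog_blocks: list of per-block HOG descriptors (lists of floats).
    quality_weights: one quality score in [0, 1] per block; low-quality
    (e.g. background-dominated) blocks are down-weighted."""
    feature = []
    for block, w in zip(hog_blocks, quality_weights):
        feature.extend(v * w for v in block)
    return feature

# Example: the second block is fully suppressed by a zero quality score.
feat = weighted_hog([[1.0, 2.0], [3.0]], [0.5, 0.0])
```

The resulting vector would then feed a standard classifier (e.g. an SVM) for the gender decision.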
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264
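The weighted HOG idea described above, scaling each region's HOG descriptor by a measured quality score so that background regions contribute less, can be sketched as follows. This is a minimal illustration under assumptions: the function name, the flat list-of-blocks input, and the simple sum-to-one normalization are not taken from the paper.

```python
import numpy as np

def weighted_hog(hog_blocks, quality):
    """Combine per-region HOG descriptors into one feature vector,
    weighting each region by its measured quality score."""
    q = np.asarray(quality, dtype=float)
    q = q / (q.sum() + 1e-12)  # normalize so the weights sum to 1
    return np.concatenate([w * np.asarray(b, float) for w, b in zip(q, hog_blocks)])
```

A low-quality (e.g. background-dominated) region then scales its descriptor toward zero, so the classifier sees mostly the reliable body regions.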
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
Imaging of dental material by polarization-sensitive optical coherence tomography
NASA Astrophysics Data System (ADS)
Dichtl, Sabine; Baumgartner, Angela; Hitzenberger, Christoph K.; Moritz, Andreas; Wernisch, Johann; Robl, Barbara; Sattmann, Harald; Leitgeb, Rainer; Sperr, Wolfgang; Fercher, Adolf F.
1999-05-01
Partial coherence interferometry (PCI) and optical coherence tomography (OCT) are noninvasive and noncontact techniques for high precision biometry and for obtaining cross-sectional images of biologic structures. OCT was initially introduced to depict the transparent tissue of the eye. It is based on interferometry employing the partial coherence properties of a light source with high spatial coherence but short coherence length to image structures with a resolution of the order of a few microns. Recently this technique has been modified for cross-sectional imaging of dental and periodontal tissues. In vitro and in vivo OCT images have been recorded, which distinguish enamel, cementum and dentin structures and provide detailed structural information on clinical abnormalities. In contrast to conventional OCT, where the magnitude of backscattered light as a function of depth is imaged, polarization-sensitive OCT uses backscattered light to image the magnitude of the birefringence in the sample as a function of depth. First polarization-sensitive OCT recordings show that changes in the mineralization status of enamel or dentin caused by caries or non-caries lesions can result in changes of the polarization state of the light backscattered by dental material. Therefore polarization-sensitive OCT might provide a new diagnostic imaging modality in clinical and research dentistry.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. Its geometrical parameters mainly include flange thickness and flange height. Line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit. The image acquisition was fulfilled by hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA was proposed. The algorithm first divides the image into smaller squares, and extracts the squares of the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improved the real-time image processing capacity. When a wheel set is running at a limited speed, the system, placed alongside the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
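The divide-into-squares step followed by clustering on per-square statistics can be sketched on the CPU with NumPy. The paper fuses k-means with STING on the GPU; this simplified sketch clusters square blocks by mean intensity only with a tiny 1-D k-means, so the block size, feature choice, and cluster count are illustrative assumptions.

```python
import numpy as np

def block_kmeans_segment(img, block=16, k=2, iters=10):
    """Divide a grayscale image into square blocks, then cluster the blocks
    by mean intensity with a small 1-D k-means. Returns a per-block label map."""
    H, W = img.shape
    h, w = H // block, W // block
    # mean intensity of each block, flattened into a 1-D feature vector
    feats = img[:h * block, :w * block].astype(float) \
                .reshape(h, block, w, block).mean(axis=(1, 3)).ravel()
    centers = np.linspace(feats.min(), feats.max(), k)  # simple initialization
    for _ in range(iters):
        labels = np.argmin(np.abs(feats[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean()
    return labels.reshape(h, w)
```

Because each block is summarized by one number and blocks are independent, both the reduction and the assignment step parallelize naturally, which is what makes the CUDA version fast.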
Development of a Coded Aperture X-Ray Backscatter Imager for Explosive Device Detection
NASA Astrophysics Data System (ADS)
Faust, Anthony A.; Rothschild, Richard E.; Leblanc, Philippe; McFee, John Elton
2009-02-01
Defence R&D Canada has an active research and development program on detection of explosive devices using nuclear methods. One system under development is a coded aperture-based X-ray backscatter imaging detector designed to provide sufficient speed, contrast and spatial resolution to detect antipersonnel landmines and improvised explosive devices. The successful development of a hand-held imaging detector requires, among other things, a light-weight, ruggedized detector with low power requirements, supplying high spatial resolution. The University of California, San Diego-designed HEXIS detector provides a modern, large area, high-temperature CZT imaging surface, robustly packaged in a light-weight housing with sound mechanical properties. Based on the potential for the HEXIS detector to be incorporated as the detection element of a hand-held imaging detector, the authors initiated a collaborative effort to demonstrate the capability of a coded aperture-based X-ray backscatter imaging detector. This paper will discuss the landmine and IED detection problem and review the coded aperture technique. Results from initial proof-of-principle experiments will then be reported.
Galaxies Gather at Great Distances
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed: Distant Galaxy Cluster Infrared Survey poster; bird's eye view mosaics with clusters marked; close-up images of clusters at 9.1, 8.7 and 8.6 billion light-years.] Astronomers have discovered nearly 300 galaxy clusters and groups, including almost 100 located 8 to 10 billion light-years away, using the space-based Spitzer Space Telescope and the ground-based Mayall 4-meter telescope at Kitt Peak National Observatory in Tucson, Ariz. The new sample represents a six-fold increase in the number of known galaxy clusters and groups at such extreme distances, and will allow astronomers to systematically study massive galaxies two-thirds of the way back to the Big Bang. A mosaic portraying a bird's eye view of the field in which the distant clusters were found spans a region of sky 40 times larger than that covered by the full moon as seen from Earth. Thousands of individual images from Spitzer's infrared array camera instrument were stitched together to create this mosaic. The distant clusters are marked with orange dots. Close-up images of three of the distant galaxy clusters show each cluster as a concentration of red dots near the center of the image. These images reveal the galaxies as they were over 8 billion years ago, since that's how long their light took to reach Earth and Spitzer's infrared eyes. These pictures are false-color composites, combining ground-based optical images captured by the Mosaic-I camera on the Mayall 4-meter telescope at Kitt Peak, with infrared pictures taken by Spitzer's infrared array camera.
Blue and green represent visible light at wavelengths of 0.4 microns and 0.8 microns, respectively, while red indicates infrared light at 4.5 microns. Kitt Peak National Observatory is part of the National Optical Astronomy Observatory in Tucson, Ariz.
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty for a millimetre-scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
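The final reconstruction step above relies on the standard haze image formation model I = J·t + A·(1 − t): given the estimated transmission map t and atmospheric light A, the scene radiance J is recovered by inverting it. A minimal sketch follows; flooring t to avoid division blow-up in dense fog is a common practice, not a detail taken from this paper.

```python
import numpy as np

def recover_radiance(I, t, A, t_min=0.1):
    """Invert the haze model I = J*t + A*(1-t) to recover scene radiance J.
    I: HxWx3 foggy image, t: HxW transmission map, A: length-3 atmospheric light."""
    t = np.clip(t, t_min, 1.0)[..., None]  # floor t so dense-fog pixels stay bounded
    return (np.asarray(I, float) - A) / t + A
```

Iteratively refining t, as the paper proposes, directly improves this inversion, since errors in t are amplified wherever t is small.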
Light microscopy applications in systems biology: opportunities and challenges
2013-01-01
Biological systems present multiple scales of complexity, ranging from molecules to entire populations. Light microscopy is one of the least invasive techniques used to access information from various biological scales in living cells. The combination of molecular biology and imaging provides a bottom-up tool for direct insight into how molecular processes work on a cellular scale. However, imaging can also be used as a top-down approach to study the behavior of a system without detailed prior knowledge about its underlying molecular mechanisms. In this review, we highlight recent developments in microscopy-based systems analysis and discuss the complementary opportunities and distinct challenges of high-content screening and high-throughput imaging. Furthermore, we provide a comprehensive overview of the available platforms that can be used for image analysis, which enable community-driven efforts in the development of image-based systems biology. PMID:23578051
Full ocular biometry through dual-depth whole-eye optical coherence tomography
Kim, Hyung-Jin; Kim, Minji; Hyeon, Min Gyu; Choi, Youngwoon; Kim, Beop-Min
2018-01-01
We propose a new method of determining the optical axis (OA), pupillary axis (PA), and visual axis (VA) of the human eye by using dual-depth whole-eye optical coherence tomography (OCT). These axes, as well as the angles “α” between the OA and VA and “κ” between PA and VA, are important in many ophthalmologic applications, especially in refractive surgery. Whole-eye images are reconstructed based on simultaneously acquired images of the anterior segment and retina. The light from a light source is split into two orthogonal polarization components for imaging the anterior segment and retina, respectively. The OA and PA are identified based on their geometric definitions by using the anterior segment image only, while the VA is detected through accurate correlation between the two images. The feasibility of our approach was tested using a model eye and human subjects. PMID:29552378
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-01-01
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-07-08
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps. In addition, rear lamps are made of LEDs and halogen lamps. This paper builds on recent research in IHC. Some problems exist in the detection of headlights, such as erroneous detection of street lights, sign lights, and reflections from the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method on three types of lights (LED, HID, and halogen).
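The Spectral Angle Mapper mentioned above scores how closely a pixel's spectrum matches a reference lamp spectrum by the angle between the two spectra viewed as vectors: a small angle means a close match regardless of overall brightness. A minimal sketch of the standard SAM formula (not this paper's full pipeline):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle in radians between two spectra.
    Invariant to illumination scaling, since only the direction matters."""
    p = np.asarray(pixel, float)
    r = np.asarray(reference, float)
    cos = p.dot(r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error
```

This scale invariance is what lets SAM separate an LED headlight from a sodium street light even when both saturate a conventional intensity image.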
NASA Astrophysics Data System (ADS)
Kutulakos, Kyros N.; O'Toole, Matthew
2015-03-01
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physically Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, a traditional optical simulation approach can only present light energy distributions but typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.
Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification
NASA Astrophysics Data System (ADS)
Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan
2011-07-01
A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescent imaging and simple image processing techniques. Specifically, we apply simple image thresholding, blob filtering, and image subtracting processes to either a 545 nm or a 575 nm image in order to identify our desired Thai jasmine rice breed from others. Other key advantages include no waste product and fast identification time. In our demonstration, UVC light is used as the exciting light, a liquid crystal tunable optical filter is used as the wavelength selector, and a digital camera with 640 × 480 active pixels is used to capture the desired spectral image. Eight Thai rice breeds having similar size and shape are tested. Our experimental proof of concept shows that by suitably applying image thresholding, blob filtering, and image subtracting processes to the selected fluorescent image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at 545 and 575 nm wavelengths, respectively. The measured identification time is 25 ms, showing high potential for real-time applications.
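The thresholding and image-subtracting steps can be illustrated with a toy sketch: threshold each fluorescent band image, then subtract one binary mask from the other so that only pixels bright in the selected band survive. The threshold value and the band pairing here are illustrative assumptions, not the paper's calibrated parameters, and the blob-filtering step is omitted.

```python
import numpy as np

def band_difference_mask(img_a, img_b, thresh):
    """Binary mask of pixels above `thresh` in band A but not in band B —
    the image-thresholding and image-subtracting steps of the pipeline."""
    mask_a = np.asarray(img_a) > thresh
    mask_b = np.asarray(img_b) > thresh
    return mask_a & ~mask_b
```

Blob filtering would then discard small connected components of this mask before counting grains of the target breed.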
Seppänen, Tapio
2017-01-01
Fourier transform infrared (FTIR) microspectroscopy images contain information from the whole infrared spectrum used for microspectroscopic analyses. In combination with the FTIR image, visible light images are used to depict the area from which the FTIR spectral image was sampled. These two images are traditionally acquired as separate files. This paper proposes a histogram shifting-based data hiding technique to embed visible light images in FTIR spectral images, producing single entities. The primary objective is to improve data management efficiency. Secondary objectives are confidentiality, availability, and reliability. Since the integrity of biomedical data is vital, the proposed method applies reversible data hiding: after extraction of the embedded data, the FTIR image is reversed to its original state. Furthermore, the proposed method applies authentication tags generated with keyed Hash-Based Message Authentication Codes (HMAC) to detect tampered or corrupted areas of FTIR images. The experimental results show that the FTIR spectral images carrying the payload maintain good perceptual fidelity and the payload can be reliably recovered even after bit flipping or cropping attacks. It has also been shown that extraction successfully removes all modifications caused by the payload. Finally, authentication tags successfully indicated tampered FTIR image areas. PMID:29259987
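The histogram-shifting embedding the method builds on can be sketched for a grayscale image: pick the histogram's peak bin and an empty (zero) bin, shift every bin strictly between them by one to vacate the bin next to the peak, then encode each payload bit by leaving a peak pixel in place (bit 0) or moving it into the vacated bin (bit 1). Reversal undoes the shift exactly. This is the classic scheme in minimal form, assuming the zero bin is truly empty; it is a sketch, not the authors' FTIR-specific implementation.

```python
import numpy as np

def hs_embed(img, bits):
    """Reversible histogram-shifting embedding in a grayscale uint8 image.
    Returns the marked image and the (peak, zero) pair needed for reversal."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak, zero = int(hist.argmax()), int(hist.argmin())  # zero bin assumed empty
    out = img.astype(np.int32)
    flat = out.ravel()                                   # view into out
    idx = np.flatnonzero(flat == peak)[:len(bits)]       # carrier pixels
    if peak < zero:
        out[(out > peak) & (out < zero)] += 1            # vacate bin peak+1
        flat[idx] += np.asarray(bits, dtype=np.int32)
    else:
        out[(out < peak) & (out > zero)] -= 1            # vacate bin peak-1
        flat[idx] -= np.asarray(bits, dtype=np.int32)
    return out.astype(np.uint8), (peak, zero)

def hs_extract(marked, peak, zero):
    """Recover the payload bits and restore the original image exactly."""
    m = marked.astype(np.int32)
    flat = m.ravel()
    if peak < zero:
        carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))
        bits = (flat[carriers] == peak + 1).astype(int)
        m[(m > peak) & (m <= zero)] -= 1                 # undo the shift
    else:
        carriers = np.flatnonzero((flat == peak) | (flat == peak - 1))
        bits = (flat[carriers] == peak - 1).astype(int)
        m[(m < peak) & (m >= zero)] += 1
    return m.astype(np.uint8), bits
```

Capacity equals the peak-bin count, and the maximum pixel change is one gray level, which is why the marked image keeps good perceptual fidelity.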
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source which is in turn partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain leading to more efficient distribution. 14 figs.
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source which is in turn partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain leading to more efficient distribution.
A line scanned light-sheet microscope with phase shaped self-reconstructing beams.
Fahrbach, Florian O; Rohrbach, Alexander
2010-11-08
We recently demonstrated that Microscopy with Self-Reconstructing Beams (MISERB) increases both image quality and penetration depth of illumination beams in strongly scattering media. Based on the concept of line scanned light-sheet microscopy, we present an add-on module to a standard inverted microscope using a scanned beam that is shaped in phase and amplitude by a spatial light modulator. We explain technical details of the setup as well as of the holograms for the creation, positioning and scaling of static light-sheets, Gaussian beams and Bessel beams. The comparison of images from identical sample areas illuminated by different beams allows a precise assessment of the interconnection between beam shape and image quality. The superior propagation ability of Bessel beams through inhomogeneous media is demonstrated by measurements on various scattering media.
Contrasting trends in light pollution across Europe based on satellite observed night time lights.
Bennie, Jonathan; Davies, Thomas W; Duffy, James P; Inger, Richard; Gaston, Kevin J
2014-01-21
Since the 1970s nighttime satellite images of the Earth from space have provided a striking illustration of the extent of artificial light. Meanwhile, growing awareness of adverse impacts of artificial light at night on scientific astronomy, human health, ecological processes and aesthetic enjoyment of the night sky has led to recognition of light pollution as a significant global environmental issue. Links between economic activity, population growth and artificial light are well documented in rapidly developing regions. Applying a novel method to analysis of satellite images of European nighttime lights over 15 years, we show that while the continental trend is towards increasing brightness, some economically developed regions show more complex patterns with large areas decreasing in observed brightness over this period. This highlights that opportunities exist to constrain and even reduce the environmental impact of artificial light pollution while delivering cost and energy-saving benefits.
Imaging Spectrometer on a Chip
NASA Technical Reports Server (NTRS)
Wang, Yu; Pain, Bedabrata; Cunningham, Thomas; Zheng, Xinyu
2007-01-01
A proposed visible-light imaging spectrometer on a chip would be based on the concept of a heterostructure comprising multiple layers of silicon-based photodetectors interspersed with long-wavelength-pass optical filters. In a typical application, this heterostructure would be replicated in each pixel of an image-detecting integrated circuit of the active-pixel-sensor type (see figure). The design of the heterostructure would exploit the fact that within the visible portion of the spectrum, the characteristic depth of penetration of photons increases with wavelength. Proceeding from the front toward the back, each successive long-wavelength-pass filter would have a longer cutoff wavelength, and each successive photodetector would be made thicker to enable it to absorb a greater proportion of incident longer-wavelength photons. Incident light would pass through the first photodetector and encounter the first filter, which would reflect light having wavelengths shorter than its cutoff wavelength and pass light of longer wavelengths. A large portion of the incident and reflected shorter-wavelength light would be absorbed in the first photodetector. The light that had passed through the first photodetector/filter pair of layers would pass through the second photodetector and encounter the second filter, which would reflect light having wavelengths shorter than its cutoff wavelength while passing light of longer wavelengths. Thus, most of the light reflected by the second filter would lie in the wavelength band between the cutoff wavelengths of the first and second filters. Thus, further, most of the light absorbed in the second photodetector would lie in this wavelength band. In a similar manner, each successive photodetector would detect, predominantly, light in a successively longer wavelength band bounded by the shorter cutoff wavelength of the preceding filter and the longer cutoff wavelength of the following filter.
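The layer-by-layer absorption this design exploits follows Beer-Lambert attenuation: a layer of thickness z absorbs a fraction 1 − exp(−αz) of the light entering it, where the absorption coefficient α of silicon falls as wavelength grows. A small sketch of how a stack of layers splits the incident light (the α and thickness values below are illustrative, not device parameters):

```python
import math

def stacked_absorption(alpha, thicknesses):
    """Fraction of incident light absorbed in each layer of a photodetector
    stack, assuming simple Beer-Lambert attenuation exp(-alpha*z) per layer."""
    absorbed, remaining = [], 1.0
    for z in thicknesses:
        frac = remaining * (1.0 - math.exp(-alpha * z))
        absorbed.append(frac)
        remaining -= frac
    return absorbed
```

Because longer wavelengths (smaller α) leave more light for the deeper layers, making each successive layer thicker, as the design proposes, evens out the signal collected per band.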
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when measuring its electrical current. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room temperature operation, immunity to high power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a setup composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, scanning-based three-dimensional imaging previously prohibited real-time operation of such imaging systems. This is easily and economically solved using a GDD array, which will enable us to acquire information on distance and magnitude from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and measurement of the time of flight (TOF).
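For the FMCW option mentioned at the end, range follows from the beat frequency between the transmitted and received chirps: d = c·f_b·T / (2B), where B is the sweep bandwidth and T the sweep time. A minimal sketch of this generic FMCW relation (the parameter values are illustrative, not the authors' system parameters):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Target distance from an FMCW beat frequency: d = c * f_b * T / (2 * B)."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)
```

With a GDD array, one beat measurement per pixel yields distance and magnitude for the whole scene simultaneously instead of by scanning.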
Study on High Resolution Membrane-Based Diffractive Optical Imaging on Geostationary Orbit
NASA Astrophysics Data System (ADS)
Jiao, J.; Wang, B.; Wang, C.; Zhang, Y.; Jin, J.; Liu, Z.; Su, Y.; Ruan, N.
2017-05-01
Diffractive optical imaging technology provides a new way to realize high resolution earth observation from geostationary orbit. There are many benefits to using a membrane-based diffractive optical element in an ultra-large aperture optical imaging system, including loose tolerances, light weight, and easy folding and unfolding, which make it easier to realize high resolution earth observation from geostationary orbit. The implementation of this technology also faces some challenges, including the configuration of the diffractive primary lens, the development of high diffraction efficiency membrane-based diffractive optical elements, and the correction of the chromatic aberration of the diffractive optical elements. For the configuration of the diffractive primary lens, a "6+1" petal-type unfolding scheme is proposed, which considers the compression ratio, the blocking rate and the development complexity. For high diffraction efficiency membrane-based diffractive optical elements, a self-collimating method is proposed; the diffraction efficiency is more than 90% of the theoretical value. For the chromatic aberration correction problem, an optimization method based on the Schupmann configuration is proposed to make the imaging spectral bandwidth in the visible light band reach 100 nm. The above conclusions have reference significance for the development of ultra-large aperture diffractive optical imaging systems.
Design, implementation and investigation of an image guide-based optical flip-flop array
NASA Technical Reports Server (NTRS)
Griffith, P. C.
1987-01-01
Presented is the design for an image-guide-based optical flip-flop array created using a Hughes liquid crystal light valve and a flexible image guide in a feedback loop. This design is used to investigate the application of image guides as a communication mechanism in numerical optical computers. It is shown that image guides can be used successfully in this manner, but mismatch between the input and output fiber arrays is extremely limiting.
Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu
2008-01-01
Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescent images with high spatial and temporal resolutions. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled into homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both the high-speed imaging (up to 100 Hz) of intracellular Ca(2+) propagation, and the multi-color imaging of Ca(2+) and PKC-gamma dynamics in living cells.
Angiographic and structural imaging using high axial resolution fiber-based visible-light OCT
Pi, Shaohua; Camino, Acner; Zhang, Miao; Cepurna, William; Liu, Gangjun; Huang, David; Morrison, John; Jia, Yali
2017-01-01
Optical coherence tomography using visible-light sources can increase the axial resolution without the need for broader spectral bandwidth. Here, a high-resolution, fiber-based, visible-light optical coherence tomography system is built and used to image normal retina in rats and blood vessels in chicken embryo. In the rat retina, accurate segmentation of retinal layer boundaries and quantification of layer thicknesses are accomplished. Furthermore, three distinct capillary plexuses in the retina and the choriocapillaris are identified and the characteristic pattern of the nerve fiber layer thickness in rats is revealed. In the chicken embryo model, the microvascular network and a venous bifurcation are examined and the ability to identify and segment large vessel walls is demonstrated. PMID:29082087
Enriching text with images and colored light
NASA Astrophysics Data System (ADS)
Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon
2008-01-01
We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
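The histogram-based representative-color step and a KL-divergence comparison between color distributions, as described above, can be sketched as follows. The bin count and the synthetic pixel data are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def representative_color(pixels, bins=8):
    """Histogram-based dominant color: the centre of the fullest RGB bin.
    pixels: (N, 3) array of RGB values in [0, 256)."""
    hist, edges = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    idx = np.unravel_index(np.argmax(hist), hist.shape)
    return tuple((edges[d][i] + edges[d][i + 1]) / 2 for d, i in enumerate(idx))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two normalised color histograms (suitability cue)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Synthetic "image": 90% red-ish pixels, 10% green-ish pixels
pixels = np.vstack([np.tile([250, 10, 10], (90, 1)),
                    np.tile([10, 250, 10], (10, 1))]).astype(float)
color = representative_color(pixels)  # dominant bin centre, red-dominated
```

A low KL divergence between the color histograms of a term's images and a reference distribution would mark the term as a poor candidate for color extraction, in the spirit of the proposed suitability measure.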
NASA Astrophysics Data System (ADS)
Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu
2018-06-01
One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array, with each unit laser having an independent control circuit. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and a mathematical model of the system containing four equations is established. A simulated contrast experiment and two real contrast experiments, which use a simplified setup without a laser array, are performed. The final results demonstrate that, unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of the system is analyzed under defocus, and the analytical results show that the system has good defocus robustness and can be easily adjusted in real applications.
NASA Astrophysics Data System (ADS)
Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent
2014-10-01
Onera, the French aerospace lab, develops and models active imaging systems to understand the relevant physical phenomena affecting these systems' performance. As a consequence, efforts have focused on the propagation of a pulse through the atmosphere and on target geometries and surface properties. These imaging systems must operate at night under all ambient illumination and weather conditions in order to perform strategic surveillance for various worldwide operations. We have implemented codes for 2D and 3D laser imaging systems. As we aim to image a scene in the presence of rain, snow, fog or haze, we introduce such light-scattering effects into our numerical models and compare simulated images with measurements provided by commercial laser scanners.
Differential high-speed digital micromirror device based fluorescence speckle confocal microscopy.
Jiang, Shihong; Walker, John
2010-01-20
We report a differential fluorescence speckle confocal microscope that acquires an image in a fraction of a second by exploiting the very high frame rate of modern digital micromirror devices (DMDs). The DMD projects a sequence of predefined binary speckle patterns onto the sample and simultaneously modulates the intensity of the returning fluorescent light. The fluorescent light reflected from the DMD's "on" and "off" pixels is modulated by correlated speckle and anticorrelated speckle, respectively, forming two images on two CCD cameras in parallel. The sum of the two images recovers a widefield image, but their difference gives a near-confocal image in real time. Experimental results for both low and high numerical apertures are shown.
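The differential principle above, in which the sum of the two camera frames recovers a widefield image and their difference a near-confocal one, can be sketched with toy data. Real systems would also register the two cameras and balance their gains, which this sketch omits:

```python
import numpy as np

def widefield_and_confocal(img_on, img_off):
    """Combine the two CCD frames: sum -> widefield, difference -> near-confocal.
    img_on: frame modulated by correlated speckle ("on" pixels)
    img_off: frame modulated by anticorrelated speckle ("off" pixels)"""
    widefield = img_on + img_off
    confocal = img_on - img_off
    return widefield, confocal

# Toy frames: in-focus signal correlates with the speckle only in the "on" frame
img_on = np.array([[5.0, 1.0],
                   [1.0, 5.0]])
img_off = np.array([[1.0, 1.0],
                    [1.0, 1.0]])
wf, cf = widefield_and_confocal(img_on, img_off)
```

Out-of-focus light contributes nearly equally to both frames, so it survives in the sum but cancels in the difference, which is what gives the difference image its sectioning.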
Creation of High Efficient Firefly Luciferase
NASA Astrophysics Data System (ADS)
Nakatsu, Toru
Fireflies emit visible yellow-green light. The bioluminescence reaction is carried out by the enzyme luciferase. The bioluminescence of luciferase is widely used as an excellent tool for monitoring gene expression, for measuring the amount of ATP, and for in vivo imaging. Recently, studies of cancer metastasis have been carried out with in vivo luminescence imaging systems, because luminescence imaging is less toxic and more suitable for long-term assays than fluorescence imaging with GFP. However, luminescence is much dimmer than fluorescence. Bioluminescence imaging in living organisms therefore demands a highly efficient luciferase that emits near-infrared light or has enhanced emission intensity. Here I introduce an idea for creating such a highly efficient luciferase based on the crystal structure.
Rowe, Jason F.; Gaulme, Patrick; Hammel, Heidi B.; Casewell, Sarah L.; Fortney, Jonathan J.; Gizis, John E.; Lissauer, Jack J.; Morales-Juberias, Raul; Orton, Glenn S.; Wong, Michael H.; Marley, Mark S.
2017-01-01
Observations of Neptune with the Kepler Space Telescope yield a 49 day light curve with 98% coverage at a 1 minute cadence. A significant signature in the light curve comes from discrete cloud features. We compare results extracted from the light curve data with contemporaneous disk-resolved imaging of Neptune from the Keck 10-m telescope at 1.65 microns and Hubble Space Telescope visible imaging acquired nine months later. This direct comparison validates the feature latitudes assigned to the K2 light curve periods based on Neptune’s zonal wind profile, and confirms observed cloud feature variability. Although Neptune’s clouds vary in location and intensity on short and long timescales, a single large discrete storm seen in Keck imaging dominates the K2 and Hubble light curves; smaller or fainter clouds likely contribute to short-term brightness variability. The K2 Neptune light curve, in conjunction with our imaging data, provides context for the interpretation of current and future brown dwarf and extrasolar planet variability measurements. In particular we suggest that the balance between large, relatively stable, atmospheric features and smaller, more transient, clouds controls the character of substellar atmospheric variability. Atmospheres dominated by a few large spots may show inherently greater light curve stability than those which exhibit a greater number of smaller features. PMID:28127087
Chu, Jun; Oh, Young-Hee; Sens, Alex; Ataie, Niloufar; Dana, Hod; Macklin, John J.; Laviv, Tal; Welf, Erik S.; Dean, Kevin M.; Zhang, Feijie; Kim, Benjamin B.; Tang, Clement Tran; Hu, Michelle; Baird, Michelle A.; Davidson, Michael W.; Kay, Mark A.; Fiolka, Reto; Yasuda, Ryohei; Kim, Douglas S.; Ng, Ho-Leung; Lin, Michael Z.
2016-01-01
Orange-red fluorescent proteins (FPs) are widely used in biomedical research for multiplexed epifluorescence microscopy with GFP-based probes, but their different excitation requirements make multiplexing with new advanced microscopy methods difficult. Separately, orange-red FPs are useful for deep-tissue imaging in mammals due to the relative tissue transmissibility of orange-red light, but their dependence on illumination limits their sensitivity as reporters in deep tissues. Here we describe CyOFP1, a bright engineered orange-red FP that is excitable by cyan light. We show that CyOFP1 enables single-excitation multiplexed imaging with GFP-based probes in single-photon and two-photon microscopy, including time-lapse imaging in light-sheet systems. CyOFP1 also serves as an efficient acceptor for resonance energy transfer from the highly catalytic blue-emitting luciferase NanoLuc. An optimized fusion of CyOFP1 and NanoLuc, called Antares, functions as a highly sensitive bioluminescent reporter in vivo, producing substantially brighter signals from deep tissues than firefly luciferase and other bioluminescent proteins. PMID:27240196
Total variation based image deconvolution for extended depth-of-field microscopy images
NASA Astrophysics Data System (ADS)
Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.
2015-03-01
One approach for a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. The focal ellipsoid is thereby smeared out, and the raw images appear blurred. Image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images with an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits in just one plane within the object. We use non-linear Total Variation based image restoration techniques, in which the different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
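A minimal 1D illustration of total-variation-regularised deconvolution in the spirit described above, using plain gradient descent on a smoothed TV penalty. The blur kernel, regularisation weight, and step size are illustrative assumptions, not the algorithms evaluated in the paper:

```python
import numpy as np

def tv_deconvolve_1d(blurred, kernel, lam=0.05, step=0.1, iters=200, eps=1e-3):
    """Gradient descent on 0.5*||k*x - y||^2 + lam*TV(x), with TV smoothed as
    sum(sqrt(dx^2 + eps)) so the objective is differentiable everywhere."""
    x = blurred.copy()
    for _ in range(iters):
        # gradient of the data-fidelity term: k^T * (k*x - y)
        residual = np.convolve(x, kernel, mode="same") - blurred
        data_grad = np.convolve(residual, kernel[::-1], mode="same")
        # gradient of the smoothed TV term
        dx = np.diff(x, append=x[-1])
        tv_grad = -np.diff(dx / np.sqrt(dx ** 2 + eps), prepend=0.0)
        x = x - step * (data_grad + lam * tv_grad)
    return x

kernel = np.array([0.25, 0.5, 0.25])       # simple symmetric blur PSF
truth = np.zeros(32)
truth[12:20] = 1.0                         # piecewise-constant object
blurred = np.convolve(truth, kernel, mode="same")
restored = tv_deconvolve_1d(blurred, kernel)
```

TV regularisation favours piecewise-constant solutions, which is why it restores sharp edges without amplifying noise the way unregularised deconvolution does; 3D microscopy variants replace the 1D difference with spatial gradients.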
NASA Astrophysics Data System (ADS)
Chong, Shau Poh; Bernucci, Marcel T.; Borycki, Dawid; Radhakrishnan, Harsha; Srinivasan, Vivek J.
2017-02-01
Visible light is absorbed by intrinsic chromophores such as photopigment, melanin, and hemoglobin, and scattered by subcellular structures, all of which are potential retinal disease biomarkers. Recently, high-resolution quantitative measurement and mapping of hemoglobin concentrations was demonstrated using visible light Optical Coherence Tomography (OCT). Yet most high-resolution visible light OCT systems adopt free-space, or bulk, optical setups, which could limit clinical applications. Here, the construction of a multi-functional fiber-optic OCT system for human retinal imaging with <2.5 micron axial resolution is described. A detailed noise characterization of two supercontinuum light sources with differing pulse repetition rates is presented. The higher-repetition-rate, lower-noise source is found to enable a sensitivity of 87 dB with 0.1 mW incident power at the cornea and a 98 microsecond exposure time. Using a broadband, asymmetric, fused single-mode fiber coupler designed for visible wavelengths, the sample arm is integrated into an ophthalmoscope platform, rendering it portable and suitable for clinical use. In vivo anatomical, Doppler, and spectroscopic imaging of the human retina is further demonstrated using a single oversampled B-scan. For spectroscopic fitting of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb) content in the retinal vessels, a noise-bias-corrected absorbance spectrum is estimated using a sliding short-time Fourier transform of the complex OCT signal and fit using a model of light absorption and scattering. This yielded path length (L) times molar concentration, L·C_HbO2 and L·C_Hb. Based on these results, we conclude that high-resolution visible light OCT has potential for depth-resolved functional imaging of the eye.
An overview of methods to mitigate artifacts in optical coherence tomography imaging of the skin.
Adabi, Saba; Fotouhi, Audrey; Xu, Qiuyun; Daveluy, Steve; Mehregan, Darius; Podoleanu, Adrian; Nasiriavanaki, Mohammadreza
2018-05-01
Optical coherence tomography (OCT) of skin delivers three-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution modality, OCT images suffer from artifacts that lead to misinterpretation of tissue structures. Therefore, an overview of methods to mitigate artifacts in OCT imaging of the skin is of paramount importance. Speckle, intensity decay, and blurring are three major artifacts in OCT images. Speckle is due to the low-coherence light source used in the configuration of OCT. Intensity decay is the attenuation of light with depth, and blurring is the consequence of deficiencies in optical components. Two speckle reduction methods (one based on an artificial neural network and one based on spatial compounding), an attenuation compensation algorithm (based on the Beer-Lambert law), and a deblurring procedure (using deconvolution) are described. Moreover, an optical-properties extraction algorithm based on the extended Huygens-Fresnel (EHF) principle, which obtains additional information from OCT images, is discussed. In this short overview, we summarize some of the image enhancement algorithms for OCT images which address the above-mentioned artifacts. The results showed a significant improvement in the visibility of the clinically relevant features in the images. The quality improvement was evaluated using several numerical assessment measures. Clinical dermatologists benefit from using these image enhancement algorithms to improve OCT diagnosis, letting OCT essentially function as a noninvasive optical biopsy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
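The Beer-Lambert attenuation compensation mentioned above can be sketched per A-scan. The attenuation coefficient here is an assumed constant, whereas practical algorithms estimate it from the data; the factor 2 accounts for the round trip of light in OCT:

```python
import numpy as np

def compensate_attenuation(a_scan, mu, dz):
    """Undo exponential depth decay I(z) = I0 * exp(-2*mu*z) along one A-scan.
    mu: assumed attenuation coefficient (per mm); dz: depth per pixel (mm)."""
    z = np.arange(a_scan.size) * dz
    return a_scan * np.exp(2.0 * mu * z)

mu = 1.5           # mm^-1, illustrative tissue-like value
dz = 0.005         # mm per pixel
true_reflectivity = np.ones(200)
measured = true_reflectivity * np.exp(-2.0 * mu * np.arange(200) * dz)
recovered = compensate_attenuation(measured, mu, dz)
```

On a uniformly reflective phantom the compensated A-scan is flat; on real tissue, depth-dependent or layer-wise estimates of mu are needed to avoid over- or under-correcting.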
Lim, Jun; Park, So Yeong; Huang, Jung Yun; Han, Sung Mi; Kim, Hong-Tae
2013-01-01
We developed an off-axis-illuminated zone-plate-based hard x-ray Zernike phase-contrast microscope beamline at Pohang Light Source. Owing to condenser optics-free and off-axis illumination, a large field of view was achieved. The pinhole-type Zernike phase plate affords high-contrast images of a cell with minimal artifacts such as the shade-off and halo effects. The setup, including the optics and the alignment, is simple and easy, and allows faster and easier imaging of large bio-samples.
From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal
2016-02-01
X-ray phase and dark-field imaging techniques provide complementary information that is inaccessible to conventional X-ray absorption or visible-light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase contrast, and two orthogonal differential phase contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.
Electro-holographic display using a ZBLAN glass as the image space.
Son, Jung-Young; Lee, Hyoung; Byeon, Jina; Zhao, Jiangbo; Ebendorff-Heidepriem, Heike
2017-04-01
An Er3+-doped ZBLAN glass is used to display a 360°-viewable reconstructed image from a hologram on a DMD. When the hologram is illuminated by an 852 nm wavelength laser beam, the reconstructed image is situated inside the glass; a 1530 nm wavelength laser beam is then crossed through the image to light it with upconverted green light, which is viewable from all surrounding directions. This eliminates the limitation on the viewing-zone angle imposed by the finite pixel size in electro-holographic displays based on digital display chips/panels. The amount of green light is much higher than previously reported, partly because the upconversion luminescence is induced jointly by the 852 and 1530 nm laser beams.
Quantitative phase imaging of retinal cells (Conference Presentation)
NASA Astrophysics Data System (ADS)
LaForest, Timothé; Carpentras, Dino; Kowalczuk, Laura; Behar-Cohen, Francine; Moser, Christophe
2017-02-01
The vision process is ruled by the several cell layers of the retina. Before reaching the photoreceptors, light entering the eye has to pass through a layer of ganglion and neuronal cells a few hundred micrometers thick. Macular degeneration is a non-curable disease of the macula occurring with age. This disease can be diagnosed at an early stage by imaging neuronal cells in the retina and observing their death chronically. These cells are phase objects located on a background that presents an absorption pattern, and so they are difficult to see with standard imaging techniques in vivo. Phase imaging methods usually need the illumination system to be on the opposite side of the sample with respect to the imaging system. This is a constraint and a challenge for phase imaging in vivo. Recently, the possibility of performing phase-contrast imaging from one side using the properties of scattering media has been shown; this phase contrast is based on the back-illumination generated by the sample itself. Here, we present a reflection phase imaging technique based on oblique back-illumination. The oblique back-illumination creates a dark-field image of the sample. Generating asymmetric oblique illumination allows a differential phase contrast image to be obtained, which in turn can be processed to recover a quantitative phase image. In the case of the eye, transscleral illumination can generate oblique incident light on the retina and the choroidal layer. The back-reflected light is then collected by the eye lens to produce a dark-field image. We show experimental results of retinal phase images in ex vivo samples of human and pig retina.
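Differential phase contrast from a pair of oppositely oblique illuminations is conventionally computed as the normalised difference of the two frames. A sketch with toy data (not the authors' processing chain, which additionally recovers quantitative phase from the DPC image):

```python
import numpy as np

def differential_phase_contrast(i_left, i_right, eps=1e-9):
    """DPC image from two frames taken with opposite oblique illumination:
    (I_L - I_R) / (I_L + I_R). eps guards against division by zero."""
    return (i_left - i_right) / (i_left + i_right + eps)

# Toy frames: phase gradients tilt light toward one illumination direction
i_left = np.array([[2.0, 1.0],
                   [1.0, 1.0]])
i_right = np.array([[1.0, 1.0],
                    [1.0, 3.0]])
dpc = differential_phase_contrast(i_left, i_right)
```

The normalisation cancels the common absorption background, leaving a signal whose sign follows the local phase gradient along the illumination axis; quantitative phase is then recovered by deconvolving the DPC image with the system's phase transfer function.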
NASA Technical Reports Server (NTRS)
Rosecrance, Richard C.; Johnson, Lee; Soderstrom, Dominic
2016-01-01
Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.
NASA Astrophysics Data System (ADS)
Rosecrance, R. C.; Johnson, L.; Soderstrom, D.
2016-12-01
Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.
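The NDVI and a linear NDVI-to-Fc scaling of the kind referred to above can be sketched as follows. The soil and full-canopy endpoints are illustrative assumptions, not SIMS calibration values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def fractional_cover(ndvi_val, ndvi_soil=0.15, ndvi_full=0.85):
    """Linear scaling of NDVI to fractional green canopy cover (Fc), clipped
    to [0, 1]. Endpoint values are hypothetical, not SIMS parameters."""
    fc = (ndvi_val - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fc, 0.0, 1.0)

# Two illustrative pixels: dense canopy vs. sparse cover
nir = np.array([0.45, 0.30])
red = np.array([0.05, 0.20])
v = ndvi(nir, red)        # per-pixel NDVI
fc = fractional_cover(v)  # per-pixel canopy cover estimate
```

Verification against light-bar or image-based cover measurements, as done in this study, is what anchors the two endpoints of such a scaling for a given orchard type.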
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measuring method based on differential confocal focusing theory. To improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by the light spot moving while the axial intensity signal is collected, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the light-spot tracking method can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system's positioning accuracy.
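The processing chain described above (centroid tracking of the Airy disk, Gaussian filtering of the axial signal, linear fitting to the zero crossing) can be sketched on a synthetic axial response. The filter width, fitting window, and synthetic tanh-shaped response are illustrative choices, not the paper's parameters:

```python
import numpy as np

def airy_centroid(image):
    """Intensity-weighted centroid of the spot image (tracks the Airy disk)."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (ys * image).sum() / total, (xs * image).sum() / total

def differential_zero(z, signal, window=5):
    """Zero crossing of the differential confocal axial response:
    Gaussian-smooth the signal, then fit a line around the crossing."""
    # simple Gaussian smoothing kernel (sigma of 2 samples is an assumption)
    k = np.exp(-0.5 * (np.arange(-window, window + 1) / 2.0) ** 2)
    smoothed = np.convolve(signal, k / k.sum(), mode="same")
    i = int(np.argmin(np.abs(smoothed)))      # sample nearest the crossing
    sl = slice(max(i - 3, 0), i + 4)
    a, b = np.polyfit(z[sl], smoothed[sl], 1)
    return -b / a                             # axial position where the fit = 0

z = np.linspace(-1.0, 1.0, 201)               # axial scan positions
signal = np.tanh(3.0 * (z - 0.1))             # synthetic differential response
focus = differential_zero(z, signal)          # estimated focus position
```

The linear fit interpolates the zero crossing to sub-sample precision, which is the source of the positioning accuracy the method claims.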
Digital Light Processing update: status and future applications
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1999-05-01
Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high-process-speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved with closed-circuit television (CCTV) and surveillance cameras by combining the extended near-infrared (NIR) response (800-10000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field-programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.
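The trade-off noted above, where a longer integration time improves signal-to-noise ratio at the cost of motion blur, follows from a simple shot-noise model. The photon rate and read-noise figures below are illustrative, not PHOTONIS specifications:

```python
import numpy as np

def snr(photon_rate, t_int, read_noise_e):
    """SNR for a pixel collecting photon_rate * t_int signal electrons against
    Poisson shot noise plus Gaussian read noise (simple model)."""
    signal = photon_rate * t_int
    return signal / np.sqrt(signal + read_noise_e ** 2)

# Quadrupling integration time doubles SNR in the shot-noise limit,
# which is exactly the gain traded against temporal resolution.
snr_short = snr(photon_rate=1000.0, t_int=1.0, read_noise_e=0.0)
snr_long = snr(photon_rate=1000.0, t_int=4.0, read_noise_e=0.0)
```

Lowering the read noise term, rather than lengthening t_int, is the route that preserves standard video frame rates, which is the design direction the abstract describes.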
Volumetric bioimaging based on light field microscopy with temporal focusing illumination
NASA Astrophysics Data System (ADS)
Hsu, Feng-Chun; Sie, Yong Da; Lai, Feng-Jie; Chen, Shean-Jen
2018-02-01
The light field technique can capture the whole volume image of an observed sample in a single shot; the native frame rate of the optical system can therefore be taken as the volumetric imaging rate. For dynamic imaging of whole micron-scale biosamples, a light field microscope with temporal focusing illumination has been developed. In the light field microscope, the f-number of the microlens array (MLA) is matched to that of the objective so that the subimages formed by adjacent lenslets do not overlap. A three-dimensional (3D) deconvolution algorithm is utilized to deblur the out-of-focus parts. Conventional light field microscopy (LFM) illuminates the whole sample volume, including uninteresting parts; such whole-volume excitation causes more damage to the biosample and also increases the background noise from out-of-range regions. Therefore, temporal focusing is integrated into the light field microscope to select the illumination volume. Herein, a slit at the back focal plane of the objective is utilized to control the axial excitation confinement and thereby select the illumination volume. As a result, the developed light field microscope with temporal focusing multiphoton illumination (TFMPI) can reconstruct 3D images within the selected volume, with a lateral resolution approaching the theoretical value. Furthermore, the 3D Brownian motion of two-micron fluorescent beads is observed as a benchmark dynamic sample. With its superior signal-to-noise ratio and reduced tissue damage, the microscope has the potential to provide volumetric imaging of in vivo samples.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin
2007-01-01
The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing of the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
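The centroid-based gaze computation in step (4) can be sketched as follows. The thresholds, function names, and synthetic frame are illustrative assumptions, not values from the described system:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the True pixels in a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def gaze_offset(frame, pupil_thresh=60, glint_thresh=220):
    """Pupil-to-glint vector from one IR frame.

    Hypothetical thresholds: under IR illumination the pupil images dark
    while the corneal reflection (glint) is a small bright spot.
    """
    pupil = centroid(frame < pupil_thresh)
    glint = centroid(frame >= glint_thresh)
    # This difference vector varies (roughly monotonically) with gaze
    # angle and is mapped to screen coordinates by per-user calibration.
    return (pupil[0] - glint[0], pupil[1] - glint[1])

# Synthetic frame: grey background, dark pupil, bright off-center glint.
frame = np.full((120, 160), 128, dtype=np.uint8)
frame[50:70, 60:80] = 20      # pupil region
frame[54:58, 64:68] = 250     # glint inside the pupil region
dy, dx = gaze_offset(frame)
```

The offset vector alone is not a gaze direction; a short per-user calibration (fixating known screen points) supplies the mapping.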
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Valenti, M.
2009-12-01
Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. 
Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process involves using a point-spread function for each filter that helps determine the amount of scattered light expected for a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light for a particular image taken through a particular filter, the Fourier transform of the attenuation function, which is the point spread function for that filter, is convolved with the Fourier transform of the image at the same wavelength. The result is then filtered for noise in the frequency domain, and then transformed back to the spatial domain. This results in a version of the original image that would have been taken without the scattered light contribution. We will report on our initial results from this calibration.
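A minimal sketch of this kind of frequency-domain scattered-light correction, using a regularized (Wiener-style) inverse filter rather than the actual SSI calibration pipeline; the Gaussian scatter tail and the 10% scattered-light fraction are illustrative assumptions:

```python
import numpy as np

def _otf(psf, shape):
    """Zero-pad a small centered PSF to `shape` and return its circular
    transfer function (PSF center moved to the array origin)."""
    pad = np.zeros(shape)
    r, c = psf.shape
    pad[:r, :c] = psf
    pad = np.roll(pad, (-(r // 2), -(c // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def remove_scattered_light(image, scatter_psf, noise_floor=1e-3):
    """Regularized inverse filter removing a scattered-light component.

    Simplified sketch of the frequency-domain idea, not the Galileo SSI
    pipeline: the measured image is modeled as the true scene convolved
    with (delta + scatter_psf), and the scene is recovered by
    Wiener-style inverse filtering.
    """
    H = 1.0 + _otf(scatter_psf, image.shape)  # direct beam + scattered tail
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_floor)
    return np.real(np.fft.ifft2(F))

# Demo: a bright square acquires a faint wide halo, then is corrected.
scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0
yy, xx = np.mgrid[-3:4, -3:4]
scatter_psf = np.exp(-(xx**2 + yy**2) / 4.0)
scatter_psf *= 0.1 / scatter_psf.sum()        # tail carries 10% of the light
measured = np.real(np.fft.ifft2(np.fft.fft2(scene)
                                * (1.0 + _otf(scatter_psf, scene.shape))))
recovered = remove_scattered_light(measured, scatter_psf)
```

The `noise_floor` term plays the role of the noise filtering in the frequency domain mentioned above: it prevents noise amplification at frequencies where the transfer function is small.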
NASA Astrophysics Data System (ADS)
Davies, N.; Davies-Shaw, D.; Shaw, J. D.
2007-02-01
We report firsthand on innovative developments in non-invasive, biophotonic techniques for a wide range of diagnostic, imaging and treatment options, including the recognition and quantification of cancerous and pre-cancerous cells and chronic inflammatory conditions. These techniques have benefited from the ability to target the affected site with both monochromatic light and broad multiple-wavelength spectra. The employment of such wavelength- or color-specific properties embraces the fluorescence stimulation of various photosensitizing drugs, and the instigation and detection of identified fluorescence signatures attendant upon laser-induced fluorescence (LIF) phenomena as transmitted and propagated by pre-cancerous, cancerous and normal tissue. In terms of tumor imaging and therapeutic and treatment options, we have exploited the abilities of various wavelengths to penetrate to different depths through different types of tissues, and have explored quantifiable absorption and reflection characteristics upon which diagnostic assumptions can be reliably based and formulated. These biophotonic-based diagnostic, sensing and imaging techniques have also benefited from, and have been further enhanced by, the integrated ability to provide various power levels at various stages in the procedure. Applications are myriad, including non-invasive, non-destructive diagnosis of in vivo cell characteristics and functions; light-based tissue analysis; real-time monitoring and mapping of brain function and of tumor growth; real-time monitoring of the surgical completeness of tumor removal during laser-imaged/guided brain resection; diagnostic procedures based on fluorescence lifetime monitoring; the monitoring of chronic inflammatory conditions (including rheumatoid arthritis); and continuous blood glucose monitoring in the control of diabetes.
Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles
NASA Astrophysics Data System (ADS)
Kazemzadeh, Farnoud; Wong, Alexander
2016-12-01
Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.
Invalid-point removal based on epipolar constraint in the structured-light method
NASA Astrophysics Data System (ADS)
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, many invalid points unavoidably arise from shadows, image noise and ambient light. Because the retrieved phase of an invalid point is inaccurate, its corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated; it is then used to compute the epipolar lines. Then, according to the retrieved phase map of the captured fringes, the PIC of each pixel is retrieved. Subsequently, the epipolar line in the projector image plane for each pixel is obtained using the fundamental matrix. The distance between a pixel's PIC and its epipolar line is defined as the invalidation criterion, which quantifies the degree to which the epipolar constraint is satisfied. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art methods.
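The distance criterion can be sketched as follows. The function names, the 2-pixel threshold, and the rectified-geometry fundamental matrix used in the demo are illustrative assumptions, not values from the paper:

```python
import numpy as np

def epipolar_distance(F, cam_pt, proj_pt):
    """Distance (pixels) from a projector image coordinate (PIC) to the
    epipolar line induced by the corresponding camera pixel.

    F : 3x3 fundamental matrix (camera -> projector) from calibration.
    """
    x = np.array([cam_pt[0], cam_pt[1], 1.0])
    a, b, c = F @ x                  # epipolar line a*u + b*v + c = 0
    u, v = proj_pt
    return abs(a * u + b * v + c) / np.hypot(a, b)

def valid_mask(F, cam_pts, proj_pts, threshold=2.0):
    """Keep points whose PIC lies within `threshold` pixels of its
    epipolar line; the rest are invalid (shadows, noise, ambient light)."""
    d = np.array([epipolar_distance(F, cp, pp)
                  for cp, pp in zip(cam_pts, proj_pts)])
    return d <= threshold

# Demo with the canonical fundamental matrix of a rectified (row-aligned)
# camera-projector pair, for which epipolar lines are horizontal rows.
F_rect = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0, 0.0]])
ok = valid_mask(F_rect, [(10, 5), (10, 5)], [(30, 5), (30, 9)])
```

In the demo, a PIC on the same row as its camera pixel passes, while one displaced by 4 rows is flagged invalid.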
Wu, L C; D'Amelio, F; Fox, R A; Polyakov, I; Daunton, N G
1997-06-06
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
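A sketch of the pipeline described above (frequency-domain filtering, combination with the original image, thresholding at 128, and pixel counting). The cutoff frequency and the synthetic test image are illustrative assumptions, not values from the study:

```python
import numpy as np

def terminal_area(img, cutoff=0.15, thresh=128):
    """Pixel-count estimate of immunoreactive terminal area.

    Sketch of the described pipeline (parameters are illustrative):
    1. high-pass filter in the frequency domain to enhance small,
       high-contrast terminals,
    2. combine the filtered image with the original (the 'AND'ing step)
       so only structures dark in both images survive,
    3. threshold at 128 and count foreground pixels.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    f[r < cutoff] = 0                     # suppress low frequencies
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    filtered = np.clip(filtered + img.mean(), 0, 255)
    # Dark terminals must be dark in BOTH images to count as foreground.
    binary = (img < thresh) & (filtered < thresh)
    return int(binary.sum())

# Demo: white background with four small dark "terminals".
blank = np.full((64, 64), 255.0)
sample = blank.copy()
for y, x in [(10, 10), (20, 40), (45, 15), (50, 50)]:
    sample[y:y+3, x:x+3] = 30.0
```

Counting pixels in the resulting binary image sidesteps the labeling-intensity and terminal-shape issues noted above, since every surviving pixel contributes equally to the area estimate.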
Cheng, Victor S; Bai, Jinfen; Chen, Yazhu
2009-11-01
As the needs for various kinds of body surface information are wide-ranging, we developed an imaging-sensor integrated system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To avoid disturbing the subject with intense light projected from the LCD projector directly into the eye, we developed a gray-encoding strategy based on an optimized fringe projection layout. A self-heated checkerboard is employed to calibrate the different types of cameras. We then calibrate the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To support medical applications, a region of interest (ROI) in the temperature or color image representing a surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is useful for certain disease diagnoses.
NASA Astrophysics Data System (ADS)
Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.
A scheme of optical image encryption with digital information input and a dynamic encryption key, based on two liquid-crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20 to 0.27 is achieved.
NASA Technical Reports Server (NTRS)
2001-01-01
The ground-based visible-light image locates the hub imaged with the Hubble Space Telescope. This barred galaxy feeds material into its hub, igniting star birth. The Hubble NICMOS instrument penetrates beneath the dust to reveal clusters of young stars. Footage shows ground-based, WFPC2, and NICMOS images of NGC 1365. An animation of a large spiral galaxy zooms from the edge to the galactic bulge.
System and Method for Null-Lens Wavefront Sensing
NASA Technical Reports Server (NTRS)
Hill, Peter C. (Inventor); Thompson, Patrick L. (Inventor); Aronstein, David L. (Inventor); Bolcar, Matthew R. (Inventor); Smith, Jeffrey S. (Inventor)
2015-01-01
A method of measuring aberrations in a null-lens, including assembly and alignment aberrations. The null-lens may be used for measuring aberrations in an aspheric optic. Light propagates from the aspheric optic location through the null-lens while a detector is swept through the null-lens focal plane, and image data are collected at locations about said focal plane. Light propagation to the collection locations is simulated for each collected image. Null-lens aberrations may be extracted, e.g., by applying image-based wavefront sensing to the collected images and simulation results. Accounting for the null-lens aberrations improves accuracy in measuring the aspheric optic's aberrations.
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve high dynamic range video with stop motion. This technology improves low-light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a 1-megapixel test chip array built in Taiwan Semiconductor Manufacturing Company's 65 nm, 1.1 μm pixel technology, and is compared with a traditional 4× oversampling technique using full charge transfer to show the low-light SNR superiority of the presented technology.
Color appearance for photorealistic image synthesis
NASA Astrophysics Data System (ADS)
Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio
2000-12-01
Photorealistic image synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction, which allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best way to compress the extended dynamic range of the computed light field into the limited range of displayable colors. Recently some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis for solving the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.
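As a minimal illustration of the tone reproduction problem, the following global operator compresses an extended dynamic range into [0, 1). It is a generic log-average sketch (in the spirit of Reinhard's global operator), not the Retinex model used in the paper, and the `key` value is an illustrative assumption:

```python
import numpy as np

def tonemap_log(luminance, key=0.18):
    """Global tone-mapping sketch: map the scene's log-average luminance
    to a mid-grey `key` value, then compress with s/(1+s)."""
    eps = 1e-6                                  # guard against log(0)
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg          # anchor the log-average
    return scaled / (1.0 + scaled)              # compress into [0, 1)

# Demo: six decades of dynamic range mapped into the displayable range.
lum = np.array([0.01, 1.0, 100.0, 10000.0])
out = tonemap_log(lum)
```

The operator is monotonic, so relative brightness ordering survives the compression; perception-based models such as Retinex go further by adapting locally rather than globally.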
Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging
NASA Astrophysics Data System (ADS)
Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping
2013-05-01
Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.
Image Reconstruction from Data Collected with an Imaging Interferometer
NASA Astrophysics Data System (ADS)
DeSantis, Z. J.; Thurman, S. T.; Hix, T. T.; Ogden, C. E.
The intensity distribution of an incoherent source and the spatial coherence function at some distance away are related by a Fourier transform, via the Van Cittert-Zernike theorem. Imaging interferometers measure the spatial coherence of light propagated from the incoherently illuminated object by combining light from spatially separated points to measure interference fringes. The contrast and phase of the fringe are the amplitude and phase of a Fourier component of the source’s intensity distribution. The Fiber-Coupled Interferometer (FCI) testbed is a visible light, lab-based imaging interferometer designed to test aspects of an envisioned ground-based interferometer for imaging geosynchronous satellites. The front half of the FCI testbed consists of the scene projection optics, which includes an incoherently backlit scene, located at the focus of a 1 m aperture f/100 telescope. The projected light was collected by the back half of the FCI testbed. The collection optics consisted of three 11 mm aperture fiber-coupled telescopes. Light in the fibers was combined pairwise and dispersed onto a sensor to measure the interference fringe as a function of wavelength, which produces a radial spoke of measurements in the Fourier domain. The visibility function was sampled throughout the Fourier domain by recording fringe data at many different scene rotations and collection telescope separations. Our image reconstruction algorithm successfully produced images for the three scenes we tested: asymmetric pair of pinholes, U.S. Air Force resolution bar target, and satellite scene. The bar target reconstruction shows detail and resolution near the predicted resolution limit. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as reflecting the official views or policies of the Department of Defense or the U.S. Government.
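The Fourier relation above can be illustrated with a toy reconstruction: each measured visibility is one complex Fourier coefficient of the scene, and with sufficient coverage an inverse transform recovers the intensity distribution. The gridding function and the pinhole-pair scene are illustrative sketches, not the FCI reconstruction algorithm:

```python
import numpy as np

def dirty_image(shape, uv_samples, visibilities):
    """Inverse-Fourier ('dirty') image from measured visibilities.

    Van Cittert-Zernike sketch: each fringe measurement is one complex
    Fourier coefficient of the source's intensity distribution, placed
    on a grid and inverse-transformed.
    """
    grid = np.zeros(shape, dtype=complex)
    h, w = shape
    for (u, v), vis in zip(uv_samples, visibilities):
        grid[u % h, v % w] = vis
        # Hermitian counterpart: the intensity distribution is real.
        grid[-u % h, -v % w] = np.conj(vis)
    return np.real(np.fft.ifft2(grid))

# Demo: an asymmetric pair of "pinholes" (cf. the first test scene).
scene = np.zeros((16, 16))
scene[4, 4] = 1.0
scene[10, 12] = 0.5
vis_full = np.fft.fft2(scene)
uv = [(u, v) for u in range(16) for v in range(16)]
samples = [vis_full[u, v] for u, v in uv]
reconstructed = dirty_image((16, 16), uv, samples)
```

With full Fourier coverage the reconstruction is exact; the sparse radial-spoke sampling described above instead yields a dirty image that must be deconvolved or fit by a reconstruction algorithm.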
Volumetric Light-field Encryption at the Microscopic Scale
Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu
2017-01-01
We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale. PMID:28059149
Large-area, flexible imaging arrays constructed by light-charge organic memories
Zhang, Lei; Wu, Ti; Guo, Yunlong; Zhao, Yan; Sun, Xiangnan; Wen, Yugeng; Yu, Gui; Liu, Yunqi
2013-01-01
Existing organic imaging circuits, which offer attractive benefits of light weight, low cost and flexibility, are exclusively based on phototransistor or photodiode arrays. One shortcoming of these photo-sensors is that the light signal must remain invariant throughout the whole pixel-addressing and reading process. As a feasible solution, we synthesized a new charge-storage molecule and embedded it into a device, which we call a light-charge organic memory (LCOM). In an LCOM, the functionalities of a photo-sensor and a non-volatile memory are integrated. Thanks to deliberate engineering of the electronic structure and the self-organization process at the interface, 92% of the stored charges, which are linearly controlled by the quantity of light, are retained after 20,000 s. The stored charges can also be non-destructively read and erased by a simple voltage program. These results pave the way to large-area, flexible imaging circuits and demonstrate a bright future for small-molecule materials in non-volatile memory. PMID:23326636
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
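The evaluation protocol above (bias and standard deviation estimated from repeated reconstructions, combined into image MSE) can be sketched as follows; the offset and noise level in the demo are illustrative assumptions:

```python
import numpy as np

def image_error_stats(reconstructions, truth):
    """Bias map, standard-deviation map, and overall image MSE from
    repeated reconstructions of one test image with the measurement
    noise redrawn on each repetition."""
    stack = np.asarray(reconstructions)        # (n_repeats, H, W)
    bias = stack.mean(axis=0) - truth
    std = stack.std(axis=0)
    mse = bias ** 2 + std ** 2                 # pointwise: MSE = bias^2 + var
    return bias, std, float(mse.mean())

# Demo: 100 noisy "reconstructions" with a systematic offset of 0.1
# (bias term) and noise of sigma 0.2 (variance term).
rng = np.random.default_rng(42)
truth = np.ones((32, 32))
recons = [truth + 0.1 + rng.normal(0.0, 0.2, truth.shape)
          for _ in range(100)]
bias, std, mse = image_error_stats(recons, truth)
```

The decomposition makes the trade-off described above explicit: heavy regularization inflates the bias term, while approaching the unregularized solution inflates the variance term.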
Portable wide-field hand-held NIR scanner
NASA Astrophysics Data System (ADS)
Jung, Young-Jin; Roman, Manuela; Carrasquilla, Jennifer; Erickson, Sarah J.; Godavarty, Anuradha
2013-03-01
Near-infrared (NIR) optical imaging modality is one of the widely used medical imaging techniques for breast cancer imaging, functional brain mapping, and many other applications. However, conventional NIR imaging systems are bulky and expensive, thereby limiting their accelerated clinical translation. Herein a new compact (6 × 7 × 12 cm3), cost-effective, and wide-field NIR scanner has been developed towards contact as well as no-contact based real-time imaging in both reflectance and transmission mode. The scanner mainly consists of an NIR source light (between 700- 900 nm), an NIR sensitive CCD camera, and a custom-developed image acquisition and processing software to image an area of 12 cm2. Phantom experiments have been conducted to estimate the feasibility of diffuse optical imaging by using Indian-Ink as absorption-based contrast agents. As a result, the developed NIR system measured the light intensity change in absorption-contrasted target up to 4 cm depth under transillumination mode. Preliminary in-vivo studies demonstrated the feasibility of real-time monitoring of blood flow changes. Currently, extensive in-vivo studies are carried out using the ultra-portable NIR scanner in order to assess the potential of the imager towards breast imaging..
Model-based restoration using light vein for range-gated imaging systems.
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen
2016-09-10
The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method effectively suppresses ringing artifacts and achieves better performance on range-gated imagery.
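A sketch of combining static and dynamic PSFs and then iterating: Richardson-Lucy is used here as a stand-in for the paper's iterative updating framework (not its exact algorithm), and all kernels in the demo are illustrative assumptions:

```python
import numpy as np

def fft_conv(a, b):
    """Circular convolution of two same-shaped arrays via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def restore(image, psf_static, psf_dynamic, n_iter=50):
    """Iterative restoration with the combined static+dynamic PSF.
    The combined blur is the convolution of the two PSFs."""
    psf = fft_conv(psf_static, psf_dynamic)
    psf /= psf.sum()
    # Circularly flipped PSF (correlation kernel) for the RL update.
    psf_mirror = np.roll(psf[::-1, ::-1], 1, axis=(0, 1))
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fft_conv(est, psf)
        ratio = image / np.maximum(blurred, 1e-8)
        est = est * fft_conv(ratio, psf_mirror)
    return est

def centered_gauss(shape, sigma):
    """Gaussian PSF whose peak sits at the array origin (wrap-around)."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    yy = np.minimum(yy, h - yy)
    xx = np.minimum(xx, w - xx)
    k = np.exp(-(yy ** 2 + xx ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

# Demo: blur a bright square with both kernels, then restore it.
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
psf_s = centered_gauss((32, 32), 1.0)   # optical system (static part)
psf_d = centered_gauss((32, 32), 1.5)   # disturbance/motion (dynamic part)
blurred = fft_conv(truth, fft_conv(psf_s, psf_d))
restored = restore(blurred, psf_s, psf_d)
```

Because both degradations are modeled as convolutions, their PSFs compose by convolution, so a single deconvolution loop handles the combined blur.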
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for stored images/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can fill the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F
2001-06-01
Several systems exist on the market for collecting spent ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to those on another cartridge case. The research in this paper focuses on the different methods of feature selection and pattern recognition that can be used to optimize the results of image matching. The images are acquired with side light for the breech face marks and with ring light for the firing pin impression. A standard way of digitizing these images is used: for both the side light and ring light images, the user has to position the cartridge case in the same position according to a protocol. Positioning is especially important for the side light images, since the image obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, it appears that the user positions the cartridge case with +/-10 degrees accuracy. We tested our algorithms using 49 cartridge cases from 19 different firearms, where the examiner had determined that they were shot with the same firearm. For testing, these images were mixed with a database of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we implemented a "brute force" method of registration: the images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position, because shadows and highlights are compared in intensity.
Since the angle of incidence of the light gives a different intensity profile, this method is not optimal, and preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force: matching the contents of the images is less sensitive to variation in the lighting. The problem with the brute force method, however, is that comparing the 49 cartridge cases against each other takes over one month of computing time on a 333 MHz Pentium II computer. For this reason a faster approach was implemented: correlation in log-polar coordinates. This gave similar results to the brute force calculation, but was computed in 24 h for the complete database of 4900 images. A fast pre-selection method based on signatures derived from the Kanade-Lucas-Tomasi (KLT) equation was also carried out, in which the positions of the points computed with this method are compared. In this way, 11 of the 49 images were placed in the top position in combination with the third scale of the à trous transform. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized for comparison in seconds if the locations of the points are stored in files. For further improvement, it is useful to have a refinement in which the user selects the areas on the cartridge case that are relevant for their marks. This is necessary if the cartridge case is damaged and carries other marks that are not from the firearm.
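The brute-force registration criterion described above, translating (and rotating) the image until the standard deviation of the difference is minimized, can be sketched for the translation-only case as follows. The data are synthetic, not cartridge-case images, and the function name is made up for illustration; a rotation search would simply wrap this in an outer loop.

```python
import numpy as np

def register_by_min_std(ref, moving, max_shift=5):
    """Brute-force translational registration: try every integer shift
    and keep the one minimizing the standard deviation of the
    difference image."""
    best_shift, best_score = None, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            diff = ref - np.roll(moving, (dy, dx), axis=(0, 1))
            score = diff.std()
            if score < best_score:
                best_shift, best_score = (dy, dx), score
    return best_shift

# Synthetic test pattern and a shifted copy of it.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moving = np.roll(ref, (-3, 2), axis=(0, 1))  # shifted version
print(register_by_min_std(ref, moving))      # recovers (3, -2)
```

The exhaustive search is what makes the method slow, which is why the abstract reports month-long run times and motivates the log-polar correlation speedup.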
Low-intensity calibration source for optical imaging systems
NASA Astrophysics Data System (ADS)
Holdsworth, David W.
2017-03-01
Laboratory optical imaging systems for fluorescence and bioluminescence imaging have become widely available for research applications. These systems use an ultra-sensitive CCD camera to produce quantitative measurements of very low light intensity, detecting signals from small-animal models labeled with optical fluorophores or luminescent emitters. Commercially available systems typically provide quantitative measurements of light output, in units of radiance (photons s^-1 cm^-2 sr^-1) or intensity (photons s^-1 cm^-2). One limitation of current systems is that there is often no provision for routine quality assurance and performance evaluation. We describe such a quality assurance system, based on an LED-illuminated thin-film transistor (TFT) liquid-crystal display module. The light intensity is controlled by pulse-width modulation of the backlight, producing radiance values ranging from 1.8 × 10^6 photons s^-1 cm^-2 sr^-1 to 4.2 × 10^13 photons s^-1 cm^-2 sr^-1. The lowest light intensity values are produced by very short backlight pulses (i.e. approximately 10 μs), repeated every 300 s. This very low duty cycle is appropriate for laboratory optical imaging systems, which typically operate with long-duration exposures (up to 5 minutes). The low-intensity light source provides a stable, traceable radiance standard that can be used for routine quality assurance of laboratory optical imaging systems.
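As a rough check of the duty-cycle arithmetic: under pulse-width modulation the mean radiance scales linearly with duty cycle. The sketch below uses the abstract's peak radiance and pulse timing as inputs, but the response model (perfect linearity, zero off-state emission) is an assumption.

```python
def mean_radiance(peak_radiance, pulse_width_s, period_s):
    """Mean radiance under pulse-width modulation:
    L_mean = L_peak * (pulse_width / period)."""
    return peak_radiance * (pulse_width_s / period_s)

peak = 4.2e13   # photons s^-1 cm^-2 sr^-1, assumed 100%-duty-cycle radiance
pulse = 10e-6   # 10 microsecond backlight pulse
period = 300.0  # repeated every 300 s, per the abstract
print(mean_radiance(peak, pulse, period))  # ~1.4e6 photons s^-1 cm^-2 sr^-1
```

The result lands near the 1.8 × 10^6 photons s^-1 cm^-2 sr^-1 lower bound reported in the abstract, consistent with very short pulses at a very low duty cycle.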
Multiple LEDs luminous system in capsule endoscope
NASA Astrophysics Data System (ADS)
Mang, Ou-Yang; Huang, Shih-Wei; Lee, Hsin-Hung; Chen, Yung-Lin; Huang, Ko-Chih; Kuo, Yi-Ting
2007-02-01
In developing the luminous system of a capsule endoscope, it is difficult to obtain uniform illumination [1] of the observed object for several reasons: the light pattern of an LED depends sensitively on the driving current, location, and projection angle; the optical path of the LED light source is not parallel to the optical axis of the nearby imaging lenses; the strong reflection from the inner surface of the dome may saturate the CMOS sensor; and the object plane of the observed intestine is not flat. These factors induce over-blooming and deep-dark contrast in a picture and strongly distort the original image. The purpose of this article is to construct a photometric model to analyze the LED projection light pattern and, furthermore, to design a novel multiple-LED luminous system for obtaining a uniform-brightness image. Several key parameters affecting illumination uniformity have been taken into consideration in the model and verified by experimental results. These parameters include LED light pattern accuracy, LED position relative to the imaging optical axis, LED number and arrangement, and the inner curvature of the dome. The novel structure improves the uniformity from 41% to 71% and reduces the light energy loss to under 2%. This progress will help medical professionals diagnose diseases and give treatment precisely based on vivid images.
Image-enhanced endoscopy for diagnosis of colorectal tumors in view of endoscopic treatment
Yoshida, Naohisa; Yagi, Nobuaki; Yanagisawa, Akio; Naito, Yuji
2012-01-01
Recently, image-enhanced endoscopy (IEE) has been used to diagnose gastrointestinal tumors. The method is a change from conventional white-light (WL) endoscopy that requires no dyeing solution, only the push of a button. IEE offers many advantages in the diagnosis of neoplastic tumors, evaluation of invasion depth of cancerous lesions, and detection of neoplastic lesions. In narrow band imaging (NBI) systems (Olympus Medical Co., Tokyo, Japan), optical filters that pass narrow-band light at wavelengths of 415 and 540 nm are used. Mucosal surface blood vessels are seen most clearly at 415 nm, the wavelength that corresponds to the hemoglobin absorption band, while vessels in the deep layer of the mucosa can be detected at 540 nm. NBI can also detect pit-like structures termed the surface pattern. The flexible spectral imaging color enhancement (FICE) system (Fujifilm Medical Co., Tokyo, Japan) is also an IEE but differs from NBI: FICE relies on spectral-estimation technology to reconstruct images at different wavelengths based on WL images, and can enhance vascular and surface patterns. The autofluorescence imaging (AFI) video endoscope system (Olympus Medical Co., Tokyo, Japan) is a new illumination method that uses the difference in autofluorescence intensity between normal areas and neoplastic lesions. AFI light comprises a blue light for emission and a green light for hemoglobin absorption. The aim of this review is to highlight the efficacy of IEE in the diagnosis of colorectal tumors for endoscopic treatment. PMID:23293724
NASA Astrophysics Data System (ADS)
Rizvi, Sadiq; Ley, Peer-Phillip; Knöchelmann, Marvin; Lachmayer, Roland
2018-02-01
Research reveals that visual information forms the major portion of the data received while driving. At night, owing to light that is sometimes scarce and sometimes inhomogeneous, human physiology and psychology experience a dramatic alteration. Although the likelihood of an accident is higher during the day due to heavier traffic, the most fatal accidents still occur at night. How can road safety be improved in limited lighting conditions using DMD-based high-resolution headlamps? DMD-based pixel light systems, utilizing HID and LED light sources, are able to address hundreds of thousands of pixels individually. Using camera information, this capability allows 'glare-free' light distributions that adapt precisely to the needs of all road users. What really enables these systems to stand out, however, is their on-road image projection capability. This projection functionality may be used in cooperation with other driver assistance systems as an assist feature for projecting navigation data, warning signs, car status information, etc. Since contrast sensitivity constitutes a decisive measure of the human visual function, a core question arises: what distributions of luminance in the projection space produce highly visible on-road image projections? This work seeks to address that question. Responses to sets of differently illuminated projections are collected from a group of participants and later interpreted using statistical data obtained with a luminance camera. Some aspects of the correlation between contrast ratio, symbol form, and attention capture are also discussed.
Hubble Space Telescope, Faint Object Camera
NASA Technical Reports Server (NTRS)
1981-01-01
This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data free of the distorting effects of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than that visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.
CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.
Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun
2014-11-01
A CMOS image sensor-based implantable glucose sensor based on an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. Feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.
Generation of high-dynamic range image from digital photo
NASA Astrophysics Data System (ADS)
Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han
2016-10-01
A number of modern applications, such as medical imaging, remote sensing satellite imaging, and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. This article proposes a camera calibration method based on the clear sky as the standard light source, taking sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article considers basic algorithms for recovering real luminance values from an ordinary digital image and the corresponding programmed implementation of those algorithms. Examples of HDRI reconstructed from ordinary images illustrate the article.
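The luminance-recovery step can be illustrated with a minimal sketch, assuming a simple power-law camera response; the scale factor, gamma, and the sky-derived calibration value below are illustrative stand-ins, not the paper's actual calibration.

```python
import numpy as np

def pixel_to_luminance(pixel, k, gamma=2.2, white=255.0):
    """Invert an assumed power-law camera response to recover scene
    luminance (cd/m^2) from an 8-bit pixel value. The scale factor k
    would come from calibration against a known source, e.g. the
    clear-sky luminance of the CIE model."""
    return k * (np.asarray(pixel, dtype=float) / white) ** gamma

# Suppose calibration against the CIE sky model gave k = 8000 cd/m^2.
print(pixel_to_luminance(255, k=8000.0))  # full-scale pixel -> 8000.0
print(pixel_to_luminance(128, k=8000.0))  # mid-gray maps far below half
```

Applying such a mapping per pixel converts an ordinary 8-bit image into a physically scaled luminance map, which is the essence of the HDRI reconstruction described above.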
Ultrafast image-based dynamic light scattering for nanoparticle sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Wu; Zhang, Jie; Liu, Lili
An ultrafast sizing method for nanoparticles is proposed, called UIDLS (Ultrafast Image-based Dynamic Light Scattering). This method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, as in the conventional DLS method. The difference in the experimental system is that the light scattered by the nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured to validate the proposed method. A measurement accuracy higher than 90% was found, with standard deviations less than 3%. A sample of nanosilver particles with a nominal size of 20 ± 2 nm and a sample of polymethyl methacrylate emulsion of unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time for acquisition of two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
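The core of the data processing, a single correlation coefficient between two frames separated by a short interval, can be sketched as below. The speckle frames are synthetic and the decorrelation model is only illustrative; the point is that faster-diffusing (smaller) particles decorrelate the frames more over a fixed interval, so the coefficient drops.

```python
import numpy as np

def frame_correlation(img1, img2):
    """Pearson correlation coefficient between two scattering images,
    the quantity UIDLS relates monotonically to particle diameter."""
    a = img1.ravel().astype(float)
    b = img2.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
# A frame taken after a short interval: partially decorrelated speckle.
later = 0.8 * speckle + 0.2 * rng.random((64, 64))

print(frame_correlation(speckle, speckle))  # 1.0 for identical frames
print(frame_correlation(speckle, later))    # < 1.0; drops with more motion
```

In the real instrument this coefficient, computed for a calibrated time interval, is mapped to a diameter through the monotonic relationship the authors establish.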
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene containing a very bright object, a monitoring camera will saturate in some regions, and details will be lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. LCoS, a reflective liquid crystal, can modulate the intensity of reflected light at every pixel as a digital device. Through the modulation function of the LCoS, the CCD is exposed region by region. Under the control of the DSP, the light intensity is decreased to a minimum in the glare regions, while in other regions it is negative-feedback modulated based on PID theory, so that more details of the object are imaged on the CCD and glare protection is achieved. In experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. The high-quality, high-dynamic-range image is captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
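The per-region negative feedback can be sketched as a small positional PID loop. The linear plant model (measured = scene × transmittance) and all gains here are invented for illustration; they are not the paper's controller or hardware parameters.

```python
def pid_glare_control(scene, setpoint, kp=0.1, ki=0.05, kd=0.02, steps=60):
    """Positional PID loop: the modulator transmittance for one image
    region is adjusted so the measured intensity settles at the setpoint.
    Toy plant: measured = scene * transmittance."""
    transmittance, integral, prev_err = 1.0, 0.0, 0.0
    for _ in range(steps):
        measured = scene * transmittance
        err = setpoint - measured
        integral += err
        derivative = err - prev_err
        prev_err = err
        transmittance = kp * err + ki * integral + kd * derivative
        transmittance = min(max(transmittance, 0.0), 1.0)  # physical limits
    return scene * transmittance

print(pid_glare_control(scene=5.0, setpoint=1.0))  # settles near 1.0
```

A bright region (scene = 5 in arbitrary units) is driven down to the target exposure level; in the real system one such loop would run per modulated region of the LCoS.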
Development of a QDots 800 based fluorescent solid phantom for validation of NIRF imaging platforms
NASA Astrophysics Data System (ADS)
Zhu, Banghe; Sevick-Muraca, Eva M.
2013-02-01
Over the past decade, we developed near-infrared fluorescence (NIRF) devices for non-invasive lymphatic imaging using microdosages of ICG in humans and for detection of lymph node metastasis in animal models mimicking metastatic human prostate cancer. To validate imaging, a NIST-traceable phantom is needed so that developed "first-in-humans" drugs may be used with different fluorescent imaging platforms. In this work, we developed a QDots 800 based fluorescent solid phantom for installation and operational qualification of clinical and preclinical NIRF imaging devices. Due to its optical clarity, polyurethane was chosen as the base material. Titanium dioxide was used as the scattering agent because of its miscibility in polyurethane. QDots 800 was chosen owing to its stability and NIR emission spectra. A first phantom was constructed for evaluation of the noise floor arising from excitation light leakage, a phenomenon that can be minimized during the engineering and design of fluorescent imaging systems. A second set of phantoms was constructed to enable quantification of the device sensitivity of our preclinical and clinical devices. The phantoms have been successfully applied for installation and operational qualification of our preclinical and clinical devices. Assessment of excitation light leakage provides a figure of merit for the "noise floor", and imaging sensitivity can be used to benchmark devices for specific imaging agents.
Entwistle, A
2004-06-01
A means for improving the contrast in the images produced from digital light micrographs is described that requires no intervention by the experimenter: zero-order, scaling, tonally independent, moderated histogram equalization. It is based upon histogram equalization, which often results in digital light micrographs that contain regions that appear to be saturated, negatively biased or very grainy. Here a non-decreasing monotonic function is introduced into the process, which moderates the changes in contrast that are generated. This method is highly effective for all three of the main types of contrast found in digital light micrography: bright objects viewed against a dark background, e.g. fluorescence and dark-ground or dark-field image data sets; bright and dark objects sets against a grey background, e.g. image data sets collected with phase or Nomarski differential interference contrast optics; and darker objects set against a light background, e.g. views of absorbing specimens. Moreover, it is demonstrated that there is a single fixed moderating function, whose actions are independent of the number of elements of image data, which works well with all types of digital light micrographs, including multimodal or multidimensional image data sets. The use of this fixed function is very robust as the appearance of the final image is not altered discernibly when it is applied repeatedly to an image data set. Consequently, moderated histogram equalization can be applied to digital light micrographs as a push-button solution, thereby eliminating biases that those undertaking the processing might have introduced during manual processing. Finally, moderated histogram equalization yields a mapping function and so, through the use of look-up tables, indexes or palettes, the information present in the original data file can be preserved while an image with the improved contrast is displayed on the monitor screen.
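A minimal sketch of the idea, assuming one simple choice of non-decreasing moderating function (a linear blend between the equalization mapping and the identity mapping). The paper's actual fixed moderating function is not reproduced here; the blend merely illustrates how moderation keeps full equalization from saturating or over-graining the result.

```python
import numpy as np

def moderated_hist_eq(img, weight=0.5, levels=256):
    """Histogram equalization moderated by blending the equalization
    mapping with the identity. Both mappings are non-decreasing, so the
    blended look-up table is too, preserving tonal order."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    eq_map = (levels - 1) * cdf                      # full equalization
    identity = np.arange(levels, dtype=float)        # no change
    lut = (1 - weight) * identity + weight * eq_map  # moderated mapping
    return lut[img].astype(np.uint8)                 # apply via look-up table

rng = np.random.default_rng(2)
dark = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)  # low-contrast image
out = moderated_hist_eq(dark)
print(dark.max(), out.max())  # contrast is stretched, but only partway
```

With weight = 0 the image is unchanged and with weight = 1 the result is ordinary histogram equalization; intermediate weights moderate the contrast change, in the spirit of the method described above.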
Assessment of a smartphone-based camera for fundus imaging in animals.
Balland, Olivier; Russo, Andrea; Isard, Pierre-François; Mathieson, Iona; Semeraro, Francesco; Dulaurent, Thomas
2017-01-01
To assess the use of an optical device (D-EYE; Si14 S.p.A.) attached to a modern smartphone (iPhone 5; Apple Inc.) for imaging the fundus in small animals, five dogs, five cats, and five rabbits with clear media were imaged using a prototype of the D-EYE. The optical device was composed of lenses, polarizing filters, a beam splitter, a diaphragm, and mirrors, attached to the smartphone via a metal shell. Images were obtained 20 min after pupil dilation with topical 0.5% tropicamide in a darkened room, to ensure maximum pupillary dilation. Focus was set to infinity when the autofocus was overwhelmed. Light intensity was adapted to each animal via the application (minimum light intensity for imaging the tapetal region, maximum light intensity for imaging the nontapetal region). Both still images and video sequences were recorded for each animal. Posterior segment structures were visible in all animals: optic nerve head, tapetum lucidum (when present), nontapetal region, retinal vessels, and choroidal vessels (when the retinal pigment epithelium and the choroidal pigmentation were discreet). Focal light artifacts were common when photographing the tapetum lucidum. Recording videos allowed the visualization of dynamic phenomena. The D-EYE device assessed here appears to be an easy means of obtaining images of the posterior segment structures. © 2016 American College of Veterinary Ophthalmologists.
Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method
NASA Astrophysics Data System (ADS)
Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan
2018-04-01
Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote sensing and security areas.
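The role of bucket-signal thresholding can be illustrated with a plain correlation ghost-imaging sketch (not the paper's compressive-sensing reconstruction): frames whose noisy bucket values fall below a threshold are simply discarded before correlating. All data and the object are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                              # simple transmissive object

n = 4000
patterns = rng.random((n, 16, 16))                 # random speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))         # bucket detector signal
bucket += rng.normal(0.0, 0.5, n)                  # additive detector noise

def ghost_image(patterns, bucket, threshold=None):
    """Correlation ghost imaging G = <B*I> - <B><I>; optionally discard
    frames whose bucket value falls below a threshold (a simplified
    stand-in for the paper's modified step)."""
    if threshold is not None:
        keep = bucket > threshold
        patterns, bucket = patterns[keep], bucket[keep]
    return (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

g = ghost_image(patterns, bucket)
print(g[8, 8] > g[0, 0])  # object pixel correlates more strongly: True
```

Pixels inside the object covary with the bucket signal and stand out in G; the threshold variant (e.g. `threshold=bucket.mean()`) shows where a noise-rejection step would slot into the pipeline.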
ERIC Educational Resources Information Center
Ayvaci, Hakan Sevki; Yildiz, Mehmet; Bakirci, Hasan
2015-01-01
This study employed a print laboratory material based on 5E model of constructivist learning approach to teach reflection of light and Image on a Plane Mirror. The effect of the instruction which conducted with the designed print laboratory material on academic achievements of prospective science and technology teachers and their attitudes towards…
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT) based low-dimensional nanotechnology has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate provided the sensor with isolation from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized using digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera available. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D imaging, and compressive video sensing.
The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of video streams, is extracted and expressed in a compressive approach. Computational algorithms are then applied to reconstruct images beyond 2-D static information, and super-resolution signal processing is used to enhance and improve the spatial resolution of the images. The whole camera system delivers deeply detailed content for infrared spectrum sensing.
Light field creating and imaging with different order intensity derivatives
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Huan
2014-10-01
Microscopic image restoration and reconstruction is a challenging topic in image processing and computer vision, with wide applications in life science, biology, and medicine. A microscopic light field creation and three-dimensional (3D) reconstruction method is proposed for transparent or partially transparent microscopic samples, based on the Taylor expansion theorem and polynomial fitting. First, the image stack of the specimen is divided into several groups, overlapping or non-overlapping, along the optical axis, and the first image of every group is taken as the reference image. Then, different-order intensity derivatives are calculated using all the images of each group and a polynomial fitting method, under the assumption that the structure of the specimen contained in the image stack is smooth and approximately linear over a small range along the optical axis. Subsequently, new images located at any position a distance Δz along the optical axis from the reference image can be generated by means of the Taylor expansion theorem and the calculated intensity derivatives. Finally, the microscopic specimen can be reconstructed in 3D using deconvolution technology and all the images, both observed and generated. The experimental results show the effectiveness and feasibility of our method.
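The per-pixel polynomial fitting step can be sketched as follows; evaluating the fitted polynomial at a new axial position is equivalent to the Taylor expansion built from the fitted intensity derivatives. The synthetic stack (intensity varying quadratically along z) and the function name are illustrative, not the paper's data or implementation.

```python
import numpy as np

def synthesize_slice(stack, z_positions, z_new, order=2):
    """Fit a low-order polynomial to each pixel's intensity along the
    optical axis, then evaluate it at z_new via Horner's scheme. This
    equals the Taylor expansion using the fitted intensity derivatives."""
    n, h, w = stack.shape
    flat = stack.reshape(n, h * w)                 # pixels as columns
    coeffs = np.polyfit(z_positions, flat, order)  # (order+1, n_pixels)
    new_flat = np.zeros(h * w)
    for c in coeffs:                               # Horner evaluation at z_new
        new_flat = new_flat * z_new + c
    return new_flat.reshape(h, w)

# Synthetic stack whose intensity varies quadratically along z.
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
base = np.fromfunction(lambda y, x: 1.0 + 0.1 * (x + y), (8, 8))
stack = np.stack([base * (1.0 + 0.2 * zi - 0.03 * zi ** 2) for zi in z])

pred = synthesize_slice(stack, z, z_new=2.5)
true = base * (1.0 + 0.2 * 2.5 - 0.03 * 2.5 ** 2)
print(np.allclose(pred, true))  # quadratic axial profile is recovered: True
```

Slices generated this way at intermediate z positions densify the stack before the deconvolution-based 3D reconstruction described in the abstract.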
Gierthmuehlen, Mortimer; Freiman, Thomas M; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T T
2013-01-01
The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the positions and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model represents the first aim of our "Virtual workbench" project, which is to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation, and technical limitations are described, and the steps involved in the transition to a simulation model are presented. The results showed that the positions and trajectories of the myelinated axons were traced and virtualized using our technique, and small nerves could be reliably modeled based on light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community.
Development of a PET/Cerenkov-light hybrid imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Hamamura, Fuka; Kato, Katsuhiko
2014-09-15
Purpose: Cerenkov-light imaging is a new molecular imaging technology that detects visible photons from high-speed electrons using a high-sensitivity optical camera. However, the merits of Cerenkov-light imaging remain unclear. If a PET/Cerenkov-light hybrid imaging system were developed, these merits could be clarified by directly comparing the two imaging modalities. Methods: The authors developed and tested a PET/Cerenkov-light hybrid imaging system that consists of a dual-head PET system, a reflection mirror located above the subject, and a high-sensitivity charge coupled device (CCD) camera, all installed inside a black box for imaging the Cerenkov light. The dual-head PET system employed 1.2 × 1.2 × 10 mm^3 GSO crystals arranged in a 33 × 33 matrix that was optically coupled to a position-sensitive photomultiplier tube to form a GSO block detector. The authors arranged two GSO block detectors 10 cm apart and positioned the subject between them. The Cerenkov light above the subject is reflected by the mirror toward the side of the PET system and is imaged by the high-sensitivity CCD camera. Results: The dual-head PET system had a spatial resolution of ~1.2 mm FWHM and a sensitivity of ~0.31% at the center of the FOV. The Cerenkov-light imaging system's spatial resolution was ~275 μm for a 22Na point source. Using the combined PET/Cerenkov-light hybrid imaging system, the authors successfully obtained fused images from simultaneously acquired images. The image distributions sometimes differ because of light transmission and absorption in the body of the subject in the Cerenkov-light images. In simultaneous imaging of a rat, the authors found that 18F-FDG accumulation was observed mainly in the Harderian gland on the PET image, while the distribution of Cerenkov light was observed in the eyes.
Conclusions: The authors conclude that their PET/Cerenkov-light hybrid imaging system is useful to evaluate the merits and limitations of Cerenkov-light imaging in molecular imaging research.
Applications of Light-Responsive Systems for Cancer Theranostics.
Chen, Hongzhong; Zhao, Yanli
2018-06-27
Achieving controlled and targeted delivery of chemotherapeutic drugs and other therapeutic agents to tumor sites is challenging. Among many stimulus strategies, light as a mode of action shows various advantages such as high spatiotemporal selectivity, minimal invasiveness, and easy operation. Thus, drug delivery systems (DDSs) have been designed with the incorporation of various functionalities responsive to light as an exogenous stimulus. Early development focused on guiding chemotherapeutic drugs to designated locations, followed by the utilization of UV irradiation for controlled drug release. Because of the disadvantages of UV light, such as phototoxicity and limited tissue penetration depth, research focus has moved to nanoparticle systems responsive to light in the visible region (400-700 nm), aiming to reduce phototoxicity. To enhance tissue penetration depth, near-infrared light-triggered DDSs have become increasingly important. In addition, light-based advanced systems for fluorescent and photoacoustic imaging, as well as photodynamic and photothermal therapy, have also been reported. Herein, we highlight some recent developments in applying light-responsive systems to cancer theranostics, including light-activated drug release, photodynamic and photothermal therapy, and bioimaging techniques such as fluorescent and photoacoustic imaging. Future prospects of light-mediated cancer treatment are discussed at the end of the review. This Spotlight on Applications article aims to provide up-to-date information about the rapidly developing field of light-based cancer theranostics.
Abdelfattah, Ahmed S.; Farhi, Samouil L.; Zhao, Yongxin; Brinks, Daan; Zou, Peng; Ruangkittisakul, Araya; Platisa, Jelena; Pieribone, Vincent A.; Ballanyi, Klaus; Cohen, Adam E.
2016-01-01
Optical imaging of voltage indicators based on green fluorescent proteins (FPs) or archaerhodopsin has emerged as a powerful approach for detecting the activity of many individual neurons with high spatial and temporal resolution. Relative to green FP-based voltage indicators, a bright red-shifted FP-based voltage indicator has the intrinsic advantages of lower phototoxicity, lower autofluorescent background, and compatibility with blue-light-excitable channelrhodopsins. Here, we report a bright red fluorescent voltage indicator (fluorescent indicator for voltage imaging red; FlicR1) with properties that are comparable to the best available green indicators. To develop FlicR1, we used directed protein evolution and rational engineering to screen libraries of thousands of variants. FlicR1 faithfully reports single action potentials (∼3% ΔF/F) and tracks electrically driven voltage oscillations at 100 Hz in dissociated Sprague Dawley rat hippocampal neurons in single trial recordings. Furthermore, FlicR1 can be easily imaged with wide-field fluorescence microscopy. We demonstrate that FlicR1 can be used in conjunction with a blue-shifted channelrhodopsin for all-optical electrophysiology, although blue light photoactivation of the FlicR1 chromophore presents a challenge for applications that require spatially overlapping yellow and blue excitation. SIGNIFICANCE STATEMENT Fluorescent-protein-based voltage indicators enable imaging of the electrical activity of many genetically targeted neurons with high spatial and temporal resolution. Here, we describe the engineering of a bright red fluorescent protein-based voltage indicator designated as FlicR1 (fluorescent indicator for voltage imaging red). FlicR1 has sufficient speed and sensitivity to report single action potentials and voltage fluctuations at frequencies up to 100 Hz in single-trial recordings with wide-field microscopy. 
Because it is excitable with yellow light, FlicR1 can be used in conjunction with blue-light-activated optogenetic actuators. However, spatially distinct patterns of optogenetic activation and voltage imaging are required to avoid fluorescence artifacts due to photoactivation of the FlicR1 chromophore. PMID:26911693
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera.
These images can be synthesized totally in focus, which eases finding stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information promises better tracking capabilities than in the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
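The Kalman-like, variance-weighted update of per-pixel depth estimates described above can be sketched as a minimal inverse-variance fusion (illustrative values and names, not the paper's):

```python
import numpy as np

def fuse_depth(d1, var1, d2, var2):
    """Inverse-variance (Kalman-like) fusion of two depth estimates.

    Each estimate carries its own variance; the fused estimate weights
    each observation by the inverse of its variance, and the fused
    variance is smaller than either input.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return d, var

# Fuse a noisy estimate (variance 0.4) with a sharper one (variance 0.1):
d, var = fuse_depth(2.0, 0.4, 1.5, 0.1)
```

Repeating this update as each new micro-image observation arrives yields the probabilistic depth map the abstract describes: an estimate plus a shrinking variance per pixel.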
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager to achieve an equivalent light field reconstruction quality.
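As a toy illustration of the principal-component measurement idea, the sketch below assumes a low-rank light-field model and reconstructs a patch from 8 PC projections instead of 64 direct samples (the paper's imager and training data are of course far richer; all names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 vectorized light-field patches (dim 64)
# drawn from a rank-8 model, standing in for natural-scene statistics.
basis = rng.normal(size=(64, 8))
train = basis @ rng.normal(size=(8, 200))

# Principal components of the training ensemble.
U, _, _ = np.linalg.svd(train, full_matrices=False)
pcs = U[:, :8]                      # keep the top 8 components

# "Sensing": each measurement is a projection onto one principal component.
x = basis @ rng.normal(size=8)      # unseen light-field patch
y = pcs.T @ x                       # 8 measurements instead of 64 samples
x_hat = pcs @ y                     # linear reconstruction from measurements

err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Because the test patch lies in the training subspace, 8 projections suffice for near-exact recovery; real scenes are only approximately low-rank, which is where the measured trade-off between measurement count and reconstruction quality comes from.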
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
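As a rough illustration of frequency-planar light-field filtering, the sketch below builds a fan-shaped passband on a 2D epipolar slice; the paper's filter is the full 4D hyperfan (the intersection of a dual-fan and a hypercone), which this simplified 2D analog does not reproduce:

```python
import numpy as np

def fan_mask(n, slope_min, slope_max):
    """Binary fan-shaped passband in the 2D (spatial, angular) frequency
    plane of a light-field epipolar slice. A Lambertian scene point at
    depth z maps to a line of depth-dependent slope in this plane, so a
    scene within a depth range occupies a fan between two slopes."""
    ws = np.fft.fftfreq(n)[:, None]   # spatial frequency axis
    wu = np.fft.fftfreq(n)[None, :]   # angular frequency axis
    slope = np.divide(wu, ws, out=np.zeros((n, n)), where=ws != 0)
    mask = (slope >= slope_min) & (slope <= slope_max)
    mask[0, :] = False                # reject pure-angular frequencies
    mask[0, 0] = True                 # always pass DC
    return mask

def fan_filter(epi, slope_min=-1.0, slope_max=1.0):
    """Denoise a square epipolar image by zeroing frequency content
    outside the fan corresponding to the scene's depth range."""
    F = np.fft.fft2(epi)
    passband = fan_mask(epi.shape[0], slope_min, slope_max)
    return np.real(np.fft.ifft2(F * passband))

out = fan_filter(np.ones((32, 32)))
```

The 4D version applies the same idea jointly over both spatial-angular frequency pairs, which is what lets a single linear filter stay all-in-focus.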
Gallium nitride light sources for optical coherence tomography
NASA Astrophysics Data System (ADS)
Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.
2017-02-01
The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by source bandwidth and central wavelength. With the growing demand for short wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of using gallium nitride devices has been identified due to the relative maturity of the laser technology, there have been limited reports on using such low-cost, robust devices in imaging systems. A GaN super-luminescent light emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since on high power, low speckle and relatively low bandwidth applications. In this paper we discuss a method of producing a GaN based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications, rather than power applications. For the first time in GaN devices, the performance of the light sources developed is assessed through the point spread function (PSF) (which describes an imaging system's response to a point source), calculated from the emission spectra. We show that a sub-7 μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short wavelength, broadband GaN devices for use in OCT applications.
NASA Astrophysics Data System (ADS)
Chen, Yung-Sheng; Wang, Jeng-Yau
2015-09-01
The light source plays a significant role in acquiring a high-quality image of an object for image processing and pattern recognition. For objects with specular surfaces, reflections and halos in the acquired image increase the difficulty of information processing. This situation can be improved with a well-designed diffuse light source. Consider reading a resistor via computer vision: because of the resistor's specular reflective surface, the image suffers severely non-uniform luminous intensity, yielding a higher recognition error rate without a well-controlled light source. This paper presents a measurement system comprising a digital microscope embedded in a replaceable diffuse cover, a ring-type LED embedded in a small pad carrying the resistor under evaluation, and Arduino microcontrollers connected to a PC. Several replaceable, cost-effective diffuse covers made from paper bowls, cups, and boxes lined inside with white paper are presented for reducing specular reflection and halo effects, and are compared with a commercial diffuse dome. The ring-type LED can be flexibly configured for full or partial lighting depending on the application. For each self-made diffuse cover, a set of resistors with 4 or 5 color bands is captured via the digital microscope for experiments. The signal-to-noise ratio of the segmented resistor image is used for performance evaluation. The detected principal axis of the resistor body is used for the partial LED configuration to further improve the lighting condition. Experimental results confirm that the proposed mechanism can not only evaluate the cost-effective diffuse light sources but can also be extended into an automatic resistor-reading system.
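The signal-to-noise evaluation could look roughly like the following sketch (one plausible reading of the metric; the authors' exact definition may differ, and the synthetic image stands in for a segmented microscope frame):

```python
import numpy as np

def segmentation_snr(img, mask):
    """SNR of a segmented resistor image: mean intensity inside the
    segmented body over the standard deviation of the background,
    reported in dB. (An assumed definition, not the paper's.)"""
    signal = img[mask].mean()
    noise = img[~mask].std()
    return 20.0 * np.log10(signal / noise)

# Synthetic example: bright resistor body on a dim, noisy background.
rng = np.random.default_rng(1)
img = rng.normal(20, 5, size=(64, 64)).clip(0)
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 8:56] = True          # the segmented resistor body
img[mask] += 180
snr_db = segmentation_snr(img, mask)
```

A better diffuse cover suppresses specular highlights in the background, lowering its variance and raising this figure of merit.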
4D Light Field Imaging System Using Programmable Aperture
NASA Technical Reports Server (NTRS)
Bae, Youngsam
2012-01-01
Complete depth information can be extracted from analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal (LC) panel, similar to ones commonly used in the display industry, to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture: a digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for 4D light field imaging, enabling an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified and arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded-aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any light-transmission loss (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can.
Robots need near-complete stereo images for autonomous navigation, manipulation, and depth approximation. The imaging system can provide visual feedback
Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.
2014-01-01
A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D-resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
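A much-simplified version of the HiLo idea, taking high frequencies from the uniform image and gating low frequencies by the structured image's local modulation, can be sketched as follows (the published algorithm's contrast weighting and calibration are more involved; `sigma` and `eta` are tuning constants of this sketch):

```python
import numpy as np

def lowpass(img, sigma):
    """Gaussian low-pass via FFT (no SciPy dependency)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Simplified HiLo fusion: in-focus high frequencies come straight
    from the uniform image; low frequencies are weighted by the local
    modulation of the structured image, which only in-focus planes
    retain. `eta` balances the two bands."""
    hi = uniform - lowpass(uniform, sigma)
    modulation = np.abs(uniform - structured)   # crude local modulation
    lo = lowpass(modulation * uniform, sigma)
    return hi + eta * lo

img = hilo(np.ones((32, 32)), 0.5 * np.ones((32, 32)))
```

Out-of-focus planes wash out the projected structure, so their modulation (and hence their low-frequency contribution) vanishes; that is what gives the fused image its optical sectioning.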
NASA Astrophysics Data System (ADS)
Wang, Aiwu; Wang, Chundong; Fu, Li; Wong-Ng, Winnie; Lan, Yucheng
2017-10-01
The graphitic carbon nitride (g-C3N4) which is a two-dimensional conjugated polymer has drawn broad interdisciplinary attention as a low-cost, metal-free, and visible-light-responsive photocatalyst in the area of environmental remediation. The g-C3N4-based materials have excellent electronic band structures, electron-rich properties, basic surface functionalities, high physicochemical stabilities and are "earth-abundant." This review summarizes the latest progress related to the design and construction of g-C3N4-based materials and their applications including catalysis, sensing, imaging, and white-light-emitting diodes. An outlook on possible further developments in g-C3N4-based research for emerging properties and applications is also included.
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was used for the first time independently of the point spread function of the image diagnostic system. In this way, the technical obstacles were overcome that arise in traditional coded-pinhole image processing from the uncertainty of the point spread function of the image diagnostic system. Based on the theoretical study, the simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera obtains only the gray-scale image of an object, while multicolor imaging captures color information that distinguishes sample structures with the same shapes but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, etc. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency-division multiplexing (FDM), modulating the excitation lights and demodulating the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light, then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each camera pixel is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse-transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained, and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color dynamic fluorescence video consistent with the original scene. This experiment shows that dynamic multicolor fluorescent biological samples can be observed by this method.
Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate matches the camera's frame rate. The optical system is simpler and needs no extra color-separation element. In addition, the method filters out ambient light and other light signals that are not modulated.
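The per-pixel demodulation step can be sketched for a single pixel's time trace (frame rate, carrier frequencies, and amplitudes below are illustrative assumptions, not the paper's values):

```python
import numpy as np

fs = 200.0                        # assumed camera frame rate (Hz)
n = 400                           # frames recorded
t = np.arange(n) / fs
f1, f2 = 10.0, 25.0               # modulation frequencies of the two lasers

# One pixel seeing both fluorophores, with amplitudes 3.0 and 1.5.
pixel = 3.0 * (1 + np.cos(2 * np.pi * f1 * t)) \
      + 1.5 * (1 + np.cos(2 * np.pi * f2 * t))

# Demodulation: DFT of the pixel's time trace, read out each carrier bin.
spectrum = np.fft.rfft(pixel) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
a1 = 2 * np.abs(spectrum[np.argmin(np.abs(freqs - f1))])
a2 = 2 * np.abs(spectrum[np.argmin(np.abs(freqs - f2))])
```

Repeating this over every pixel separates the two fluorophore images from a single grayscale recording, which is why no dichroic color-separation optics are needed.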
A novel non-imaging optics based Raman spectroscopy device for transdermal blood analyte measurement
Kong, Chae-Ryon; Barman, Ishan; Dingari, Narahara Chari; Kang, Jeon Woong; Galindo, Luis; Dasari, Ramachandra R.; Feld, Michael S.
2011-01-01
Due to its high chemical specificity, Raman spectroscopy has been considered a promising technique for non-invasive disease diagnosis. However, during Raman excitation, fewer than one in a million photons undergo spontaneous Raman scattering, and this weakness often requires highly efficient collection of the Raman-scattered light for the analysis of biological tissues. We present a novel non-imaging-optics-based portable Raman spectroscopy instrument designed for enhanced light collection. While the instrument was demonstrated on transdermal blood glucose measurement, it can also be used for detection of other clinically relevant blood analytes such as creatinine, urea, and cholesterol, as well as other tissue diagnosis applications. For enhanced light collection, a non-imaging optical element called a compound hyperbolic concentrator (CHC) converts the wide angular range of scattered photons (numerical aperture (NA) of 1.0) from the tissue into the limited range of angles accommodated by the acceptance angle of the collection system (e.g., an optical fiber with NA of 0.22). A CHC enables collimation of scattered-light directions to within an extremely narrow range of angles while maintaining practical physical dimensions. Such a design allows for the development of a very efficient and compact spectroscopy system for analyzing highly scattering biological tissues. Using the CHC-based portable Raman instrument in a clinical research setting, we demonstrate successful transdermal blood glucose predictions in human subjects undergoing oral glucose tolerance tests. PMID:22125761
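The NA conversion performed by the CHC is governed by conservation of étendue: for an ideal 2D concentrator, the product of aperture diameter and numerical aperture is invariant, so collimating from NA 1.0 down to NA 0.22 dilates the beam accordingly. A one-line sketch of the implied minimum exit aperture (idealized lossless concentrator assumed; the 1 mm spot size is illustrative):

```python
def min_exit_diameter(d_in, na_in, na_out):
    """Etendue conservation for an ideal 2D concentrator:
    d_in * na_in = d_out * na_out, solved for the exit diameter."""
    return d_in * na_in / na_out

# Collect at NA 1.0 over a 1 mm spot, deliver into an NA 0.22 fiber:
d_out = min_exit_diameter(1.0, 1.0, 0.22)   # about 4.5 mm minimum
```

This is why the abstract stresses "practical physical dimensions": the exit aperture must grow by the NA ratio, and the hyperbolic profile achieves that growth in a compact length.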
Dual-polarity plasmonic metalens for visible light
NASA Astrophysics Data System (ADS)
Chen, Xianzhong; Huang, Lingling; Mühlenbernd, Holger; Li, Guixin; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan; Qiu, Cheng-Wei; Zhang, Shuang; Zentgraf, Thomas
2012-11-01
Surface topography and refractive index profile dictate the deterministic functionality of a lens. The polarity of most lenses reported so far, that is, either positive (convex) or negative (concave), depends on the curvatures of the interfaces. Here we experimentally demonstrate a counter-intuitive dual-polarity flat lens based on helicity-dependent phase discontinuities for circularly polarized light. Specifically, by controlling the helicity of the input light, the positive and negative polarity are interchangeable in one identical flat lens. Helicity-controllable real and virtual focal planes, as well as magnified and demagnified imaging, are observed on the same plasmonic lens at visible and near-infrared wavelengths. The plasmonic metalens with dual polarity may empower advanced research and applications in helicity-dependent focusing and imaging devices, angular-momentum-based quantum information processing and integrated nano-optoelectronics.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
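A toy version of such a multi-stage sensor chain (shot noise, intensifier gain, optics MTF, read noise, saturation) might look like the following; the stage order and all parameter values are illustrative assumptions, not DIRSIG's:

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_mtf(img, sigma):
    """Stage MTF modeled as a Gaussian low-pass in the frequency domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mtf = np.exp(-2 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))

def lll_sensor(photon_map, gain=500.0, read_noise=2.0, full_well=1e5):
    """Toy multi-stage low-light-level sensor chain: photon (shot) noise,
    intensifier gain, optics MTF, read noise, then saturation clipping
    as a crude stand-in for 'blooming'."""
    photons = rng.poisson(photon_map)              # shot noise
    amplified = gain * photons                     # intensifier gain
    blurred = apply_mtf(amplified.astype(float), sigma=1.5)
    noisy = blurred + rng.normal(0, read_noise, photon_map.shape)
    return np.clip(noisy, 0, full_well)            # saturation

out = lll_sensor(np.full((32, 32), 5.0))           # dim, uniform scene
```

Chaining per-stage noise and MTF in this way, rather than applying one end-to-end blur, is what lets such a model reproduce stage-specific artifacts.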
Quantitative Assessment of Fat Levels in Caenorhabditis elegans Using Dark Field Microscopy
Fouad, Anthony D.; Pu, Shelley H.; Teng, Shelly; Mark, Julian R.; Fu, Moyu; Zhang, Kevin; Huang, Jonathan; Raizen, David M.; Fang-Yen, Christopher
2017-01-01
The roundworm Caenorhabditis elegans is widely used as a model for studying conserved pathways for fat storage, aging, and metabolism. The most broadly used methods for imaging fat in C. elegans require fixing and staining the animal. Here, we show that dark field images acquired through an ordinary light microscope can be used to estimate fat levels in worms. We define a metric based on the amount of light scattered per area, and show that this light scattering metric is strongly correlated with worm fat levels as measured by Oil Red O (ORO) staining across a wide variety of genetic backgrounds and feeding conditions. Dark field imaging requires no exogenous agents or chemical fixation, making it compatible with live worm imaging. Using our method, we track fat storage with high temporal resolution in developing larvae, and show that fat storage in the intestine increases in at least one burst during development. PMID:28404661
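The light-scattered-per-area metric can be sketched directly (a synthetic example; the paper's segmentation and calibration details are omitted, and the intensity values here are arbitrary):

```python
import numpy as np

def scattering_metric(darkfield, worm_mask):
    """Light scattered per unit area: total dark-field intensity inside
    the worm divided by the worm's area in pixels (the metric in spirit;
    thresholds and calibration are assumptions of this sketch)."""
    return darkfield[worm_mask].sum() / worm_mask.sum()

# Synthetic example: a worm region scattering 3.0 units per pixel.
img = np.zeros((50, 50))
mask = np.zeros((50, 50), dtype=bool)
mask[20:30, 5:45] = True          # segmented worm body
img[mask] = 3.0
m = scattering_metric(img, mask)
```

Because the measurement needs no fixation or staining, the same animal can be re-imaged over time, which is what enables the high-temporal-resolution developmental tracking the abstract reports.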
Decoding mobile-phone image sensor rolling shutter effect for visible light communications
NASA Astrophysics Data System (ADS)
Liu, Yang
2016-01-01
Optical wireless communication (OWC) using visible light, also known as visible light communication (VLC), has attracted significant attention recently. As traditional OWC and VLC receivers (Rxs) are based on PIN photodiodes or avalanche photodiodes, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive, since nowadays nearly every person has a smartphone with an embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern, and we describe their processing algorithms one by one. The experimental results show that both proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.
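A bare-bones rolling-shutter decoder, row averaging plus thresholding, can be sketched as follows (the paper's CR enhancement schemes themselves are not reproduced here; stripe width and intensities are synthetic):

```python
import numpy as np

def decode_rolling_shutter(frame, threshold=None):
    """Decode an on-off-keyed rolling-shutter pattern: each sensor row
    integrates at a slightly different time, so an on/off-modulated LED
    leaves bright/dark stripes across the frame. Average each row and
    threshold to recover one bit per row (a simplified sketch; real
    decoders also equalize the intensity fluctuations that the paper's
    CR enhancement schemes address)."""
    row_means = frame.mean(axis=1)
    if threshold is None:
        threshold = 0.5 * (row_means.max() + row_means.min())
    return (row_means > threshold).astype(int)

# Synthetic frame: 8-row stripes encoding the symbol sequence 1,0,1,1.
rows = np.repeat([1, 0, 1, 1], 8).astype(float)
frame = np.tile(rows[:, None], (1, 40)) * 200 + 20
bits = decode_rolling_shutter(frame)
```

Grouping the per-row bits back into 8-row stripes then yields the transmitted symbols.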
Single-shot three-dimensional reconstruction based on structured light line pattern
NASA Astrophysics Data System (ADS)
Wang, ZhenZhou; Yang, YongMing
2018-07-01
Single-shot reconstruction of an object is of great importance in applications where the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured-light 3D imaging technique that calculates the phase map from a distorted line pattern. The technique uses image processing to segment and cluster the projected structured-light line pattern from a single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix, which is then transformed into a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method reconstructs the three-dimensional shape of the object robustly from a single image.
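The step from a sparse, clustered line pattern to a full-resolution phase profile can be sketched per image row (linear interpolation stands in for the paper's spline to keep the example dependency-free; the sample columns and phases are invented):

```python
import numpy as np

def full_resolution_phase(line_cols, line_phases, width):
    """Expand the sparse phase samples recovered at the clustered line
    positions of one image row into a full-resolution phase profile."""
    cols = np.arange(width)
    return np.interp(cols, line_cols, line_phases)

# 5 detected lines across a 20-pixel row, with phase growing with column.
phase_row = full_resolution_phase([0, 5, 10, 15, 19],
                                  [0.0, 1.0, 2.0, 3.0, 3.8], 20)
```

Stacking these interpolated rows gives the full-resolution phase map from which, together with the camera coordinates, the 3D shape is triangulated.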
Optical time-of-flight and absorbance imaging of biologic media.
Benaron, D A; Stevenson, D K
1993-03-05
Imaging the interior of living bodies with light may assist in the diagnosis and treatment of a number of clinical problems, which include the early detection of tumors and hypoxic cerebral injury. An existing picosecond time-of-flight and absorbance (TOFA) optical system has been used to image a model biologic system and a rat. Model measurements confirmed TOFA principles in systems with a high degree of photon scattering; rat images, which were constructed from the variable time delays experienced by a fixed fraction of early-arriving transmitted photons, revealed identifiable internal structure. A combination of light-based quantitative measurement and TOFA localization may have applications in continuous, noninvasive monitoring for structural imaging and spatial chemometric analysis in humans.
Optical Time-of-Flight and Absorbance Imaging of Biologic Media
NASA Astrophysics Data System (ADS)
Benaron, David A.; Stevenson, David K.
1993-03-01
Imaging the interior of living bodies with light may assist in the diagnosis and treatment of a number of clinical problems, which include the early detection of tumors and hypoxic cerebral injury. An existing picosecond time-of-flight and absorbance (TOFA) optical system has been used to image a model biologic system and a rat. Model measurements confirmed TOFA principles in systems with a high degree of photon scattering; rat images, which were constructed from the variable time delays experienced by a fixed fraction of early-arriving transmitted photons, revealed identifiable internal structure. A combination of light-based quantitative measurement and TOFA localization may have applications in continuous, noninvasive monitoring for structural imaging and spatial chemometric analysis in humans.
NASA Technical Reports Server (NTRS)
Give'on, Amir; Kern, Brian D.; Shaklan, Stuart
2011-01-01
In this paper we describe the complex electric field reconstruction from image plane intensity measurements for high contrast coronagraphic imaging. A deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera, where the intensity of the light is measured. Along with the Electric Field Conjugation correction algorithm, this estimation method has been used in various high contrast imaging testbeds to achieve the best contrasts to date in both narrowband and broadband light. We present the basic methodology of estimation as an easy-to-follow list of steps, present results from HCIT, and raise several open questions we are confronted with in using this method.
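The pair-wise probing idea the abstract describes can be sketched at a single pixel: adding and subtracting a known DM probe field and differencing the two intensities isolates a linear function of the unknown field. The probe fields and noise-free measurement model here are idealized, not the testbed's actual calibration.

```python
def estimate_field(probes, delta_i):
    # Pair-wise probing: for a DM probe field p added and then subtracted,
    # |E + p|^2 - |E - p|^2 = 4 * Re(E * conj(p)), so two distinct probes
    # give a 2x2 linear system for the real and imaginary parts of E.
    (a, b), (c, d) = ((4 * p.real, 4 * p.imag) for p in probes)
    det = a * d - b * c
    re = (d * delta_i[0] - b * delta_i[1]) / det
    im = (a * delta_i[1] - c * delta_i[0]) / det
    return complex(re, im)

# recover a known single-pixel field from simulated intensity differences
e_true = 0.3 + 0.4j
probes = [1 + 0j, 0 + 1j]
delta_i = [abs(e_true + p) ** 2 - abs(e_true - p) ** 2 for p in probes]
e_est = estimate_field(probes, delta_i)
```

A real estimator repeats this per focal-plane pixel and feeds the field estimate to the Electric Field Conjugation correction.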
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue of vision-based automatic harvesting manipulators is the difficulty of correctly identifying fruit in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions.
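The wavelet-domain fusion step can be sketched on a single 2x2 block. This is a hedged toy: a one-level Haar transform stands in for the paper's multiscale Daubechies decomposition, and the "average approximations, keep the larger-magnitude detail" rule is a common fusion heuristic, not necessarily the paper's exact rule.

```python
def haar2x2(a, b, c, d):
    # one-level 2-D Haar analysis of a 2x2 block: approximation (LL)
    # and three detail sub-bands (LH, HL, HH)
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def ihaar2x2(ll, lh, hl, hh):
    # exact inverse of haar2x2
    return (ll + lh + hl + hh, ll + lh - hl - hh,
            ll - lh + hl - hh, ll - lh - hl + hh)

def fuse_blocks(color_block, nir_block):
    ca = haar2x2(*color_block)
    na = haar2x2(*nir_block)
    # fusion rule: average the approximations, keep the larger-magnitude
    # detail coefficient from either source (preserves NIR edges)
    ll = (ca[0] + na[0]) / 2
    details = [c if abs(c) >= abs(n) else n for c, n in zip(ca[1:], na[1:])]
    return ihaar2x2(ll, *details)

fused = fuse_blocks((1.0, 2.0, 3.0, 4.0), (1.0, 2.0, 3.0, 4.0))
```

Fusing a flat color block with an edgy NIR block shows the detail coefficients of the NIR channel surviving into the fused result.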
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-01
The main issue of vision-based automatic harvesting manipulators is the difficulty of correctly identifying fruit in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions. PMID:28098797
Broadband Phase Retrieval for Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it is necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. 
Image-based phase retrieval differs from other wavefront-sensing methods, such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods. The main disadvantages of these non-image-based methods are the complexity of the test equipment and the need for a wavefront reference.
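The forward model that phase retrieval fits is standard Fourier optics: the image of a point source is the squared magnitude of the Fourier transform of the complex pupil function. A minimal discrete sketch (brute-force DFT on a toy 2x2 pupil; a real implementation would use an FFT and a much larger grid):

```python
import cmath

def psf_from_pupil(amp, phase):
    # Forward model fitted by image-based phase retrieval: the image of a
    # point source is the squared magnitude of the discrete Fourier
    # transform of the complex pupil function amp * exp(i * phase).
    n = len(amp)
    pupil = [[amp[y][x] * cmath.exp(1j * phase[y][x]) for x in range(n)]
             for y in range(n)]
    img = [[0.0] * n for _ in range(n)]
    for v in range(n):
        for u in range(n):
            s = sum(pupil[y][x] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
                    for y in range(n) for x in range(n))
            img[v][u] = abs(s) ** 2
    return img

# an aberration-free pupil puts all energy in the zero-frequency pixel
flat = psf_from_pupil([[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]])
```

Phase-diverse variants evaluate this model with a known extra aberration (e.g. defocus) added to `phase` and fit the unknown part to the measured images.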
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-01-01
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps. In addition, rear lamps use LEDs and halogen lamps. This paper refers to recent research in IHC. Some problems exist in the detection of headlights, such as erroneous detection of street lights, sign lights, and reflections from the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen). PMID:27399720
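Of the three mappers the abstract names, the Spectral Angle Mapper is the simplest to sketch: it scores a pixel spectrum against a reference spectrum by the angle between them, which is insensitive to overall brightness. The two-band spectra below are toy data.

```python
import math

def spectral_angle(pixel, reference):
    # Spectral Angle Mapper (SAM): angle between a pixel spectrum and a
    # reference lamp spectrum; a small angle means the pixel matches the
    # lamp type (LED, HID or halogen) regardless of overall brightness.
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = (math.sqrt(sum(p * p for p in pixel))
            * math.sqrt(sum(r * r for r in reference)))
    # clamp against floating-point overshoot before acos
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

A scaled copy of the reference spectrum scores near 0, while an orthogonal spectrum scores near pi/2; classification picks the reference with the smallest angle.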
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosing camera performance, in arriving at a preflight imaging strategy, and in revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Lavagnino, Zeno; Sancataldo, Giuseppe; d’Amora, Marta; Follert, Philipp; De Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca
2016-01-01
In the last decade, light sheet fluorescence microscopy techniques, such as selective plane illumination microscopy (SPIM), have become well established methods for developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues, since the common sample mounting procedure, based on gel embedding, could interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in the sample mounting, doing away with the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering effects and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The system proposed represents an optimal platform for tissue imaging, and it paves the way for the applicability of light sheet microscopy to a wider range of samples, including those that have to be mounted on non-transparent surfaces. PMID:27033347
Electronic method for autofluorography of macromolecules on two-D matrices
Davidson, Jackson B.; Case, Arthur L.
1983-01-01
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.
Shirai, Tomohiro; Barnes, Thomas H
2002-02-01
A liquid-crystal adaptive optics system using all-optical feedback interferometry is applied to partially coherent imaging through a phase disturbance. A theoretical analysis based on the propagation of the cross-spectral density shows that the blurred image due to the phase disturbance can be restored, in principle, irrespective of the state of coherence of the light illuminating the object. Experimental verification of the theory has been performed for two cases when the object to be imaged is illuminated by spatially coherent light originating from a He-Ne laser and by spatially incoherent white light from a halogen lamp. We observed in both cases that images blurred by the phase disturbance were successfully restored, in agreement with the theory, immediately after the adaptive optics system was activated. The origin of the deviation of the experimental results from the theory, together with the effect of the feedback misalignment inherent in our optical arrangement, is also discussed.
Daylight characterization through vision-based sensing of lighting conditions in buildings
NASA Astrophysics Data System (ADS)
di Dio, Joseph, III
A new method for describing daylight under unknown weather conditions, as captured in images of a room, is proposed. This method considers pixel brightness information to be a linear combination of diffuse and directional light components, as received by a web cam from the walls and ceiling of an occupied office. The nature of these components in each image is determined by building orientation, room geometry, neighboring structures and the position of the sun. Considering daylight in this manner also allows for an estimation of the sky conditions at a given instant to be made, and presents a means to uncover seasonal trends in the behavior of light simply by monitoring the brightness variations of points on the walls and ceiling. Significantly, this daylight characterization method also allows for an estimation of the illumination level on a target surface to be made from image data. Currently, illumination at a target surface is estimated through the use of a ceiling-mounted photosensor, as part of a lighting control system, in the hopes of achieving a suitable balance between daylight and electrical lighting in a space. Improving the ability of a sensor to estimate the illumination is of great importance to those who wish to minimize unnecessary energy consumption, as a significant percentage of all U.S. electricity is currently consumed by light fixtures. A photosensor detects light that falls on its location, which does not necessarily correspond in a fixed manner to the light level on the target areas that the photosensor is meant to monitor. Additionally, a photosensor cannot discern variations in light distribution across a room, which often occur with daylight. By considering pixel brightness information to be a linear combination of diffuse and directional light components at selected pixels in an image, information about the light reaching these pixels can be extracted from observed patterns of brightness, under different light conditions. 
In this manner, each pixel provides information about the light field at its corresponding point in the room, and thus each pixel can be considered to behave as if it were a remote photosensor. By using multiple pixel readings in lieu of a single photosensor reading of a given light condition, an improved assessment of the illumination level on a target surface can be achieved. It is shown that on average, the camera-based method was approximately 25% more accurate in estimating illuminance in the test room than was a simulated ceiling-mounted photosensor. It is hoped that the methodology detailed here will aid in the eventual development of a camera-based daylight characterization sensor for use in lighting control systems, so that the potential for enhanced energy savings can be realized.
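The diffuse-plus-directional decomposition described above amounts to an ordinary least-squares fit of observed pixel brightness against two known basis components. A minimal sketch, with the caveat that the basis vectors and noise-free observation are toy data; in practice the components would come from calibration captures under overcast and clear-sky conditions.

```python
def fit_two_components(diffuse, direct, observed):
    # Least-squares fit of observed brightness = a*diffuse + b*direct,
    # solved via the normal equations for the two unknown coefficients.
    sdd = sum(d * d for d in diffuse)
    sss = sum(s * s for s in direct)
    sds = sum(d * s for d, s in zip(diffuse, direct))
    sdo = sum(d * o for d, o in zip(diffuse, observed))
    sso = sum(s * o for s, o in zip(direct, observed))
    det = sdd * sss - sds * sds
    a = (sss * sdo - sds * sso) / det
    b = (sdd * sso - sds * sdo) / det
    return a, b

diffuse = [1.0, 1.0, 1.0, 1.0]            # flat diffuse component
direct = [0.0, 1.0, 2.0, 3.0]             # directional gradient across pixels
observed = [2.0 + 0.5 * s for s in direct]  # a=2, b=0.5 by construction
a, b = fit_two_components(diffuse, direct, observed)
```

Once `a` and `b` are known, the illuminance at a target surface can be estimated from the same two components evaluated at that surface.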
Fiber-optic fluorescence imaging
Flusberg, Benjamin A; Cocker, Eric D; Piyawattanametha, Wibool; Jung, Juergen C; Cheung, Eunice L M; Schnitzer, Mark J
2010-01-01
Optical fibers guide light between separate locations and enable new types of fluorescence imaging. Fiber-optic fluorescence imaging systems include portable handheld microscopes, flexible endoscopes well suited for imaging within hollow tissue cavities and microendoscopes that allow minimally invasive high-resolution imaging deep within tissue. A challenge in the creation of such devices is the design and integration of miniaturized optical and mechanical components. Until recently, fiber-based fluorescence imaging was mainly limited to epifluorescence and scanning confocal modalities. Two new classes of photonic crystal fiber facilitate ultrashort pulse delivery for fiber-optic two-photon fluorescence imaging. An upcoming generation of fluorescence imaging devices will be based on microfabricated device components. PMID:16299479
Gioux, Sylvain; Lomnes, Stephen J.; Choi, Hak Soo; Frangioni, John V.
2010-01-01
Fluorescence lifetime imaging (FLi) could potentially improve exogenous near-infrared (NIR) fluorescence imaging, because it offers the capability of discriminating a signal of interest from background, provides real-time monitoring of a chemical environment, and permits the use of several different fluorescent dyes having the same emission wavelength. We present a high-power, LED-based, NIR light source for the clinical translation of wide-field (larger than 5 cm in diameter) FLi at frequencies up to 35 MHz. Lifetime imaging of indocyanine green (ICG), IRDye 800-CW, and 3,3′-diethylthiatricarbocyanine iodide (DTTCI) was performed over a large field of view (10 cm by 7.5 cm) using the LED light source. For comparison, a laser diode light source was employed as a gold standard. Experiments were performed both on the bench by diluting the fluorescent dyes in various chemical environments in Eppendorf tubes, and in vivo by injecting the fluorescent dyes mixed in Matrigel subcutaneously into CD-1 mice. Last, measured fluorescence lifetimes obtained using the LED and the laser diode sources were compared with those obtained using a state-of-the-art time-domain imaging system and with those previously described in the literature. On average, lifetime values obtained using the LED and the laser diode light sources were consistent, exhibiting a mean difference of 3% from the expected values and a coefficient of variation of 12%. Taken together, our study offers an alternative to laser diodes for clinical translation of FLi and explores the use of relatively low frequency modulation for in vivo imaging. PMID:20459250
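In the frequency-domain FLi scheme the abstract describes, a single-exponential fluorophore delays the modulated excitation by a phase shift from which the lifetime follows directly. A minimal sketch of that relation, using the source's 35 MHz upper modulation frequency as the example value:

```python
import math

def lifetime_from_phase(phase_rad, mod_freq_hz):
    # Frequency-domain lifetime estimation: a single-exponential fluorophore
    # delays the emission by a phase phi with tan(phi) = 2*pi*f*tau, so the
    # lifetime follows directly from the measured phase shift.
    return math.tan(phase_rad) / (2 * math.pi * mod_freq_hz)

# at 35 MHz, a 1 ns lifetime produces a phase shift of roughly 12 degrees
f = 35e6
tau_true = 1e-9
phi = math.atan(2 * math.pi * f * tau_true)
tau_est = lifetime_from_phase(phi, f)
```

The smallness of the phase shift at these relatively low modulation frequencies is why phase accuracy dominates the precision of wide-field lifetime maps.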
Zhou, Yong; Hu, Ye; Zeng, Nan; Ji, Yanhong; Dai, Xiangsong; Li, Peng; Ma, Hui; He, Yonghong
2011-01-01
We present a noninvasive method of detecting substance concentration in the aqueous humor based on dual-wavelength iris imaging technology. Two light sources, one centered within (392 nm) and the other centered outside (850 nm) of an absorption band of Pirenoxine Sodium, a common type of drug in eye disease treatment, were used for dual-wavelength iris imaging measurement. After passing through the aqueous humor twice, the back-scattered light was detected by a charge-coupled device (CCD). The detected images were then used to calculate the concentration of Pirenoxine Sodium. In an eye-model experiment, a resolution of 0.6525 ppm was achieved. Meanwhile, at least 4 ppm could be distinguished in the in vivo experiment. These results demonstrate that our method can measure Pirenoxine Sodium concentration in the aqueous humor and has the potential to monitor the concentration of other substances in the aqueous humor. PMID:21339869
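The dual-wavelength principle can be sketched with a Beer-Lambert model: 392 nm lies inside the drug's absorption band and 850 nm outside it, so the normalized ratio of the two back-scattered intensities gives the absorbance and hence the concentration. The molar absorptivity and double-pass path length below are made-up illustrative parameters, not values from the paper.

```python
import math

def concentration_from_ratio(i_392, i_850, i0_392, i0_850, eps, path_len):
    # Beer-Lambert sketch of the dual-wavelength idea: the 850 nm channel
    # normalizes out scattering and illumination, leaving the absorbance
    # of the 392 nm channel, which is proportional to concentration.
    absorbance = -math.log10((i_392 / i0_392) / (i_850 / i0_850))
    return absorbance / (eps * path_len)

# illustrative: eps = 0.1 per (ppm * unit length), double-pass length 2
c_est = concentration_from_ratio(10.0 ** -0.8, 1.0, 1.0, 1.0, 0.1, 2.0)
```

With these toy numbers an absorbance of 0.8 maps back to a 4 ppm concentration, matching the in vivo detection threshold quoted in the abstract.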
Preliminary study of diagnostic spectroscopic imaging for nasopharyngeal carcinoma
NASA Astrophysics Data System (ADS)
Li, Buhong; Xie, Shusen; Zhang, Xiaodong; Li, Depin
2003-12-01
An optical biopsy system for nasopharyngeal carcinoma based on the technique of laser-induced exogenous fluorescence has been successfully developed. An Ar+ laser was selected as the excitation light source based on the measurement of the Emission-Excitation Matrix of Hematoporphyrin Monomethyl Ether (HMME). Tissue-simulating optical phantoms diluted with different concentrations of HMME were used to simulate nasopharyngeal carcinoma lesions in the performance test of the drug-fluorescence optical biopsy system, especially for the comparison of fluorescence image contrast between the excitation wavelengths of 488 nm and 514.5 nm. Experimental results show that the fluorescence image contrast of simulated nasopharyngeal carcinoma lesions excited by light at a wavelength of 488 nm is about threefold higher than that at 514.5 nm, and the sensitivity and resolution of the fluorescence and reflection twilight images can satisfy the needs of clinical diagnosis and localization.
NASA Astrophysics Data System (ADS)
Okubo, C. H.; Schultz, R. A.; Nahm, A. L.
2007-07-01
The strength and deformability of light-toned layered deposits are estimated based on measurements of porosity from Microscopic Imager data acquired by MER Opportunity during its traverse from Eagle Crater to Erebus Crater.
High resolution Cerenkov light imaging of induced positron distribution in proton therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Fujii, Kento; Morishita, Yuki
2014-11-01
Purpose: In proton therapy, imaging of the positron distribution produced by fragmentation during or soon after proton irradiation is a useful method to monitor the proton range. Although positron emission tomography (PET) is typically used for this imaging, its spatial resolution is limited. Cerenkov light imaging is a new molecular imaging technology that detects the visible photons produced by high-speed electrons using a high-sensitivity optical camera. Because its inherent spatial resolution is much higher than that of PET, more precise information on the proton-induced positron distribution can be measured with Cerenkov light imaging technology. For this purpose, the authors conducted Cerenkov light imaging of the induced positron distribution in proton therapy. Methods: First, the authors evaluated the spatial resolution of their Cerenkov light imaging system with a ²²Na point source for the actual imaging setup. Then transparent acrylic phantoms (100 × 100 × 100 mm³) were irradiated with two different proton energies using a spot scanning proton therapy system. Cerenkov light imaging of each phantom was conducted using a high-sensitivity electron-multiplied charge-coupled device (EM-CCD) camera. Results: The Cerenkov light's spatial resolution for the setup was 0.76 ± 0.6 mm FWHM. The authors obtained high resolution Cerenkov light images of the positron distributions in the phantoms for two different proton energies and made fused images of the reference images and the Cerenkov light images. The depths of the positron distribution in the phantoms from the Cerenkov light images were almost identical to the simulation results. The decay curves derived from the regions-of-interest (ROIs) set on the Cerenkov light images revealed that Cerenkov light images can be used for estimating the half-life of the radionuclide components of the positrons. Conclusions: High resolution Cerenkov light imaging of the proton-induced positron distribution was possible.
The authors conclude that Cerenkov light imaging of proton-induced positrons is promising for proton therapy.
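The half-life estimation step mentioned in the Results can be sketched as a log-linear least-squares fit of an ROI decay curve, assuming a single dominant decay component. The synthetic ¹¹C-like curve below is illustrative, not data from the paper.

```python
import math

def half_life_from_counts(times_s, counts):
    # Log-linear least-squares fit of a single-component decay curve:
    # ln(counts) = ln(C0) - (ln 2 / t_half) * t, so the fitted slope
    # gives the half-life of the dominant positron emitter in the ROI.
    ys = [math.log(c) for c in counts]
    n = len(times_s)
    mt = sum(times_s) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times_s, ys))
             / sum((t - mt) ** 2 for t in times_s))
    return -math.log(2) / slope

# synthetic 11C-like decay curve (half-life about 20.4 min)
t_half = 20.4 * 60.0
times = [0.0, 300.0, 600.0, 900.0]
counts = [1000.0 * 2.0 ** (-t / t_half) for t in times]
est = half_life_from_counts(times, counts)
```

Mixtures of emitters would need a multi-exponential fit, but a single component is often a reasonable first pass for identifying the dominant radionuclide.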
Polarimetric imaging of retinal disease by polarization sensitive SLO
NASA Astrophysics Data System (ADS)
Miura, Masahiro; Elsner, Ann E.; Iwasaki, Takuya; Goto, Hiroshi
2015-03-01
Polarimetric imaging is used to evaluate different features of macular disease. Polarimetry images were recorded using a commercially available polarization-sensitive scanning laser ophthalmoscope at 780 nm (PS-SLO, GDx-N). From the PS-SLO data sets, we computed the average reflectance image, the depolarized light image, and the ratio-depolarized light image. The average reflectance image is the grand mean of all input polarization states. The depolarized light image is the minimum of the crossed channel. The ratio-depolarized light image is the ratio between the average reflectance image and the depolarized light image, and was used to compensate for variations in brightness. Each polarimetry image was compared with the autofluorescence image at 800 nm (NIR-AF) and the autofluorescence image at 500 nm (SW-AF). We evaluated four eyes with geographic atrophy in age-related macular degeneration, one eye with retinal pigment epithelium hyperplasia, and two eyes with chronic central serous chorioretinopathy. Polarization analysis could selectively emphasize different features of the retina. Findings in the ratio-depolarized light images had similarities to and differences from the NIR-AF images. Areas of hyper-AF in NIR-AF images appeared as high-intensity areas in the ratio-depolarized light images, representing melanin accumulation. Areas of hypo-AF in NIR-AF images appeared as low-intensity areas in the ratio-depolarized light images, representing melanin loss. Drusen appeared as high-intensity areas in the ratio-depolarized light images, but NIR-AF images were insensitive to the presence of drusen. Unlike the NIR-AF images, the SW-AF images showed completely different features from the ratio-depolarized images. Polarization-sensitive imaging is an effective tool for the non-invasive assessment of macular disease.
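The per-pixel image combinations named above are simple arithmetic on the co- and crosspolarized channels. A hedged sketch: the source does not give the exact normalization of the ratio image, so the depolarized-over-average direction below is an assumption for illustration.

```python
def polarimetry_images(hh, hv, eps=1e-6):
    # Per-pixel combinations from the source: average reflectance
    # I = HH + HV and the depolarized (crossed) channel HV; the ratio
    # image here is HV / I, a brightness-compensated depolarization map.
    # (The exact normalization direction is an assumption, not stated
    # numerically in the abstract.)
    avg = [h + v for h, v in zip(hh, hv)]
    ratio = [v / (a + eps) for v, a in zip(hv, avg)]
    return avg, ratio

# two toy pixels: one weakly and one strongly depolarizing
avg, ratio = polarimetry_images([3.0, 2.0], [1.0, 2.0])
```

The `eps` guard simply avoids division by zero in dark pixels.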
CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel
Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun
2014-01-01
A CMOS image sensor-based implantable glucose sensor using an optical sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, the hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments. PMID:25426316
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in reconstruction quality.
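The 'four bucket principle' the abstract invokes is the classic continuous-wave time-of-flight demodulation: four samples of the correlation taken at 0, 90, 180, and 270 degrees of the modulation recover the phase, and the phase maps to depth. A minimal sketch with an idealized sinusoidal correlation model (offset and amplitude are illustrative):

```python
import math

def depth_from_four_buckets(c0, c1, c2, c3, mod_freq_hz, c=299792458.0):
    # "Four bucket" demodulation: the quadrature differences give the
    # phase of the returned modulation; depth = c * phi / (4 * pi * f)
    # accounts for the round trip to the object and back.
    phi = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)
    return c * phi / (4 * math.pi * mod_freq_hz)

# synthesize buckets for a known phase delay of 1 radian at 10 MHz
f = 10e6
phi_true = 1.0
buckets = [100.0 + 50.0 * math.cos(phi_true + k * math.pi / 2) for k in range(4)]
depth = depth_from_four_buckets(*buckets, f)
```

Because the offset term cancels in both differences, constant ambient light drops out of the phase estimate, which is the property the second-order correlation transform further exploits.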
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Magneto-optical imaging technique for hostile environments: The ghost imaging approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meda, A.; Caprile, A.; Avella, A.
2015-06-29
In this paper, we develop an approach to magneto-optical imaging (MOI), applying a ghost imaging (GI) protocol to perform Faraday microscopy. MOI is of the utmost importance for the investigation of the magnetic properties of material samples, through analysis of the shape, dimensions, and dynamics of Weiss domains. Nevertheless, in some extreme conditions, such as cryogenic temperatures or high magnetic field applications, there is a lack of domain images due to the difficulty of creating an efficient imaging system in such environments. Here, we present an innovative MOI technique that separates the imaging optical path from the one illuminating the object. The technique is based on thermal light GI and exploits correlations between light beams to retrieve the image of magnetic domains. As a proof of principle, the proposed technique is applied to the Faraday magneto-optical observation of the remanence domain structure of an yttrium iron garnet sample.
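The thermal-light GI correlation the abstract relies on can be sketched numerically: each pixel of the reference patterns is correlated with the single-pixel ("bucket") signal collected behind the object, and the covariance recovers the object's transmission. The 4-pixel object, uniform random patterns, and pattern count below are toy choices for illustration.

```python
import random

def ghost_image(patterns, bucket):
    # Thermal-light ghost imaging reconstruction: correlate each reference
    # pixel with the bucket signal, G(x) = <I(x)*B> - <I(x)><B>, which is
    # proportional to the object's transmission at pixel x.
    n = len(patterns)
    npix = len(patterns[0])
    mean_b = sum(bucket) / n
    mean_i = [sum(p[x] for p in patterns) / n for x in range(npix)]
    return [sum(p[x] * bk for p, bk in zip(patterns, bucket)) / n
            - mean_i[x] * mean_b for x in range(npix)]

random.seed(0)
obj = [0.0, 1.0, 0.0, 1.0]  # toy transmission mask (stand-in for domains)
patterns = [[random.random() for _ in obj] for _ in range(5000)]
bucket = [sum(p[x] * obj[x] for x in range(len(obj))) for p in patterns]
g = ghost_image(patterns, bucket)
```

The transparent pixels accumulate a positive correlation while the opaque ones average toward zero, which is exactly what lets the imaging path stay physically separate from the illumination path.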
High resolution PET breast imager with improved detection efficiency
Majewski, Stanislaw
2010-06-08
A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
Interferometric detection of nanoparticles
NASA Astrophysics Data System (ADS)
Hayrapetyan, Karen
Interferometric surfaces enhance light scattering from nanoparticles through constructive interference of partial scattered waves. By placing the nanoparticles on interferometric surfaces tuned to a special surface phase interferometric condition, the particles are detectable in the dilute limit through interferometric image contrast in a heterodyne light scattering configuration, or through diffraction in a homodyne scattering configuration. The interferometric enhancement has applications for imaging and diffractive biosensors. We present a modified model based on Double Interaction (DI) to explore bead-based detection mechanisms using imaging, scanning and diffraction. The application goal of this work is to explore the trade-offs between the sensitivity and throughput among various detection methods. Experimentally we use thermal oxide on silicon to establish and control surface interferometric conditions. Surface-captured gold beads are detected using Molecular Interferometric Imaging (MI2) and Spinning-Disc Interferometry (SDI). Double-resonant enhancement of light scattering leads to high-contrast detection of 100 nm radius gold nanoparticles on an interferometric surface. The double-resonance condition is achieved when resonance (or anti-resonance) from an asymmetric Fabry-Perot substrate coincides with the Mie resonance of the gold nanoparticle. The double-resonance condition is observed experimentally using molecular interferometric imaging (MI2). An invisibility condition is identified for which the gold nanoparticles are optically cloaked by the interferometric surface.
Jacques, Steven L.; Roussel, Stéphane; Samatham, Ravikant
2016-01-01
This report describes how optical images acquired using linearly polarized light can specify the anisotropy of scattering (g) and the ratio of reduced scattering [μs′=μs(1−g)] to absorption (μa), i.e., N′=μs′/μa. A camera acquired copolarized (HH) and crosspolarized (HV) reflectance images of a tissue (skin), which yielded images based on the intensity (I=HH+HV) and difference (Q=HH−HV) of reflectance images. Monte Carlo simulations generated an analysis grid (or lookup table), which mapped Q and I into a grid of g versus N′, i.e., g(Q,I) and N′(Q,I). The anisotropy g is interesting because it is sensitive to the submicrometer structure of biological tissues. Hence, polarized light imaging can monitor shifts in the submicrometer (50 to 1000 nm) structure of tissues. The Q values for forearm skin on two subjects (one Caucasian, one pigmented) were in the range of 0.046±0.007 (24), which is the mean±SD for 24 measurements on 8 skin sites×3 visible wavelengths, 470, 524, and 625 nm, which indicated g values of 0.67±0.07 (24). PMID:27165546
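The I and Q image computation described above follows directly from the two polarized reflectance images; a minimal sketch with hypothetical reflectance values:

```python
import numpy as np

def polarization_images(hh, hv):
    """Combine co-polarized (HH) and cross-polarized (HV) reflectance
    images into intensity (I = HH + HV) and difference (Q = HH - HV) images."""
    hh = np.asarray(hh, dtype=float)
    hv = np.asarray(hv, dtype=float)
    return hh + hv, hh - hv

# Hypothetical 2x2 reflectance images
hh = np.array([[0.60, 0.50], [0.55, 0.62]])
hv = np.array([[0.40, 0.35], [0.38, 0.42]])
I, Q = polarization_images(hh, hv)
```

In the paper, the resulting (Q, I) pairs are then mapped through the Monte Carlo lookup table to estimate g and N′; that table is not reproduced here.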
A photophoretic-trap volumetric display
NASA Astrophysics Data System (ADS)
Smalley, D. E.; Nygaard, E.; Squire, K.; van Wagoner, J.; Rasmussen, J.; Gneiting, S.; Qaderi, K.; Goodsell, J.; Rogers, W.; Lindsey, M.; Costner, K.; Monk, A.; Pearson, M.; Haymore, B.; Peatross, J.
2018-01-01
Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.
High visibility temporal ghost imaging with classical light
NASA Astrophysics Data System (ADS)
Liu, Jianbin; Wang, Jingjing; Chen, Hui; Zheng, Huaibin; Liu, Yanyan; Zhou, Yu; Li, Fu-li; Xu, Zhuo
2018-03-01
High visibility temporal ghost imaging with classical light is possible when superbunching pseudothermal light is employed. In numerical simulation, the visibility of temporal ghost imaging with pseudothermal light, equaling (4.7 ± 0.2)%, can be increased to (75 ± 8)% in the same scheme with superbunching pseudothermal light. The reasons why the retrieved images differ for superbunching pseudothermal light with different values of the degree of second-order coherence are discussed in detail. It is concluded that a high visibility, high quality temporal ghost image can be obtained by collecting a sufficient number of data points. The results are helpful for understanding the difference between ghost imaging with classical light and with entangled photon pairs. Superbunching pseudothermal light can be employed to improve image quality in ghost imaging applications.
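Visibility figures like those quoted above are conventionally defined from the extrema of the correlation (or intensity) signal; a minimal helper, purely illustrative:

```python
def visibility(i_max, i_min):
    """Visibility V = (Imax - Imin) / (Imax + Imin): V = 1 for a signal
    that drops to zero between peaks, V -> 0 for a nearly flat signal."""
    return (i_max - i_min) / (i_max + i_min)
```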
NASA Astrophysics Data System (ADS)
Jünger, Felix; Olshausen, Philipp V.; Rohrbach, Alexander
2016-07-01
Living cells are highly dynamic systems whose cellular structures are often below the optical resolution limit. Super-resolution microscopes, usually based on fluorescence cell labelling, are typically too slow to resolve small, dynamic structures. We present a label-free microscopy technique which can generate thousands of super-resolved, high-contrast images at a frame rate of 100 Hertz and without any post-processing. The technique is based on oblique sample illumination with coherent light, an approach long believed inapplicable in the life sciences because of excessive interference artefacts. However, by circulating an incident laser beam through 360° during one image acquisition, relevant image information is amplified. By combining total internal reflection illumination with dark-field detection, structures as small as 150 nm become separable through local destructive interference. The technique images local changes in refractive index through scattered laser light and is applied to living mouse macrophages and helical bacteria, revealing unexpected dynamic processes.
Jünger, Felix; Olshausen, Philipp v.; Rohrbach, Alexander
2016-01-01
Living cells are highly dynamic systems whose cellular structures are often below the optical resolution limit. Super-resolution microscopes, usually based on fluorescence cell labelling, are typically too slow to resolve small, dynamic structures. We present a label-free microscopy technique which can generate thousands of super-resolved, high-contrast images at a frame rate of 100 Hertz and without any post-processing. The technique is based on oblique sample illumination with coherent light, an approach long believed inapplicable in the life sciences because of excessive interference artefacts. However, by circulating an incident laser beam through 360° during one image acquisition, relevant image information is amplified. By combining total internal reflection illumination with dark-field detection, structures as small as 150 nm become separable through local destructive interference. The technique images local changes in refractive index through scattered laser light and is applied to living mouse macrophages and helical bacteria, revealing unexpected dynamic processes. PMID:27465033
Schleede, Simone; Meinel, Felix G.; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Potdevin, Guillaume; Malecki, Andreas; Adam-Neumair, Silvia; Thieme, Sven F.; Bamberg, Fabian; Nikolaou, Konstantin; Bohla, Alexander; Yildirim, Ali Ö.; Loewen, Roderick; Gifford, Martin; Ruth, Ronald; Eickelberg, Oliver; Reiser, Maximilian; Pfeiffer, Franz
2012-01-01
In early stages of various pulmonary diseases, such as emphysema and fibrosis, the change in X-ray attenuation is not detectable with absorption-based radiography. To monitor the morphological changes that the alveoli network undergoes in the progression of these diseases, we propose using the dark-field signal, which is related to small-angle scattering in the sample. Combined with the absorption-based image, the dark-field signal enables better discrimination between healthy and emphysematous lung tissue in a mouse model. All measurements have been performed at 36 keV using a monochromatic laser-driven miniature synchrotron X-ray source (Compact Light Source). In this paper we present grating-based dark-field images of emphysematous vs. healthy lung tissue, where the strong dependence of the dark-field signal on mean alveolar size leads to improved diagnosis of emphysema in lung radiographs. PMID:23074250
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
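The glare elimination step in the dual-image acquisition above is not detailed in this abstract; one common approach, shown here purely as an illustrative assumption (not necessarily the authors' method), is to take the per-pixel minimum of two registered frames so that a glare spot present in only one frame is suppressed:

```python
import numpy as np

def merge_glare_free(frame_a, frame_b):
    """Combine two registered fundus frames by taking the per-pixel
    minimum; a bright glare artifact appearing in only one of the
    frames is thereby removed from the merged image."""
    return np.minimum(np.asarray(frame_a, dtype=float),
                      np.asarray(frame_b, dtype=float))
```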
Diattenuation of brain tissue and its impact on 3D polarized light imaging
Menzel, Miriam; Reckfort, Julia; Weigand, Daniel; Köse, Hasan; Amunts, Katrin; Axer, Markus
2017-01-01
3D-polarized light imaging (3D-PLI) reconstructs nerve fibers in histological brain sections by measuring their birefringence. This study investigates another effect caused by the optical anisotropy of brain tissue – diattenuation. Based on numerical and experimental studies and a complete analytical description of the optical system, the diattenuation was determined to be below 4 % in rat brain tissue. It was demonstrated that the diattenuation effect has negligible impact on the fiber orientations derived by 3D-PLI. The diattenuation signal, however, was found to highlight different anatomical structures that cannot be distinguished with current imaging techniques, which makes Diattenuation Imaging a promising extension to 3D-PLI. PMID:28717561
NASA Astrophysics Data System (ADS)
Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi
1995-08-01
This paper proposes a simple, automatic method for recognizing the light source used with various color negative film brands by means of digital image processing. First, we stretch the image obtained from a negative based on standardized scaling factors, then extract the dominant color component among the red, green, and blue components of the stretched image. The dominant color component serves as the discriminator for recognition. The experimental results verified that each of the three techniques could recognize the light source from negatives of any single film brand and of all brands with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for automating color quality control in color reproduction from color negative film in mass processing and printing machines.
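The stretch-then-discriminate procedure can be sketched as follows; the unit scaling factors and the mean-based dominance test are illustrative assumptions, not the paper's exact standardized factors:

```python
import numpy as np

def dominant_channel(image, scale=(1.0, 1.0, 1.0)):
    """Apply per-channel scaling factors to an RGB negative image,
    contrast-stretch each channel to [0, 1], and return the index of
    the dominant color channel (0 = R, 1 = G, 2 = B), which serves
    as the discriminator for light-source recognition."""
    img = np.asarray(image, dtype=float) * np.asarray(scale)
    lo = img.min(axis=(0, 1))
    hi = img.max(axis=(0, 1))
    stretched = (img - lo) / np.where(hi > lo, hi - lo, 1.0)
    # The channel with the largest mean is taken as dominant
    return int(np.argmax(stretched.mean(axis=(0, 1))))
```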
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with quantitative phase images for the dynamic study of cellular processes.
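Under the simplifying assumption of uniform intensity, the TIE reduces to a Poisson equation for the phase, which can be inverted with FFTs; a minimal sketch of this standard solver (not the authors' implementation):

```python
import numpy as np

def tie_phase(dIdz, intensity, wavelength, pixel=1.0):
    """Recover phase from an axial intensity derivative via the
    transport of intensity equation, assuming uniform intensity:
    laplacian(phi) = -(k / I) * dI/dz, solved by an FFT Poisson solver."""
    k = 2.0 * np.pi / wavelength
    n, m = dIdz.shape
    fy = np.fft.fftfreq(n, d=pixel)
    fx = np.fft.fftfreq(m, d=pixel)
    fx2, fy2 = np.meshgrid(fx**2, fy**2)
    denom = -4.0 * np.pi**2 * (fx2 + fy2)
    denom[0, 0] = 1.0                    # avoid divide-by-zero at DC
    rhs = -k * dIdz / intensity
    phi_hat = np.fft.fft2(rhs) / denom
    phi_hat[0, 0] = 0.0                  # phase is defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))
```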
Illumination adaptation with rapid-response color sensors
NASA Astrophysics Data System (ADS)
Zhang, Xinchi; Wang, Quan; Boyer, Kim L.
2014-09-01
Smart lighting solutions based on imaging sensors such as webcams or time-of-flight sensors suffer from rising privacy concerns. In this work, we use low-cost non-imaging color sensors to measure the local luminous flux of different colors in an indoor space. These sensors have a much higher data acquisition rate and are much cheaper than many off-the-shelf commercial products. We have developed several applications with these sensors, including illumination feedback control and occupancy-driven lighting.
Chakkarapani, Suresh Kumar; Sun, Yucheng; Lee, Seungah; Fang, Ning; Kang, Seong Ho
2018-05-22
Three-dimensional (3D) orientations of individual anisotropic plasmonic nanoparticles in aggregates were observed in real time by integrated light sheet super-resolution microscopy (iLSRM). Asymmetric light scattering of a gold nanorod (AuNR) was used to trigger signals based on the polarizer angle. Controlled photoswitching was achieved by turning the polarizer and obtaining a series of images at different polarization directions. 3D subdiffraction-limited super-resolution images were obtained by superlocalization of scattering signals as a function of the anisotropic optical properties of AuNRs. Varying the polarizer angle allowed resolution of the orientation of individual AuNRs. 3D images of individual nanoparticles were resolved in aggregated regions, with axial resolution as low as 64 nm and spatial resolution of 28 nm. The proposed imaging setup and localization approach demonstrate a convenient method for imaging in a noisy environment where the majority of scattering noise comes from cellular components. This integrated 3D iLSRM and localization technique was shown to be reliable and useful in the field of 3D non-fluorescence super-resolution imaging.
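Superlocalization of a scattering signal can be illustrated with a simple intensity-weighted centroid estimate; the paper's actual fitting procedure may differ, so this is only a stand-in sketch:

```python
import numpy as np

def centroid_localization(patch):
    """Estimate a sub-pixel particle position as the intensity-weighted
    centroid of a scattering-signal image patch; returns (row, col)."""
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total
```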
Reconfigurable and responsive droplet-based compound micro-lenses.
Nagelberg, Sara; Zarzar, Lauren D; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M; Kolle, Mathias
2017-03-07
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications-integral micro-scale imaging devices and light field display technology-thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.
Reconfigurable and responsive droplet-based compound micro-lenses
Nagelberg, Sara; Zarzar, Lauren D.; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A.; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M.; Kolle, Mathias
2017-01-01
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses. PMID:28266505
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, and human-computer interaction. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, using various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition, based on a comparison of recognition rates with conventional systems. PMID:26828487
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-27
Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, and human-computer interaction. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, using various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition, based on a comparison of recognition rates with conventional systems.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, achieving reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
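The pseudo-inverse reconstruction with a DFT measurement matrix can be sketched as follows; the 1-D object and matrix size are illustrative, not the paper's simulation parameters:

```python
import numpy as np

def ghost_image_pinv(measure_matrix, bucket_signals):
    """Reconstruct an object from bucket-detector signals using the
    pseudo-inverse of the (possibly complex) measurement matrix."""
    return np.linalg.pinv(measure_matrix) @ bucket_signals

# Hypothetical 1-D 'object' of 8 pixels
n = 8
x = np.zeros(n)
x[2] = 1.0
x[5] = 0.5
# Rows of the DFT matrix act as the preset illumination patterns
dft = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
y = dft @ x                       # simulated bucket measurements
x_hat = ghost_image_pinv(dft, y)  # pseudo-inverse reconstruction
```

With a full-rank square measurement matrix the pseudo-inverse recovers the object exactly; the trade-offs discussed in the abstract arise when fewer measurements than pixels are taken.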
NASA Astrophysics Data System (ADS)
Morita, Shogo; Ito, Shusei; Yamamoto, Hirotsugu
2017-02-01
An aerial display can form a transparent floating screen in mid-air and is expected to provide aerial floating signage. We have proposed aerial imaging by retro-reflection (AIRR) to form a large aerial LED screen. However, the luminance of the aerial image is not sufficiently high for signage under broad daylight. The purpose of this paper is to propose a novel aerial display scheme that features hybrid display of two different types of images. Under daylight, signs made of cubes are visible; at night, or under dark lighting, aerial LED signs become visible. Our proposed hybrid display is composed of an LED sign, a beam splitter, retro-reflectors, and transparent acrylic cubes. The aerial LED sign is formed with AIRR. Furthermore, we place transparent acrylic cubes on the beam splitter. Light from the LED sign enters the transparent acrylic cubes, reflects twice inside them, then exits and converges to the position plane-symmetric to the light source with respect to the cube array. Thus, the transparent acrylic cubes also form a real image of the source LED sign. We then form a sign with the transparent acrylic cubes so that this cube-based sign is apparent under daylight. We have developed a prototype display using 1-cm transparent cubes and retro-reflective sheeting and successfully confirmed aerial image formation with AIRR and transparent cubes, as well as the cube-based sign under daylight.
NASA Astrophysics Data System (ADS)
Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang
2017-02-01
Recently, hyperspectral image projectors (HIP) have been developed in the field of remote sensing. Owing to its advanced performance for system-level validation, target detection, and hyperspectral image calibration, HIP has great potential in military, medical, commercial, and other applications. HIP is based on the digital micro-mirror device (DMD) and projection technology and is capable of projecting arbitrary programmable spectra (controlled by a PC) into each pixel of the IUT (instrument under test), such that the projected image simulates realistic scenes the hyperspectral imager would measure during use, enabling system-level performance testing and validation. In this paper, we build a visible hyperspectral image projector, also called a visible target simulator, with two DMDs: the first DMD produces selected monochromatic light over the wavelength range of 410 to 720 nm, which then illuminates the second DMD. A computer loads an image of a realistic scene onto the second DMD, so that the target and background can be projected by the second DMD with the selected monochromatic light. Target conditions can thus be simulated, and experiments can be controlled and repeated in the laboratory, allowing detector instruments to be tested indoors. Here, we focus on the spectral engine design, including the optical system, the DMD programmable spectrum, and the spectral resolution of the selected spectrum. The details are presented.
Moon Phases Over the Persian Gulf
2017-12-08
NASA images acquired October 15, 2012 The Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured these nighttime views of the Persian Gulf region on September 30, October 5, October 10, and October 15, 2012. The images are from the VIIRS “day-night band,” which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as gas flares, auroras, wildfires, city lights, and reflected moonlight. Each image includes an inset of the Moon in four different phases. September 30 shows the Persian Gulf by the light of the full Moon; October 15 shows the effects of a new Moon. As the amount of moonlight decreases, some land surface features become harder to detect, but the lights from cities and ships become more obvious. Urbanization is most apparent along the northeastern coast of Saudi Arabia, in Qatar, and in the United Arab Emirates (UAE). In Qatar and UAE, major highways can even be discerned by nighttime lights. In eighteenth-century England, a small group of entrepreneurs, inventors and free thinkers—James Watt and Charles Darwin’s grandfathers among them—started a club. They named it the Lunar Society, and the “lunaticks” scheduled their dinner meetings on evenings of the full Moon. The timing wasn’t based on any kind of superstition, it was based on practicality. In the days before electricity, seeing one’s way home after dark was far easier by the light of a full Moon. In the early twenty-first century, electricity has banished the need for such careful scheduling, but the light of the full Moon still makes a difference. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS day-night band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Michon Scott. 
Instrument: Suomi NPP - VIIRS. Credit: NASA Earth Observatory.
Physics-based subsurface visualization of human tissue.
Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert
2007-01-01
In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools to noninvasively image tumors, blood vasculature, and monitor blood oxygenation levels are being constructed. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.
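As a sanity check for such light-transport models, the diffusion approximation has a closed-form solution for an isotropic point source in an infinite homogeneous medium; a short helper (the paper itself uses a finite element solver on inhomogeneous tissue, so this is only a standard analytic reference case):

```python
import numpy as np

def diffusion_fluence(r, mu_a, mu_s_prime):
    """Fluence rate of an isotropic point source in an infinite scattering
    medium under the diffusion approximation:
        phi(r) = exp(-mu_eff * r) / (4 * pi * D * r),
    with D = 1 / (3 * (mu_a + mu_s')) and mu_eff = sqrt(mu_a / D)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a / D)
    r = np.asarray(r, dtype=float)
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)
```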
NASA Astrophysics Data System (ADS)
Fard, Ali M.; Gardecki, Joseph A.; Ughi, Giovanni J.; Hyun, Chulho; Tearney, Guillermo J.
2016-02-01
Intravascular optical coherence tomography (OCT) is a high-resolution catheter-based imaging method that provides three-dimensional microscopic images of the coronary artery in vivo, facilitating coronary artery disease treatment decisions based on detailed morphology. Near-infrared spectroscopy (NIRS) has proven to be a powerful tool for identification of lipid-rich plaques inside the coronary walls. We have recently demonstrated a dual-modality intravascular imaging technology that integrates OCT and NIRS into one imaging catheter using a two-fiber arrangement and a custom-made dual-channel fiber rotary junction. It thereby enables simultaneous acquisition of microstructural and compositional information at 100 frames/second for improved diagnosis of coronary lesions. The dual-modality OCT-NIRS system employs a single wavelength-swept light source for both the OCT and NIRS modalities. It subsequently uses a high-speed photoreceiver to detect the NIRS spectrum in the time domain. Although the use of one light source greatly simplifies the system configuration, such a light source exhibits pulse-to-pulse wavelength and intensity variation due to mechanical scanning of the wavelength. This is particularly problematic for the NIRS modality and compromises the reliability of the acquired spectra. To address this challenge, we developed a robust data acquisition and processing method that compensates for the spectral variations of the wavelength-swept light source. The proposed method extracts the properties of the light source, i.e., variation period and amplitude, from a reference spectrum and subsequently calibrates the NIRS datasets. We have applied this method to datasets obtained from cadaver human coronary arteries using a polygon-scanning (1230-1350 nm) OCT system operating at 100,000 sweeps per second. The results suggest that our algorithm accurately and robustly compensates the spectral variations and visualizes the dual-modality OCT-NIRS images. These findings are therefore crucial for the practical application and clinical translation of dual-modality intravascular OCT-NIRS imaging when the same swept source is used for both OCT and spectroscopy.
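The reference-spectrum calibration idea can be sketched as a per-sample normalization; this is a simplified stand-in for the authors' method, which additionally extracts the variation period and amplitude of the source:

```python
import numpy as np

def calibrate_nirs(spectra, reference):
    """Divide each acquired NIRS sweep by a reference spectrum of the
    swept source, compensating pulse-to-pulse intensity variation.
    `spectra` is (n_sweeps, n_samples); `reference` is (n_samples,)."""
    reference = np.asarray(reference, dtype=float)
    safe = np.where(reference > 0, reference, 1.0)  # guard against zeros
    return np.asarray(spectra, dtype=float) / safe
```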
Design of a concise Féry-prism hyperspectral imaging system based on multi-configuration
NASA Astrophysics Data System (ADS)
Dong, Wei; Nie, Yun-feng; Zhou, Jin-song
2013-08-01
In order to meet the needs of spaceborne and airborne hyperspectral imaging systems for light weight, simplicity, and high spatial resolution, a novel design of a Féry-prism hyperspectral imaging system based on the Zemax multi-configuration method is presented. The structure is arranged by analyzing optical monochromatic aberrations theoretically, and the optical layout of the design is concise. The design is based on the Offner relay configuration, with the secondary mirror replaced by a Féry prism with curved surfaces and a reflective front face. By reflection, the light beam passes through the Féry prism twice, which improves spectral resolution and enhances image quality at the same time. The result shows that the system achieves light weight and simplicity compared to other hyperspectral imaging systems. Composed of merely two spherical mirrors and one achromatized Féry prism performing both dispersion and imaging functions, the structure is concise and compact. The average spectral resolution is 6.2 nm; the MTFs over the 0.45-1.00 um spectral range are greater than 0.75 and the RMS spot radii are less than 2.4 um; the maximal smile is less than 10% of a pixel, while the keystone is less than 2.8% of a pixel; image quality approaches the diffraction limit. The design result shows that a hyperspectral imaging system with one modified Féry prism substituting for the secondary mirror of the Offner relay configuration is feasible both in theory and in practice, and possesses the merits of simple structure, convenient optical alignment, good image quality, high spatial and spectral resolution, and adjustable dispersive nonlinearity. The system satisfies the requirements of airborne or spaceborne hyperspectral imaging systems.
Scanned Image Projection System Employing Intermediate Image Plane
NASA Technical Reports Server (NTRS)
DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)
2014-01-01
In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours when there is outside light, but human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain costly, which makes it difficult to install and use them in a variety of places. Consequently, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments or on video-based methods that capture and process multiple images, which increases processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
White-light optical vortex coronagraph
NASA Astrophysics Data System (ADS)
Kanburapa, Prachyathit
An optical vortex is characterized by a dark core of destructive interference in a light beam. One of the methods commonly employed to create an optical vortex is by using a computer-generated hologram. A vortex hologram pattern is computed from the interference pattern between a reference plane wave and a vortex wave, resulting in a forked grating pattern. In astronomy, an optical vortex coronagraph (OVC) is one of the most promising high contrast imaging techniques for the direct imaging of extra-solar planets. Direct imaging of extra-solar planets is a challenging task since the brightness of the parent star is extremely high compared to its orbiting planets. The on-axis light from the parent star gets diffracted in the coronagraph, forming a "ring of fire" pattern, whereas the slightly off-axis light from the planet remains intact. A Lyot stop can then be used to block the ring of fire pattern, thus allowing only the planetary light to get through to the imaging camera. Contrast enhancements of 10^6 or more are possible, provided the vortex lens (spiral phase plate) has exceptional optical quality. By using a vortex hologram with a 4 μm pitch and an f/300 focusing lens, we were able to demonstrate the creation of a "ring of fire" using a white light emitting diode as a source. A dispersion-compensating linear diffraction grating of 4 μm pitch was used to bring the rings together to form a single white light ring of fire. To our knowledge, this is the first time a vortex hologram-based OVC has been demonstrated, resulting in a well-formed white light ring of fire. Experimental results show a measured power contrast of 1/515 when a HeNe laser was used as the light source and 1/77 when using a white light emitting diode.
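The forked-grating hologram described above (the interference of a tilted reference plane wave with a vortex wave) is straightforward to sketch numerically. The following is a minimal illustration, not the author's code; the grid size, pitch, and binarization threshold are arbitrary choices:

```python
import numpy as np

def forked_grating(n=256, charge=1, pitch=16):
    """Binary forked-grating hologram: interference of a tilted plane
    wave (spatial period `pitch` in pixels) with an optical vortex of
    the given topological charge. The phase singularity at the center
    produces the characteristic fork in the fringes."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    theta = np.arctan2(y, x)                      # azimuthal angle
    phase = 2 * np.pi * x / pitch - charge * theta
    return (np.cos(phase) > 0).astype(np.uint8)   # binarized fringes
```

With `charge=0` the pattern reduces to an ordinary straight-line grating, which is a quick sanity check on the geometry.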
Development of integrated semiconductor optical sensors for functional brain imaging
NASA Astrophysics Data System (ADS)
Lee, Thomas T.
Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.
Virtual reality 3D headset based on DMD light modulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Evans, Allan; Tang, Edward
We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.
Wavefront detection method of a single-sensor based adaptive optics system.
Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li
2015-08-10
In an adaptive optics system (AOS) for optical telescopes, the commonly reported wavefront sensing strategy consists of two parts: a specific sensor for tip-tilt (TT) detection and another wavefront sensor for the detection of other distortions. Thus, a part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS.
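The basic idea of extracting tip-tilt from the same Shack-Hartmann data used for higher-order sensing can be sketched as follows. This is a simplified illustration of the principle (the global TT is the mean spot displacement over all subapertures; the residuals carry the higher-order slopes), not the paper's algorithm, and the function name is an assumption:

```python
import numpy as np

def split_tip_tilt(spot_shifts):
    """Separate global tip-tilt from higher-order wavefront slopes.
    spot_shifts: (N, 2) array of (x, y) spot displacements measured in
    each Shack-Hartmann subaperture. The mean shift over all
    subapertures is the global tip-tilt component; subtracting it
    leaves the local slopes due to other distortions."""
    shifts = np.asarray(spot_shifts, dtype=float)
    tip_tilt = shifts.mean(axis=0)    # global component
    residual = shifts - tip_tilt      # higher-order component
    return tip_tilt, residual
```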
NASA Astrophysics Data System (ADS)
Ma, Yang; Wu, Congjun; Xu, Zhihao; Wang, Fei; Wang, Min
2018-05-01
Photoconductor arrays with both high responsivity and large ON/OFF ratios are of great importance for the application of image sensors. Herein, a ZnO vertical nanorod array based photoconductor with a light absorption layer separated from the device channel has been designed, in which the photo-generated carriers along the axial ZnO nanorods drift to the external electrodes through nanorod-nanorod junctions in the dense layer at the bottom. This design allows us to enhance the photocurrent with unchanged dark current by increasing the ratio between the ZnO nanorod length and the thickness of the dense layer, achieving both high responsivity and large ON/OFF ratios. As a result, the as-fabricated devices possess a high responsivity of 1.3 × 10^5 A/W, a high ON/OFF ratio of 790, a high detectivity of 1.3 × 10^13 Jones, and a low detectable light intensity of 1 μW/cm^2. More importantly, the developed approach enables the integration of ZnO vertical nanorod array based photodetectors as image sensors with uniform device-to-device performance.
Comprehensive model for predicting perceptual image quality of smart mobile devices.
Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng
2015-01-01
An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality was first predicted from its two constituent attributes with multiple linear regression functions for the different types of images, and then mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms are applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and the performance was verified against the visual data.
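The two-attribute multiple linear regression described above has a simple least-squares form. The sketch below is illustrative only (the paper's actual attributes, coefficients, and data are not reproduced here); the function names and synthetic data are assumptions:

```python
import numpy as np

def fit_quality_model(attr1, attr2, overall):
    """Least-squares fit of: overall = b0 + b1*attr1 + b2*attr2,
    mirroring a two-attribute multiple linear regression."""
    X = np.column_stack([np.ones(len(attr1)), attr1, attr2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(overall, float), rcond=None)
    return coef  # [b0, b1, b2]

def predict_quality(coef, attr1, attr2):
    """Predict overall image quality from the two constituent attributes."""
    return coef[0] + coef[1] * attr1 + coef[2] * attr2
```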
Biocular vehicle display optical designs
NASA Astrophysics Data System (ADS)
Chu, H.; Carter, Tom
2012-06-01
Biocular vehicle display optics use a fast collimating lens (f/# < 0.9) that presents the image of the display at infinity to both eyes of the viewer. Each eye captures the scene independently and the brain merges the two images into one through the overlapping portions of the images. With the recent conversion from analog CRT-based displays to lighter, more compact active-matrix organic light-emitting diode (AMOLED) digital image sources, display optical designs have evolved to take advantage of the higher resolution AMOLED image sources. To maximize the field of view of the display optics and fully resolve the smaller pixels, the digital image source is pre-magnified by relay optics or a coherent taper fiber optics plate. Coherent taper fiber optics plates are used extensively to: 1. convert plano focal planes to spherical focal planes in order to eliminate Petzval field curvature, which enables faster lens speed and/or a larger field of view for eyepieces and display optics; 2. provide pre-magnification to lighten the workload of the optics and further increase the numerical aperture and/or field of view; 3. improve light flux collection efficiency and field of view by collecting all the light emitted by the image source and guiding imaging light bundles toward the lens aperture stop; 4. reduce the complexity of the optical design and overall packaging volume by replacing pre-magnification optics with a compact taper fiber optics plate. This paper reviews and compares the performance of biocular vehicle display designs with and without a taper fiber optics plate.
Near Infrared Fluorescence Imaging in Nano-Therapeutics and Photo-Thermal Evaluation
Vats, Mukti; Mishra, Sumit Kumar; Baghini, Mahdieh Shojaei; Chauhan, Deepak S.; Srivastava, Rohit; De, Abhijit
2017-01-01
The unresolved and paramount challenge in bio-imaging and targeted therapy is to clearly define and demarcate the physical margins of tumor tissue. The ability to outline the healthy vital tissues that must be carefully navigated during transection in an intraoperative surgical procedure is a necessary and under-researched goal. To achieve the aforementioned objectives, there is a need to optimize design considerations in order to obtain not only an effective imaging agent but also attributes like favorable water solubility, biocompatibility, high molecular brightness, and a tissue-specific targeting approach. The emergence of near infra-red fluorescence (NIRF) light for tissue-scale imaging owes to its provision of highly specific images of the target organ. The special characteristics of the near infra-red window, such as minimal auto-fluorescence, low light scattering, and low absorption by biomolecules in tissue, converge to form an attractive modality for cancer imaging. Imparting molecular fluorescence as an exogenous contrast agent is the most beneficial attribute of NIRF light as a clinical imaging technology. Additionally, many such agents also display therapeutic potential as photo-thermal agents, thus meeting the dual purpose of imaging and therapy. Here, we primarily discuss the molecular imaging and therapeutic potential of two such classes of materials, i.e., inorganic NIR dyes and metallic gold nanoparticle based materials. PMID:28452928
Improved proton CT imaging using a bismuth germanium oxide scintillator.
Tanaka, Sodai; Nishio, Teiji; Tsuneda, Masato; Matsushita, Keiichiro; Kabuki, Shigeto; Uesaka, Mitsuru
2018-02-02
Range uncertainty is among the most formidable challenges associated with the treatment planning of proton therapy. Proton imaging, which includes proton radiography and proton computed tomography (pCT), is a useful verification tool. We have developed a pCT detection system that uses a thick bismuth germanium oxide (BGO) scintillator and a CCD camera. The current method is based on a previous detection system that used a plastic scintillator, and implements improved image processing techniques. In the new system, the scintillation light intensity is integrated along the proton beam path by the BGO scintillator, and acquired as a two-dimensional distribution with the CCD camera. The range of a penetrating proton is derived from the integrated light intensity using a light-to-range conversion table, and a pCT image can be reconstructed. The proton range in the BGO scintillator is shorter than in the plastic scintillator, so errors due to extended proton ranges can be reduced. To demonstrate the feasibility of the pCT system, an experiment was performed using a 70 MeV proton beam created by the AVF930 cyclotron at the National Institute of Radiological Sciences. The accuracy of the light-to-range conversion table, which is susceptible to errors due to its spatial dependence, was investigated, and the errors in the acquired pixel values were less than 0.5 mm. Images of various materials were acquired, and the pixel-value errors were within 3.1%, which represents an improvement over previous results. We also obtained a pCT image of an edible chicken piece, the first of its kind for a biological material, and internal structures approximately one millimeter in size were clearly observed. This pCT imaging system is fast and simple, and based on these findings, we anticipate that we can acquire 200 MeV pCT images using the BGO scintillator system.
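The light-to-range conversion table described above maps integrated scintillation light intensity to a proton range. A minimal sketch of such a table lookup, with interpolation between calibration points, is given below; the table values and function name are purely illustrative, not measured data from the paper:

```python
import numpy as np

def light_to_range(light, table_light, table_range):
    """Convert integrated scintillation light intensity to a proton
    range by linearly interpolating a measured light-to-range
    conversion table. `table_light` must be monotonically increasing,
    with `table_range` giving the corresponding calibrated ranges."""
    return np.interp(light, table_light, table_range)
```

In practice, as the abstract notes, the table's spatial dependence is a source of error, so a real system would calibrate such a table per image region.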
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LEDs or extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted by considering incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
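The idea of combining pixel intensity with a motion prior can be sketched as a toy scoring function. This is not the paper's probability model; the weights and the linear combination are assumptions chosen purely for illustration:

```python
import numpy as np

def led_position(intensity, flow_prior, w_intensity=0.6, w_flow=0.4):
    """Toy per-pixel score combining normalized pixel intensity with a
    motion-prior map (e.g., a probability predicted from the optical
    flow of previous frames). The pixel with the highest combined
    score is taken as the LED position. Weights are illustrative."""
    i = np.asarray(intensity, float)
    i = (i - i.min()) / (np.ptp(i) + 1e-12)   # normalize to [0, 1]
    p = np.asarray(flow_prior, float)
    score = w_intensity * i + w_flow * p
    return np.unravel_index(np.argmax(score), score.shape)
```

The point of the combination is visible in a blurred frame: a slightly dimmer pixel that agrees with the motion prediction can outscore a brighter pixel that does not.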
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened by using a bilateral filter. Second, image contrast compensation is done to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average value of the threshold, and then edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, and a local threshold estimation approach is used to binarize the image. Finally, small noise is removed based on morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex background and varying light.
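The core of the fifth step (local thresholding within a stroke-width-derived window) resembles classic Niblack-style binarization. The sketch below shows only that step, under the assumption of a mean-minus-k·std local threshold; the paper's actual threshold estimator, window derivation, and parameter values are not specified here:

```python
import numpy as np

def local_binarize(gray, window=15, k=0.2):
    """Simplified local (Niblack-style) threshold: T = mean - k*std over
    a sliding window whose size would, in the paper's method, be
    derived from the estimated stroke width. Pixels darker than T are
    marked as text (1), others as background (0)."""
    g = np.asarray(gray, float)
    pad = window // 2
    padded = np.pad(g, pad, mode='edge')
    out = np.zeros(g.shape, dtype=np.uint8)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            win = padded[i:i + window, j:j + window]
            t = win.mean() - k * win.std()
            out[i, j] = 1 if g[i, j] < t else 0
    return out
```

A production implementation would compute the windowed mean and variance with integral images rather than explicit loops.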
High-frame-rate imaging of biological samples with optoacoustic micro-tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; López-Schier, Hernán.; Razansky, Daniel
2018-02-01
Optical microscopy remains a major workhorse in biological discovery despite the fact that light scattering limits its applicability to depths of ~1 mm in scattering tissues. Optoacoustic imaging has been shown to overcome this barrier by resolving optical absorption with microscopic resolution in significantly deeper regions. Yet, the time domain is paramount for the observation of biological dynamics in living systems that exhibit fast motion. Commonly, acquisition of microscopy data involves raster scanning across the imaged volume, which significantly limits temporal resolution in 3D. To overcome these limitations, we have devised a fast optoacoustic micro-tomography (OMT) approach based on simultaneous acquisition of 3D image data with a high-density hemispherical ultrasound array with an effective detection bandwidth around 25 MHz. We performed experiments by imaging tissue-mimicking phantoms and zebrafish larvae, demonstrating that OMT can provide nearly cellular resolution at an imaging speed of 100 volumetric frames per second. As opposed to other optical microscopy techniques, OMT is a hybrid method that resolves optical absorption contrast acoustically using unfocused light excitation. Thus, no penetration barriers are imposed by light scattering in deep tissues, suggesting it as a powerful approach for multi-scale functional and molecular imaging applications.
Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelepouga, Serguei A; Rue, David M; Saveliev, Alexei V
2011-03-15
A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end, and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. The light received by the light receiving end of the imaging fiber optic bundle is focused by a wall disposed between the light receiving end of the fiber optic bundle and a light source; this wall forms a pinhole opening aligned with the light receiving end.
Evaluation method based on the image correlation for laser jamming image
NASA Astrophysics Data System (ADS)
Che, Jinxi; Li, Zhongmin; Gao, Bo
2013-09-01
The jamming effectiveness evaluation of infrared imaging systems is an important part of electro-optical countermeasures. Infrared imaging devices are widely used in the military for searching, tracking, guidance, and many other fields. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has progressed, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the laser jamming effect on infrared imaging systems has become a meaningful problem to be solved. The information that an infrared imaging system ultimately presents to the user is an image, so the jamming effect can be evaluated from the standpoint of image quality assessment. An image contains two kinds of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the evaluation method of digital image correlation, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics, and the evaluation method based on peak signal-to-noise ratio are analysed, along with the advantages and disadvantages of each. Moreover, disturbed infrared images from experiments in which a thermal infrared imager was jammed by laser were analysed using these methods. The results show that the methods can well reflect the laser jamming effects on the infrared imaging system, and the evaluation results are in good agreement with subjective visual evaluation, with good repeatability and convenient quantitative analysis. The feasibility of the methods for evaluating the jamming effect was thus demonstrated. The methods provide a useful reference for the study and development of electro-optical countermeasure equipment and effectiveness evaluation.
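Two of the metrics analysed above, image correlation and peak signal-to-noise ratio, have standard definitions that can be sketched directly. These are generic textbook implementations, not the paper's code, and the 8-bit peak value is an assumption:

```python
import numpy as np

def correlation(img_a, img_b):
    """Normalized cross-correlation between an original image and a
    disturbed image: 1 for identical, near 0 for unrelated images."""
    a = np.asarray(img_a, float).ravel(); a -= a.mean()
    b = np.asarray(img_b, float).ravel(); b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB (assuming 8-bit pixel values)."""
    mse = np.mean((np.asarray(img_a, float) - np.asarray(img_b, float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse)) if mse else float('inf')
```

A heavily jammed frame would show a low correlation with the undisturbed reference and a low PSNR, matching the subjective impression of image degradation.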
NASA Astrophysics Data System (ADS)
Lin, Yongping; Zhang, Xiyang; He, Youwu; Cai, Jianyong; Li, Hui
2018-02-01
The Jones matrix and the Mueller matrix are the main tools for studying polarization devices. The Mueller matrix can also be used in biological tissue research to obtain complete tissue properties, but commercial optical coherence tomography systems do not provide the relevant analysis functions. Based on LabVIEW, a near real-time display method for the Mueller matrix image of biological tissue is developed, which also gives the corresponding phase retardance image simultaneously. A quarter-wave plate was placed at 45° in the sample arm. Experimental results from the two orthogonal channels show that the phase retardance based on the fixed incident light vector mode and the Mueller matrix based on the dynamic incident light vector mode can provide an effective analysis method for the existing system.
NASA Astrophysics Data System (ADS)
Mitic, Jelena; Anhut, Tiemo; Serov, Alexandre; Lasser, Theo; Bourquin, Stephane
2003-07-01
Real-time optically sectioned microscopy is demonstrated using an AC-sensitive detection concept realized with a smart CMOS image sensor and structured light illumination by a continuously moving periodic pattern. We describe two different detection systems based on CMOS image sensors for the detection and on-chip processing of the sectioned images in real time. A region of interest is sampled at a high frame rate. The demodulated signal delivered by the detector corresponds to the depth-discriminated image of the sample. The measured FWHM of the axial response depends on the spatial frequency of the projected grid illumination and is in the μm range. The effect of using broadband incoherent illumination is discussed. The performance of these systems is demonstrated by imaging technical as well as biological samples.
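The principle behind this kind of structured-illumination sectioning can be illustrated with the classic three-phase demodulation (Neil-style): only in-focus structure is modulated by the moving grid, so the AC amplitude of the modulation rejects out-of-focus light. This is a related discrete-phase sketch, not the paper's on-chip continuous demodulation:

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Optically sectioned image from three grid-illuminated frames
    with pattern phases 0, 2*pi/3 and 4*pi/3. The pairwise-difference
    formula recovers the grid modulation amplitude A per pixel
    (independent of the local grid phase) and cancels the unmodulated
    out-of-focus background."""
    i1, i2, i3 = (np.asarray(i, float) for i in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2) / np.sqrt(2)
```

For a pixel seeing `DC + A*cos(phase + k*2*pi/3)` across the three frames, the result is `1.5*A` regardless of `phase`, while a purely out-of-focus (constant) pixel gives zero.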
Accelerated wavefront determination technique for optical imaging through scattering medium
NASA Astrophysics Data System (ADS)
He, Hexiang; Wong, Kam Sing
2016-03-01
Wavefront shaping applied to scattered light is a promising optical imaging method in biological systems. Normally, the optimized modulation is obtained by hardware iteration between a liquid-crystal spatial light modulator (LC-SLM) and a CCD. Here we introduce an improved method for this optimization process. The core of the proposed method is to first detect the disturbed wavefront and then calculate the modulation phase pattern by computer simulation. In particular, the phase retrieval method together with phase conjugation is most effective. In this way, the LC-SLM based system can complete the wavefront optimization and imaging restoration within several seconds, which is two orders of magnitude faster than the conventional technique. The experimental results show good imaging quality, and the method may contribute to real-time imaging recovery in scattering media.
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compounding technology is often used in film production. Conventionally, image compounding uses image processing algorithms: useful objects, details, background, or other elements are first extracted from the images, and then all this information is compounded into one image. With this method the film system needs a powerful processor because the processing is complex, and the compounded image is obtained only after some delay. In this paper, we introduce a new method of real-time image compounding; with this method, image compounding can be done at the same time as the movie is shot. The whole system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), Deformable Micro-mirror Device (DMD), and so on. First, one camera lens, called the first image lens, images the object onto the spatial light modulator's panel. Second, an image is output to the panel of the spatial light modulator; the image of the object and the image output by the spatial light modulator are then spatially compounded on the panel. Third, the other camera lens, called the second image lens, images the compounded image onto the image sensor. After these three steps, the compounded image is captured by the image sensor. Because the spatial light modulator can output images continuously, the compounding proceeds continuously as well, and the procedure is completed in real time. With this method, to put a real object into a virtual background, the virtual background scene is output on the spatial light modulator while the real object is imaged by the first image lens; the compounded images are then captured by the image sensor in real time. 
In the same way, to put a real background behind a virtual object, the virtual object is output on the spatial light modulator while the real background is imaged by the first image lens, and the compounded images are again captured in real time. Most spatial light modulators can only modulate light intensity, so only black-and-white images can be compounded when a single panel without a color filter is used; to obtain color compounded images, a system like a three-panel spatial light modulator projector is needed. The framework of the system's optical design is given in the paper. In all experiments, the spatial light modulator used was liquid crystal on silicon (LCoS). At the end of the paper, some original pictures and compounded pictures are given. Although the system has a few shortcomings, we can conclude that compounding images with this system incurs no delay for mathematical compounding processing; it is a truly real-time image compounding system.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image details, and it can better reflect the real environment, light, and color information. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, differently exposed image sequences are captured with the camera array, the deviation between images is obtained using a derivative optical flow method based on color gradients, and the images are aligned. Then, the high dynamic range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
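The weighted-fusion step described above can be sketched in its simplest form. This sketch assumes a linear camera response and a symmetric hat weight, whereas the paper inverts a measured camera response and additionally weights by the inter-image deviation; all names and constants here are illustrative:

```python
import numpy as np

def fuse_hdr(images, exposures):
    """Toy HDR radiance estimate from aligned, differently exposed
    frames with a linear camera response. Each pixel's radiance
    estimates (intensity / exposure time) are averaged with a hat
    weight that favors well-exposed mid-range pixel values."""
    acc = np.zeros_like(np.asarray(images[0], float))
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        z = np.asarray(img, float)
        w = 1.0 - np.abs(z / 255.0 - 0.5) * 2.0  # hat weight, peak at mid-gray
        acc += w * z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-12)
```

For a static pixel, frames that are consistent under the linear model all vote for the same radiance, so the weighted average simply recovers it; the paper's deviation-based weighting additionally suppresses frames where motion makes the votes disagree.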
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place an OPCF having green spectral sensitivity onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint with the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images) but with low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.
Integration of OLEDs in biomedical sensor systems: design and feasibility analysis
NASA Astrophysics Data System (ADS)
Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.
2010-04-01
Organic (electronic) Light Emitting Diodes (OLEDs) have been shown to have applications in the field of lighting and flexible displays. These devices can also be incorporated in sensors, as a light source for imaging/fluorescence sensing in miniaturized systems for biomedical applications and as low-cost displays for sensor output. The current device capability aligns well with the aforementioned applications as low-power diffuse lighting and momentary/push-button dynamic display. A top-emission OLED design is proposed that can be integrated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth. A similar study is carried out for a sensor output display system that functions as a pseudo-active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is supplemented with a discussion of ink-jet printing and stamping techniques for possible roll-to-roll manufacturing.
Young Stars Emerge from Orion Head
2007-05-17
This image from NASA's Spitzer Space Telescope shows infant stars "hatching" in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's "head," just north of the massive star Lambda Orionis. Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight. http://photojournal.jpl.nasa.gov/catalog/PIA09412
Young Stars Emerge from Orion's Head
NASA Technical Reports Server (NTRS)
2007-01-01
This image from NASA's Spitzer Space Telescope shows infant stars 'hatching' in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's 'head,' just north of the massive star Lambda Orionis. Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight.
NASA Astrophysics Data System (ADS)
Xiao, Ze-xin; Chen, Kuan
2008-03-01
The biochemical analyzer is one of the important instruments in clinical diagnosis, and its optical system is a key component. The operation of this optical system can be regarded as three stages. First, the polychromatic light is converted into monochromatic light. Second, the monochromatic light signal, which carries the information of the measured sample, is converted into an electrical signal by a photoelectric detector. Finally, the signal is sent to the data processing system by the control system. Generally, there are three types of monochromators: prisms, diffraction gratings, and narrow band-pass filters; of these, narrow band-pass filters are widely used in semi-automatic biochemical analyzers. Analysis of the principle of a biochemical analyzer based on a narrow band-pass filter shows that its optical system has three features. First, the optical path is a non-imaging system. Second, the system covers a wide spectral region containing visible and ultraviolet light. Third, it is a small-aperture, small-field monochromatic light system. Therefore, the design goals of this optical system are: (1) low transmission loss of luminous energy in the system; (2) efficient coupling of luminous energy to the detector, achieved mainly by correcting spherical aberration. Practice suggests the following image quality criteria: (1) the blur circle diameter should equal 125% of the effective width of a receiving device pixel, and the energy distribution of a point target should place 80% of the energy within the effective pixel width inside this blur circle; (2) evaluated by MTF, the MTF value at a spatial frequency of 20 lp/mm should not be lower than 0.6.
The optical system should accommodate a wide ultraviolet and visible spectrum, but defocus optimization can position the detector image plane correctly only for the majority of the visible spectrum, leaving the violet and ultraviolet image planes considerably shifted. Traditional biochemical analyzer optical designs do not fully consider this point; the authors innovatively introduce an effective image plane compensation measure, which greatly increases the reception efficiency in the violet and ultraviolet.
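The MTF acceptance criterion above (MTF not lower than 0.6 at 20 lp/mm) can be turned into a quick blur budget under an assumed Gaussian blur-spot model; the Gaussian model is an assumption for illustration, not part of the authors' design method.

```python
import math

def gaussian_mtf(f_lp_per_mm, sigma_mm):
    """MTF of a Gaussian blur spot at spatial frequency f (lp/mm):
    MTF(f) = exp(-2 * (pi * sigma * f)^2)."""
    return math.exp(-2.0 * (math.pi * sigma_mm * f_lp_per_mm) ** 2)

def max_sigma_for_spec(f_lp_per_mm, mtf_min):
    """Largest Gaussian blur sigma (mm) still meeting MTF >= mtf_min at f."""
    return math.sqrt(-math.log(mtf_min) / 2.0) / (math.pi * f_lp_per_mm)

# The abstract's spec: MTF >= 0.6 at 20 lp/mm.
sigma_max = max_sigma_for_spec(20.0, 0.6)   # about 8 micrometers of Gaussian blur
```

Under this model the design can tolerate roughly an 8 µm RMS blur spot before violating the 20 lp/mm requirement, which is the kind of sanity check a designer might run before ray-tracing.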
Enhancing the performance of the light field microscope using wavefront coding
Cohen, Noy; Yang, Samuel; Andalman, Aaron; Broxton, Michael; Grosenick, Logan; Deisseroth, Karl; Horowitz, Mark; Levoy, Marc
2014-01-01
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective’s back focal plane and at the microscope’s native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose adequate phase mask parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target; and demonstrate the utility for biological applications with in vivo volumetric calcium imaging of larval zebrafish brain. PMID:25322056
Enhancing the performance of the light field microscope using wavefront coding.
Cohen, Noy; Yang, Samuel; Andalman, Aaron; Broxton, Michael; Grosenick, Logan; Deisseroth, Karl; Horowitz, Mark; Levoy, Marc
2014-10-06
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective's back focal plane and at the microscope's native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose adequate phase mask parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target; and demonstrate the utility for biological applications with in vivo volumetric calcium imaging of larval zebrafish brain.
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is derived analytically and verified experimentally. A flexible thin-film lighting responsive II system with a thickness of 0.4 mm was fabricated. This technique offers additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.
NASA Astrophysics Data System (ADS)
Peller, Joseph A.; Ceja, Nancy K.; Wawak, Amanda J.; Trammell, Susan R.
2018-02-01
Polarized light imaging and optical spectroscopy can be used to distinguish between healthy and diseased tissue. In this study, the design and testing of a single-pixel hyperspectral imaging system that uses differences in the polarization of light reflected from tissue to differentiate between healthy and thermally damaged tissue is discussed. Thermal lesions were created in porcine skin (n = 8) samples using an IR laser. The damaged regions were clearly visible in the polarized light hyperspectral images. Reflectance hyperspectral and white light imaging was also obtained for all tissue samples. Sizes of the thermally damaged regions as measured via polarized light hyperspectral imaging are compared to sizes of these regions as measured in the reflectance hyperspectral images and white light images. Good agreement between the sizes measured by all three imaging modalities was found. Hyperspectral polarized light imaging can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during cancer surgery or pre-surgical biopsy.
Differences in the intensity of light-induced fluorescence emitted by resin composites.
Kim, Bo-Ra; Kang, Si-Mook; Kim, Gyung-Min; Kim, Baek-Il
2016-03-01
The aims of this study were to compare the intensities of fluorescence emitted by different resin composites as detected using quantitative light-induced fluorescence (QLF) technology, and to compare the fluorescence intensity contrast with the color contrast between a restored composite and the adjacent region of the tooth. Six brands of light-cured resin composites (shade A2) were investigated. The composites were used to prepare composite discs, and fill holes that had been prepared in extracted human teeth. White-light and fluorescence images of all specimens were obtained using a fluorescence camera based on QLF technology (QLF-D) and converted into 8-bit grayscale images. The fluorescence intensity of the discs as well as the fluorescence intensity contrast and the color contrast between the composite restoration and adjacent tooth region were calculated as grayscale levels. The grayscale levels for the composite discs differed significantly with the brand (p<0.001): DenFil (10.84±0.35, mean±SD), Filtek Z350 (58.28±1.37), Premisa (156.94±1.58), Grandio (177.20±0.81), Charisma (207.05±0.77), and Gradia direct posterior (211.52±1.66). The difference in grayscale levels between a resin restoration and the adjacent tooth was significantly greater in fluorescence images for each brand than in white-light images, except for the Filtek Z350 (p<0.05). However, the Filtek Z350 restoration was distinguishable from the adjacent tooth in a fluorescence image. The intensities of fluorescence detected from the resin composites varied. The differences between the composite and adjacent tooth were greater for the fluorescence intensity contrast than for the colors observed in the white-light images. Copyright © 2016 Elsevier B.V. All rights reserved.
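The grayscale-contrast computation described above (mean 8-bit level of the restoration versus the adjacent tooth region) can be sketched on synthetic white-light and fluorescence images. The ROI geometry and intensity values below are illustrative assumptions, not the study's data.

```python
import numpy as np

def roi_contrast(image, roi_mask, adjacent_mask):
    """Absolute difference of mean 8-bit grayscale levels between two regions."""
    return abs(float(image[roi_mask].mean()) - float(image[adjacent_mask].mean()))

# Toy 8-bit images: a composite restoration patch inside a tooth region.
tooth = np.full((32, 32), 180, dtype=np.uint8)   # white light: similar shades
tooth[8:16, 8:16] = 175                          # composite barely differs
fluor = np.full((32, 32), 200, dtype=np.uint8)   # fluorescence: tooth glows
fluor[8:16, 8:16] = 60                           # composite emits far less

roi = np.zeros((32, 32), dtype=bool); roi[8:16, 8:16] = True
adj = np.zeros((32, 32), dtype=bool); adj[20:28, 8:16] = True

c_white = roi_contrast(tooth, roi, adj)   # small contrast in white light
c_fluor = roi_contrast(fluor, roi, adj)   # large contrast in fluorescence
```

The toy numbers mirror the study's qualitative finding: a restoration that is nearly invisible in white light can stand out sharply in the fluorescence image.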
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
Single-pixel imaging based on compressive sensing with spectral-domain optical mixing
NASA Astrophysics Data System (ADS)
Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin
2017-11-01
In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
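The recovery step can be sketched with a generic sparse-recovery algorithm. Orthogonal Matching Pursuit stands in here for the paper's unspecified "sparse recovery algorithm", and the PRBS spectral patterns are modeled as random ±1 measurement rows; both choices are assumptions for illustration.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                                       # signal length, measurements, sparsity
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # PRBS-like patterns
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.5, -2.0, 1.0]                    # sparse "image" signal
y = Phi @ x_true                                          # single-pixel measurements
x_hat = omp(Phi, y, k)
```

The single photodiode records only `m` inner products of the scene with the patterns, yet the sparse signal can typically be reconstructed from far fewer measurements than pixels.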
Imaging polarimetry and retinal blood vessel quantification at the epiretinal membrane
Miura, Masahiro; Elsner, Ann E.; Cheney, Michael C.; Usui, Masahiko; Iwasaki, Takuya
2007-01-01
We evaluated a polarimetry method to enhance retinal blood vessels masked by the epiretinal membrane. Depolarized light images were computed by removing the polarization retaining light reaching the instrument and were compared with parallel polarized light images, average reflectance images, and the corresponding images at 514 nm. Contrasts were computed for retinal vessel profiles for arteries and veins. Contrasts were higher in the 514 nm images in normal eyes but higher in the depolarized light image in the eyes with epiretinal membranes. Depolarized light images were useful for examining the retinal vasculature in the presence of retinal disease. PMID:17429490
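The computation of a depolarized light image and of vessel contrast can be sketched as follows. The subtraction-based definition (total light minus the polarization-retaining component) and the toy reflectance profiles are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def depolarized_image(total, polarization_retaining):
    """Depolarized component: total light minus the polarization-retaining part."""
    return np.clip(total - polarization_retaining, 0.0, None)

def vessel_contrast(profile):
    """Weber contrast of a dark vessel against its local background."""
    background = 0.5 * (profile[0] + profile[-1])   # profile endpoints as background
    return (background - profile.min()) / background

# Toy 1-D profiles across a retinal vessel (arbitrary reflectance units).
total    = np.array([1.00, 0.95, 0.70, 0.60, 0.72, 0.96, 1.00])
retained = np.array([0.50, 0.48, 0.40, 0.38, 0.41, 0.49, 0.50])

depol = depolarized_image(total, retained)
c_total = vessel_contrast(total)
c_depol = vessel_contrast(depol)
```

In this toy case the depolarized profile shows higher vessel contrast than the raw intensity, mirroring the paper's observation in eyes with epiretinal membranes.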
Image analysis for material characterisation
NASA Astrophysics Data System (ADS)
Livens, Stefan
In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combinations of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which makes it possible to separate agglomerated crystals. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.
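A minimal version of wavelet subband energy features, with the LH and HL energies pooled for rotation tolerance, might look like the sketch below. This is a one-level Haar decomposition only; the thesis's actual multiscale feature transformation and LVQ classifier are not reproduced.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(img):
    """Subband energies; LH and HL are summed for a rotation-tolerant feature."""
    ll, lh, hl, hh = haar_subbands(img)
    e = lambda b: float(np.mean(b * b))
    return np.array([e(ll), e(lh) + e(hl), e(hh)])

# Horizontal stripes vs. their 90-degree rotation: the pooled LH+HL
# energy is identical for both, unlike LH or HL taken alone.
stripes = np.tile(np.array([[1.0], [0.0]]), (4, 8))
f_h = texture_features(stripes)
f_v = texture_features(stripes.T)
```

Pooling directional subbands is the simplest form of the rotation-invariance idea; the thesis proposes a more principled feature transformation for the same problem.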
Imaging arrangement and microscope
Pertsinidis, Alexandros; Chu, Steven
2015-12-15
An embodiment of the present invention is an imaging arrangement that includes imaging optics, a fiducial light source, and a control system. In operation, the imaging optics separate light into first and second light by wavelength and project the first and second light onto first and second areas within first and second detector regions, respectively. The imaging optics separate fiducial light from the fiducial light source into first and second fiducial light and project the first and second fiducial light onto third and fourth areas within the first and second detector regions, respectively. The control system adjusts alignment of the imaging optics so that the first and second fiducial light projected onto the first and second detector regions maintain relatively constant positions within the first and second detector regions, respectively. Another embodiment of the present invention is a microscope that includes the imaging arrangement.
Development of image analysis software for quantification of viable cells in microchips.
Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland
2018-01-01
Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data generated requires automated methods for processing and analyzing all the resulting information. The software packages available so far are suitable for processing fluorescence and phase contrast images, but often do not provide good results from transmission light microscopy images, owing to the intrinsic variability of the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between acquisitions introduced by operators and equipment. In this contribution, we present an image processing software package, Python-based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells with fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
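A bare-bones version of the area-fraction measurement could look like the sketch below. PIACG itself is far more elaborate; the deviation-from-background thresholding used here is only an assumed stand-in, and the image values are synthetic.

```python
import numpy as np

def occupied_area_fraction(gray, background_level, tolerance):
    """Fraction of the well area whose intensity deviates from background.

    Deviation-based thresholding is one simple way to cope with the
    brightness/contrast variability of transmission-light images.
    """
    cells = np.abs(gray.astype(np.float64) - background_level) > tolerance
    return float(cells.mean())

# Toy transmission image: uniform background with two cell-like patches.
img = np.full((100, 100), 128, dtype=np.uint8)
img[10:30, 10:30] = 90     # a rounded cell, darker than background
img[60:70, 20:80] = 160    # a fusiform cell, brighter halo

fraction = occupied_area_fraction(img, background_level=128, tolerance=20)
```

The two patches cover 400 + 600 of the 10,000 pixels, so the occupied fraction comes out to 0.1; a real pipeline would estimate the background level per image rather than assume it.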
Compact reflective imaging spectrometer utilizing immersed gratings
Chrisp, Michael P [Danville, CA
2006-05-09
A compact imaging spectrometer comprising an entrance slit for directing light, a first mirror that receives said light and reflects said light, an immersive diffraction grating that diffracts said light, a second mirror that focuses said light, and a detector array that receives said focused light. The compact imaging spectrometer can be utilized for remote sensing imaging spectrometers where size and weight are of primary importance.
Detection of viability of micro-algae cells by optofluidic hologram pattern.
Wang, Junsheng; Yu, Xiaomei; Wang, Yanjuan; Pan, Xinxiang; Li, Dongqing
2018-03-01
Rapid detection of micro-algae activity is critical for the analysis of ship ballast water. A new method for detecting micro-algae activity based on lens-free optofluidic holographic imaging is presented in this paper. A compact lens-free optofluidic holographic imaging device was developed, composed mainly of a light source, a small through-hole, a light propagation module, a microfluidic chip, and an image acquisition and processing module. Light from the source passes through the small hole to reach the micro-algae cells in the microfluidic chip, and a holographic image is formed by the light diffracted from the surface of the micro-algae cells. The relation between the characteristics of the hologram pattern and the activity of micro-algae cells was investigated using this device, and these characteristics were extracted to represent cell activity. To demonstrate the accuracy of the presented method and device, four species of micro-algae cells were employed as test samples, and comparison experiments between live and dead cells of the four species were conducted. The results show that the developed method and device can distinguish live from dead micro-algae cells accurately.
Simulation and analysis of light scattering by multilamellar bodies present in the human eye
Méndez-Aguilar, Emilia M.; Kelly-Pérez, Ismael; Berriel-Valdos, L. R.; Delgado-Atencio, José A.
2017-01-01
A modified computational model of the human eye was used to obtain and compare different probability density functions, radial profiles of light pattern distributions, and images of the point spread function formed in the human retina under the presence of different kinds of particles inside crystalline lenses suffering from cataracts. Specifically, this work uses simple particles without shells and multilamellar bodies (MLBs) with shells. The emergence of such particles alters the formation of images on the retina. Moreover, the MLBs change over time, which affects properties such as the refractive index of their shell. Hence, this work not only simulates the presence of such particles but also evaluates the incidence of particle parameters such as particle diameter, particle thickness, and shell refractive index, which are set based on reported experimental values. In addition, two wavelengths (400 nm and 700 nm) are used for light passing through the different layers of the computational model. The effects of these parameters on light scattering are analyzed using the simulation results. Further, in these results, the effects of light scattering on image formation can be seen when single particles, early-stage MLBs, or mature MLBs are incorporated in the model. Finally, it is found that particle diameter has the greatest impact on image formation. PMID:28663924
Simulation and analysis of light scattering by multilamellar bodies present in the human eye.
Méndez-Aguilar, Emilia M; Kelly-Pérez, Ismael; Berriel-Valdos, L R; Delgado-Atencio, José A
2017-06-01
A modified computational model of the human eye was used to obtain and compare different probability density functions, radial profiles of light pattern distributions, and images of the point spread function formed in the human retina under the presence of different kinds of particles inside crystalline lenses suffering from cataracts. Specifically, this work uses simple particles without shells and multilamellar bodies (MLBs) with shells. The emergence of such particles alters the formation of images on the retina. Moreover, the MLBs change over time, which affects properties such as the refractive index of their shell. Hence, this work not only simulates the presence of such particles but also evaluates the incidence of particle parameters such as particle diameter, particle thickness, and shell refractive index, which are set based on reported experimental values. In addition, two wavelengths (400 nm and 700 nm) are used for light passing through the different layers of the computational model. The effects of these parameters on light scattering are analyzed using the simulation results. Further, in these results, the effects of light scattering on image formation can be seen when single particles, early-stage MLBs, or mature MLBs are incorporated in the model. Finally, it is found that particle diameter has the greatest impact on image formation.
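For intuition about the two probe wavelengths (400 nm and 700 nm), the small-particle Rayleigh limit gives a quick scaling estimate. Note this limit is an assumption for illustration only: the paper's micron-scale MLBs are large enough that full Mie-type modeling, as in the simulations above, is required instead.

```python
def rayleigh_relative_intensity(wavelength_nm, reference_nm=400.0):
    """Scattered intensity relative to the reference wavelength.

    In the small-particle (Rayleigh) limit, scattered intensity scales as
    lambda^-4, so longer wavelengths scatter much more weakly.
    """
    return (reference_nm / wavelength_nm) ** 4

ratio_700 = rayleigh_relative_intensity(700.0)  # 700 nm scatters ~9x less than 400 nm
```

The steep wavelength dependence is one reason simulations at 400 nm and 700 nm can show markedly different degradation of the retinal point spread function.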
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solodar, A., E-mail: asisolodar@gmail.com; Arun Kumar, T.; Sarusi, G.
2016-01-11
A combination of an InGaAs/InP heterojunction photodetector with a nematic liquid crystal (LC) as the electro-optic modulating material, forming an optically addressed spatial light modulator for short wavelength infra-red (SWIR) to visible light image conversion, was designed, fabricated, and tested. The photodetector layer is composed of a 640 × 512 photodiode array based on an InP/InGaAs heterojunction with a 15 μm pitch on an InP substrate and a backside illumination architecture. The photodiodes exhibit extremely low dark current at room temperature, with optimum photo-response in the SWIR region. The photocurrent generated in the heterojunction by SWIR photon absorption drifts to the surface of the InP, modulating the electric field distribution and thereby modifying the orientation of the LC molecules. This device can be attractive for SWIR to visible image upconversion, such as in uncooled night vision goggles under low ambient light conditions.
Spectrally resolved laser interference microscopy
NASA Astrophysics Data System (ADS)
Butola, Ankit; Ahmad, Azeem; Dubey, Vishesh; Senthilkumaran, P.; Singh Mehta, Dalip
2018-07-01
We developed a new quantitative phase microscopy technique, namely, spectrally resolved laser interference microscopy (SR-LIM), with which it is possible to quantify multi-spectral phase information of biological specimens without color crosstalk using a color CCD camera. It is a single-shot technique in which sequential switching on/off of red, green, and blue (RGB) light sources is not required. The method is implemented using a three-wavelength interference microscope and a customized compact grating-based imaging spectrometer fitted at the output port. The results for the USAF resolution chart while employing three different light sources, namely, a halogen lamp, light emitting diodes, and lasers, are discussed and compared. Broadband light sources like the halogen lamp and light emitting diodes lead to stretching in the spectrally decomposed images, whereas this is not observed for narrow-band light sources, i.e., lasers. The proposed technique is further successfully employed for single-shot quantitative phase imaging of human red blood cells at three wavelengths simultaneously without color crosstalk. Using the present technique, one can also use a monochrome camera, even though the experiments are performed using multi-color light sources. Finally, SR-LIM is not limited to RGB wavelengths; it can be extended to red, near infra-red, and infra-red wavelengths, which are suitable for various biological applications.
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.
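The detection step can be sketched as simple background differencing with per-zone occupancy, a deliberately simplified stand-in for the patented image processing; the zone layout and thresholds are illustrative assumptions.

```python
import numpy as np

def occupancy_map(frame, background, threshold):
    """Boolean map of pixels that changed relative to an empty-room background."""
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > threshold

def zone_occupied(occ, n_zones):
    """Split the map into vertical zones; a zone is occupied if it has motion."""
    zones = np.array_split(occ, n_zones, axis=1)
    return [bool(z.any()) for z in zones]

# Toy scene: empty-room background, then a frame with one occupant.
background = np.full((48, 96), 100, dtype=np.uint8)
frame = background.copy()
frame[10:30, 70:90] = 30           # an occupant in the right third of the room

occ = occupancy_map(frame, background, threshold=25)
zones = zone_occupied(occ, 3)      # e.g., drive one luminaire per zone
```

Mapping zones to controllable elements (luminaires, ventilation diffusers) is what lets position-aware control outperform a simple binary occupancy sensor.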
Speckle-learning-based object recognition through scattering media.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-12-28
We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
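The classification pipeline can be imitated on synthetic data. A least-squares linear classifier stands in for the paper's support vector machine, and exponentially distributed intensities stand in for real captured speckle; both substitutions, and the two-class intensity difference, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def speckle(mean_intensity, shape):
    """Fully developed speckle: exponentially distributed intensity."""
    return rng.exponential(mean_intensity, size=shape)

def fit_linear_classifier(X, y):
    """Least-squares linear classifier (a simple stand-in for a linear SVM)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias term
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.sign(Xb @ w)

# Two object classes behind the diffuser yield speckle with different means.
X_a = np.array([speckle(1.0, (64,)) for _ in range(30)])
X_b = np.array([speckle(2.0, (64,)) for _ in range(30)])
X = np.vstack([X_a, X_b])
y = np.array([-1.0] * 30 + [1.0] * 30)

w = fit_linear_classifier(X, y)
accuracy = float(np.mean(predict(w, X) == y))
```

The point mirrored here is the paper's: raw speckle intensities, with no attempt at image reconstruction, can carry enough class information for a linear classifier.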
Coherent diffractive imaging methods for semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin
2017-12-01
The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.
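A textbook example of such a lensless method is error-reduction phase retrieval, sketched below: an object is recovered from its diffraction (Fourier) magnitudes plus a known support constraint. This generic algorithm illustrates the principle only and is not any specific method from the cited endeavors; error reduction can stagnate, and hybrid input-output variants are commonly preferred in practice.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Error-reduction phase retrieval from Fourier magnitudes + support."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, fourier_mag.shape)
    g = np.fft.ifft2(fourier_mag * np.exp(1j * phase))   # random initial guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))       # enforce measured magnitudes
        g = np.fft.ifft2(G)
        # Enforce the object-domain constraints: known support, non-negativity.
        g = np.where(support, np.maximum(g.real, 0.0), 0.0)
    return g

# Toy object: a small non-negative patch inside a known support region.
obj = np.zeros((32, 32))
obj[12:18, 10:20] = np.arange(60, dtype=float).reshape(6, 10) / 60.0
support = np.zeros((32, 32), dtype=bool)
support[12:18, 10:20] = True

mag = np.abs(np.fft.fft2(obj))          # the "measured" diffraction magnitudes
recon = error_reduction(mag, support)
```

Only intensity (magnitude) is measurable at the detector; the iteration recovers the lost phase by alternating between the two constraint sets, which is the core idea behind coherent diffractive imaging for EUV metrology.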
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Narendran, Nadarajah; Tan, Jianchuan; Mou, Xi
2014-09-01
The organic light-emitting diode (OLED) has demonstrated its novelty in displays and certain lighting applications. Similar to white light-emitting diode (LED) technology, it also holds the promise of saving energy. Even though the luminous efficacy values of OLED products have been steadily growing, their longevity is still not well understood. Furthermore, currently there is no industry standard for photometric and colorimetric testing, short and long term, of OLEDs. Each OLED manufacturer tests its OLED panels under different electrical and thermal conditions using different measurement methods. In this study, an imaging-based photometric and colorimetric measurement method for OLED panels was investigated. Unlike an LED that can be considered as a point source, the OLED is a large form area source. Therefore, for an area source to satisfy lighting application needs, it is important that it maintains uniform light level and color properties across the emitting surface of the panel over a long period. This study intended to develop a measurement procedure that can be used to test long-term photometric and colorimetric properties of OLED panels. The objective was to better understand how test parameters such as drive current or luminance and temperature affect the degradation rate. In addition, this study investigated whether data interpolation could allow for determination of degradation and lifetime, L70, at application conditions based on the degradation rates measured at different operating conditions.
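The lifetime determination mentioned above can be illustrated with the simplest possible model: fitting exponential lumen decay to maintenance data and solving for L70, the time to 70% of initial luminance. Real OLED degradation often requires stretched-exponential or acceleration models, so this is only a sketch with synthetic data.

```python
import numpy as np

def fit_l70(hours, luminance_frac):
    """Estimate L70 by fitting exp(-t/tau) to measured lumen maintenance.

    Assumes simple exponential decay: a linear fit of log(luminance) vs.
    time gives slope = -1/tau, and L70 = -tau * ln(0.7).
    """
    slope, _ = np.polyfit(hours, np.log(luminance_frac), 1)
    tau = -1.0 / slope
    return -tau * np.log(0.7)

# Synthetic panel decaying with tau = 10,000 h, measured over 2,000 h.
hours = np.array([0.0, 500.0, 1000.0, 2000.0])
lum = np.exp(-hours / 10000.0)
l70 = fit_l70(hours, lum)   # extrapolated well beyond the measurement window
```

This is the shape of the interpolation question the study raises: whether short-term degradation rates measured at several operating conditions can be extrapolated to L70 at application conditions.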
Strobl, Frederic; Schmitz, Alexander; Stelzer, Ernst H K
2017-06-01
Light-sheet-based fluorescence microscopy features optical sectioning in the excitation process. This reduces phototoxicity and photobleaching by up to four orders of magnitude compared with that caused by confocal fluorescence microscopy, simplifies segmentation and quantification for three-dimensional cell biology, and supports the transition from on-demand to systematic data acquisition in developmental biology applications.
Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders.
Zhang, Ying; Zhao, Huijie; Song, Ping; Shi, Shaoguang; Xu, Wujian; Liang, Xiao
2014-04-07
A ground-based full-sky imaging polarimeter based on liquid crystal variable retarders (LCVRs) is proposed in this paper. The proposed method enables rapid detection of skylight polarization information over a hemispherical field of view in the visible band. The dependence of the LCVR response on the incidence angle of light is investigated, based on electrically controlled birefringence. The imaging polarimeter with a hemispherical field of view is then designed. Furthermore, a polarization calibration method using field-of-view multiplexing and piecewise linear fitting is proposed, based on the rotational symmetry of the polarimeter, and the calibration is implemented over the full hemispherical field of view. The polarimeter is validated by a skylight-imaging experiment: the measured distribution of the polarization angle agrees with that predicted by the Rayleigh scattering model with 90% consistency, confirming the effectiveness of the proposed imaging polarimeter.
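Full-sky polarimetry of this kind ultimately reduces intensity measurements to the linear Stokes parameters, from which the angle and degree of linear polarization follow. The sketch below uses the generic four-orientation polarizer scheme rather than the paper's LCVR drive sequence and calibration, which are instrument-specific.

```python
import math

def stokes_from_four(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind an analyzer at
    0/45/90/135 degrees (a generic scheme, not the paper's LCVR states)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # 0/90 preference
    s2 = i45 - i135                     # 45/135 preference
    return s0, s1, s2

def aop_dolp(s0, s1, s2):
    aop = 0.5 * math.atan2(s2, s1)      # angle of polarization, radians
    dolp = math.hypot(s1, s2) / s0      # degree of linear polarization
    return aop, dolp
```

Applied per pixel over the hemispherical image, the `aop` map is what gets compared against the Rayleigh-scattering prediction in the experiment.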
Salas, Desirée; Le Gall, Antoine; Fiche, Jean-Bernard; Valeri, Alessandro; Ke, Yonggang; Bron, Patrick; Bellot, Gaetan
2017-01-01
Superresolution light microscopy allows the imaging of labeled supramolecular assemblies at a resolution surpassing the classical diffraction limit. A serious limitation of the superresolution approach is sample heterogeneity and the stochastic character of the labeling procedure. To increase the reproducibility and the resolution of the superresolution results, we apply multivariate statistical analysis methods and 3D reconstruction approaches originally developed for cryogenic electron microscopy of single particles. These methods allow for the reference-free 3D reconstruction of nanomolecular structures from two-dimensional superresolution projection images. Since these 2D projection images all show the structure in high-resolution directions of the optical microscope, the resulting 3D reconstructions have the best possible isotropic resolution in all directions. PMID:28811371
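The multivariate statistical analysis step borrowed from single-particle cryo-EM can be illustrated with a toy two-class case: reduce the 2D projection images with PCA and split them on the sign of the leading principal-component score before averaging each class. This is a simplified stand-in, assuming only two classes and pre-aligned images; it is not the authors' full reference-free 3D reconstruction pipeline.

```python
import numpy as np

def two_class_averages(images):
    """Classify aligned 2D projection images into two classes via PCA and
    return the per-class averages (toy multivariate-statistics step)."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    Xc = X - X.mean(axis=0)
    # PCA via SVD; rows of Vt are principal axes in pixel space
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    score = Xc @ Vt[0]                    # leading principal-component score
    labels = (score > 0).astype(int)      # two-class split on the sign
    shape = images[0].shape
    return [X[labels == k].mean(axis=0).reshape(shape) for k in (0, 1)]
```

Averaging within a class suppresses the shot noise of individual superresolution projections, which is what makes the subsequent 3D reconstruction tractable.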
Non-destructive evaluation of water ingress in photovoltaic modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bora, Mihail; Kotovsky, Jack
Systems and techniques for non-destructive evaluation of water ingress in photovoltaic modules include and/or are configured to illuminate a photovoltaic module comprising a photovoltaic cell and an encapsulant with at least one beam of light having a wavelength in a range from about 1400 nm to about 2700 nm; capture one or more images of the illuminated photovoltaic module, each image relating to a water content of the photovoltaic module; and determine a water content of the photovoltaic module based on the one or more images. Systems preferably include one or more of a light source, a moving mirror, a focusing lens, a beam splitter, a stationary mirror, an objective lens and an imaging module.
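Mapping the captured images to water content plausibly rests on a Beer-Lambert-type relation, since water absorbs strongly within the stated 1400-2700 nm band. The sketch below computes per-pixel absorbance from a sample image and a reference image; the calibration slope is a hypothetical placeholder, as the record does not disclose the actual calibration.

```python
import math

def water_absorbance(i_sample, i_reference):
    """Per-pixel absorbance A = -log10(I_sample / I_ref) under a Beer-Lambert
    model. Both arguments are 2D lists of positive intensities."""
    return [[-math.log10(s / r) for s, r in zip(srow, rrow)]
            for srow, rrow in zip(i_sample, i_reference)]

def mean_water_content(absorbance, slope=1.0):
    """Map mean absorbance to water content with a calibration slope.
    The slope is an assumed placeholder, not a disclosed calibration."""
    vals = [a for row in absorbance for a in row]
    return slope * sum(vals) / len(vals)
```

In a real system, the reference image would come from a dry module or a non-absorbing wavelength, and the slope from calibration samples of known water content.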
Bhattacharya, Dipanjan; Singh, Vijay Raj; Zhi, Chen; So, Peter T. C.; Matsudaira, Paul; Barbastathis, George
2012-01-01
Laser sheet based microscopy has become widely accepted as an effective active illumination method for real time three-dimensional (3D) imaging of biological tissue samples. The light sheet geometry, where the camera is oriented perpendicular to the sheet itself, provides an effective method of eliminating some of the scattered light and minimizing the sample exposure to radiation. However, residual background noise still remains, limiting the contrast and visibility of potentially interesting features in the samples. In this article, we investigate additional structuring of the illumination for improved background rejection, and propose a new technique, “3D HiLo” where we combine two HiLo images processed from orthogonal directions to improve the condition of the 3D reconstruction. We present a comparative study of conventional structured illumination based demodulation methods, namely 3Phase and HiLo with a newly implemented 3D HiLo approach and demonstrate that the latter yields superior signal-to-background ratio in both lateral and axial dimensions, while simultaneously suppressing image processing artifacts. PMID:23262684
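The HiLo demodulation referred to above fuses the high spatial frequencies of a uniformly illuminated image (inherently optically sectioned) with the low frequencies recovered from the local modulation contrast of a structured-illumination image. The sketch below is a much-simplified 2D version using a box blur in place of a proper Gaussian filter and a crude contrast estimate; it is not the authors' 3D HiLo pipeline.

```python
import numpy as np

def hilo(uniform_img, structured_img, sigma=2.0, eta=1.0):
    """Simplified HiLo fusion: low frequencies from the local contrast of the
    structured image, high frequencies from a high-pass of the uniform image.
    Illustrative only; real HiLo uses Gaussian filters and careful scaling."""
    def lowpass(img):
        # separable box blur as a stand-in for a Gaussian low-pass
        k = int(2 * sigma) * 2 + 1
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        kern = np.ones(k) / k
        tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, "valid"), 1, padded)
        return np.apply_along_axis(lambda c: np.convolve(c, kern, "valid"), 0, tmp)

    hi = uniform_img - lowpass(uniform_img)          # sectioned high frequencies
    contrast = np.abs(uniform_img - structured_img)  # modulation depth ~ in focus
    lo = lowpass(contrast)                           # sectioned low frequencies
    return hi + eta * lo                             # fused HiLo image
```

The 3D HiLo idea in the article then combines two such fused results processed from orthogonal directions to balance lateral and axial background rejection.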
Optical Linear Algebra for Computational Light Transport
NASA Astrophysics Data System (ADS)
O'Toole, Matthew
Active illumination refers to optical techniques that use controllable lights and cameras to analyze the way light propagates through the world. These techniques confer many unique imaging capabilities (e.g. high-precision 3D scanning, image-based relighting, imaging through scattering media), but at a significant cost; they often require long acquisition and processing times, rely on predictive models for light transport, and cease to function when exposed to bright ambient sunlight. We develop a mathematical framework for describing and analyzing such imaging techniques. This framework is deeply rooted in numerical linear algebra, and models the transfer of radiant energy through an unknown environment with the so-called light transport matrix. Performing active illumination on a scene equates to applying a numerical operator on this unknown matrix. The brute-force approach to active illumination follows a two-step procedure: (1) optically measure the light transport matrix and (2) evaluate the matrix operator numerically. This approach is infeasible in general, because the light transport matrix is often much too large to measure, store, and analyze directly. Using principles from optical linear algebra, we evaluate these matrix operators in the optical domain, without ever measuring the light transport matrix in the first place. Specifically, we explore numerical algorithms that can be implemented partially or fully with programmable optics. These optical algorithms provide solutions to many longstanding problems in computer vision and graphics, including the ability to (1) photo-realistically change the illumination conditions of a given photo with only a handful of measurements, (2) accurately capture the 3D shape of objects in the presence of complex transport properties and strong ambient illumination, and (3) overcome the multipath interference problem associated with time-of-flight cameras. Most importantly, we introduce an all-new imaging regime, optical probing, that provides unprecedented control over which light paths contribute to a photo.
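The brute-force procedure described above can be made concrete: because light transport is linear, photographing the scene once per light source (one-hot illumination vectors) measures the columns of the transport matrix T, after which any novel lighting is a matrix-vector product. The `scene_response` callable below is a hypothetical stand-in for the physical camera-plus-scene.

```python
import numpy as np

def capture_transport(scene_response, n_lights):
    """Brute-force acquisition of the light transport matrix T: photograph the
    scene under each one-hot illumination vector to measure one column of T.

    scene_response : maps an illumination vector to a flattened photo
                     (stand-in for the real camera and scene).
    """
    cols = []
    for j in range(n_lights):
        l = np.zeros(n_lights)
        l[j] = 1.0                       # switch on only light j
        cols.append(scene_response(l))   # photo = column j of T
    return np.stack(cols, axis=1)        # shape: (pixels, lights)

def relight(T, illumination):
    """Linearity of light transport: the photo under any lighting is T @ l."""
    return T @ illumination
```

The thesis's point is that this explicit capture is usually infeasible (T is enormous), motivating optical-domain evaluation of matrix operators instead; the matvec above is the primitive those optical algorithms avoid materializing.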
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. We also experimentally show that a rolling-shutter-effect (RSE)-based VLC system cannot work over long transmission distances, whereas an under-sampled modulation (USM)-based VLC system is a good choice.
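In an RSE-based receiver, the rolling shutter converts the LED's on-off keying into bright and dark stripes across the frame, which can be thresholded row by row to recover bits. The sketch below shows the basic idea; real receivers additionally need stripe-boundary synchronization and equalization, and the `rows_per_bit` value is an assumed, pre-synchronized parameter.

```python
def decode_rolling_shutter(row_means, rows_per_bit):
    """Recover an on-off-keyed bit stream from the per-row mean intensities of
    a rolling-shutter frame: each bit spans rows_per_bit consecutive rows.
    Illustrative decoder with a global threshold; no sync or equalization."""
    thresh = sum(row_means) / len(row_means)   # global intensity threshold
    bits = []
    for i in range(0, len(row_means) - rows_per_bit + 1, rows_per_bit):
        chunk = row_means[i:i + rows_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > thresh else 0)
    return bits
```

At long range the stripes shrink below the camera's resolving power, which is why the paper finds RSE decoding fails there and turns to under-sampled modulation instead.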