A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Low-light-level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image stand out in vivid colors, to render background details with a near-natural color appearance, and to improve target discovery, detection, and identification. Low-light-level images, however, are heavily degraded by noise under low illumination, and existing color fusion methods are easily affected by noise in the low-light-level channel. Specifically, when the low-light-level image noise is large, the quality of the fused image decreases significantly, and targets in the infrared image may even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low-light-level image are introduced into the fusion process, improving the robustness of the color fusion. The fusion results remain good in low-light situations, showing that this method can effectively improve the quality of low-light-level and infrared fused images under low illumination conditions.
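The general idea of noise-adaptive fusion can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the MAD-based noise estimator, the linear weighting rule, and the `sigma_max` threshold are all assumptions of this sketch.

```python
import numpy as np

def estimate_noise_sigma(img):
    # Robust noise estimate: median absolute deviation of the residual
    # after subtracting a 3x3 local mean, scaled to a Gaussian sigma.
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    residual = img - local_mean
    return 1.4826 * np.median(np.abs(residual - np.median(residual)))

def noise_adaptive_fuse(lll, ir, sigma_max=30.0):
    # Down-weight the low-light-level (LLL) channel as its estimated
    # noise grows, so a noisy LLL image cannot swamp the IR targets.
    sigma = estimate_noise_sigma(lll)
    w = max(0.0, 1.0 - sigma / sigma_max)  # weight for the LLL channel
    fused = w * lll.astype(np.float64) + (1.0 - w) * ir.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8), w
```

A clean LLL image keeps full weight, while a heavily degraded one hands most of the fused result over to the IR channel.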
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive testing of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its rated lifetime, it was predicted that this would affect tomato classification. The objective of this study was to determine the minimum light level at which classification accuracy is affected. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effect on image characteristics. The results showed that light intensity affects two variables important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging detection technology captures, in addition to traditional intensity information, multi-dimensional polarization information, and thus improves the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensities were obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion techniques were then introduced: different polarization image fusion methods were applied to the acquired polarization images, fusion methods with superior performance for turbid media were discussed, and the processing results and data tables were given. Pixel-level, feature-level, and decision-level fusion algorithms were used to fuse DOLP (degree of linear polarization) images at these three levels of information fusion. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is obviously improved compared with a single image. Finally, the reasons for the contrast improvement under polarized light are analyzed.
Plant Chlorophyll Content Imager with Reference Detection Signals
NASA Technical Reports Server (NTRS)
Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)
2000-01-01
A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percentages of reflection in the two separate wavelength bands from the target plant are compared to provide a ratio video image which indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and will contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
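The guided-filter-based denoising step at the core of such a color-plus-mono pipeline can be sketched as below: the sharper, less noisy mono image guides the smoothing of the noisy color channel. This is a generic guided filter (He et al.) with a fixed 3x3 window, not the paper's adaptive, BJND-aware variant; the window size and `eps` value are assumptions.

```python
import numpy as np

def box3(img):
    # 3x3 mean filter with edge padding.
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def guided_denoise(guide, noisy, eps=100.0):
    # Guided filter: smooth `noisy` while keeping the edges present in
    # the cleaner `guide` (here, the mono sensor image).
    I = guide.astype(np.float64)
    p = noisy.astype(np.float64)
    mean_I, mean_p = box3(I), box3(p)
    cov_Ip = box3(I * p) - mean_I * mean_p
    var_I = box3(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # ~1 at strong guide edges, ~0 in flat areas
    b = mean_p - a * mean_I
    return box3(a) * I + box3(b)
```

In flat regions the filter averages the noise away; at guide edges the linear coefficient `a` approaches 1, so the edge is transferred rather than blurred.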
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
NASA Astrophysics Data System (ADS)
Gong, Rui; Xu, Haisong; Wang, Binyu; Luo, Ming Ronnier
2012-08-01
The image quality of two active matrix organic light emitting diode (AMOLED) smart-phone displays and two in-plane switching (IPS) ones was visually assessed at two levels of ambient lighting conditions corresponding to indoor and outdoor applications, respectively. Naturalness, colorfulness, brightness, contrast, sharpness, and overall image quality were evaluated via psychophysical experiment by categorical judgment method using test images selected from different application categories. The experimental results show that the AMOLED displays perform better on colorfulness because of their wide color gamut, while the high pixel resolution and high peak luminance of the IPS panels help the perception of brightness, contrast, and sharpness. Further statistical analysis of ANOVA indicates that ambient lighting levels have significant influences on the attributes of brightness and contrast.
Using compressive measurement to obtain images at ultra low-light-level
NASA Astrophysics Data System (ADS)
Ke, Jun; Wei, Ping
2013-08-01
In this paper, a compressive imaging architecture is used for ultra-low-light-level imaging. In such a system, features, instead of object pixels, are imaged onto a photocathode and then magnified by an image intensifier. By doing so, the system measurement SNR is increased significantly, so the new system can image objects at ultra-low light levels where a conventional system has difficulty. PCA projection is used to collect feature measurements in this work. A linear Wiener operator and a nonlinear method based on the FoE (Field of Experts) model are used to reconstruct objects, and root mean square error (RMSE) is used to quantify reconstruction quality.
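The PCA measurement and linear Wiener reconstruction pipeline can be illustrated on synthetic 1-D signals standing in for image data. This is a sketch under stated assumptions (a sinusoidal training set, white measurement noise, and a small number of features), not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "objects": smooth 1-D profiles drawn from a low-dimensional family.
n, k, m = 64, 8, 500          # signal length, number of features, training size
t = np.linspace(0, 1, n)
train = np.array([np.sin(2 * np.pi * (f + 1) * t + p)
                  for f, p in zip(rng.integers(0, 4, m),
                                  rng.uniform(0, 2 * np.pi, m))])

mu = train.mean(axis=0)
X = train - mu
C = X.T @ X / m                       # empirical signal covariance
# PCA projection: the top-k eigenvectors form the measurement matrix.
_, V = np.linalg.eigh(C)              # eigh returns ascending order
P = V[:, ::-1][:, :k].T               # k x n measurement matrix

sigma = 0.05                          # measurement noise std
x = np.sin(2 * np.pi * 2 * t + 0.3)   # unseen test object
y = P @ x + rng.normal(0, sigma, k)   # compressive feature measurement

# Linear Wiener (MMSE) reconstruction operator.
W = C @ P.T @ np.linalg.inv(P @ C @ P.T + sigma ** 2 * np.eye(k))
x_hat = mu + W @ (y - P @ mu)

rmse = np.sqrt(np.mean((x_hat - x) ** 2))
```

Because only k = 8 noisy feature values are measured instead of 64 pixels, each measurement gathers far more light, which is the SNR advantage the abstract describes.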
Optimisation approaches for concurrent transmitted light imaging during confocal microscopy.
Collings, David A
2015-01-01
The transmitted light detectors present on most modern confocal microscopes are an under-utilised tool for the live imaging of plant cells. As the light forming the image in this detector is not passed through a pinhole, out-of-focus light is not removed. It is this extended focus that allows the transmitted light image to provide cellular and organismal context for fluorescence optical sections generated confocally. More importantly, the transmitted light detector provides images that have spatial and temporal registration with the fluorescence images, unlike images taken with a separately-mounted camera. Because plants often present difficulties for transmitted light imaging, owing to the pigments and air pockets in leaves, this study documents several approaches to improving transmitted light images, beginning with ensuring that the light paths through the microscope are correctly aligned (Köhler illumination). Pigmented samples can be imaged in real colour using sequential scanning with red, green and blue lasers. The resulting transmitted light images can be optimised and merged in ImageJ to generate colour images that maintain registration with concurrent fluorescence images. For faster imaging of pigmented samples, transmitted light images can be formed with non-absorbed wavelengths. Transmitted light images of Arabidopsis leaves expressing GFP can be improved by concurrent illumination with green and blue light. If the blue light used for YFP excitation is blocked from the transmitted light detector with a cheap coloured glass filter, the non-absorbed green light will form an improved transmitted light image. Changes in sample colour can be quantified by transmitted light imaging. This has been documented in red onion epidermal cells, where changes in vacuolar pH triggered by the weak base methylamine result in measurable colour changes in the vacuolar anthocyanin. Many plant cells contain visible levels of pigment.
The transmitted light detector provides a useful tool for documenting and measuring changes in these pigments while maintaining registration with confocal imaging.
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images and, with gated on-chip integration, can also record low-light-level fluorescence images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
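The benefit of gated on-chip integration for low-light fluorescence can be illustrated numerically: averaging N noisy exposures of a static scene improves SNR by roughly √N. The sketch below is a simplified model with purely Gaussian read noise (the frame count and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def snr(img, truth):
    # Signal-to-noise ratio: mean true level over residual noise std.
    err = img - truth
    return truth.mean() / err.std()

truth = np.full((64, 64), 50.0)                     # dim fluorescence signal
frames = truth + rng.normal(0, 25, (32, 64, 64))    # 32 noisy exposures

single = frames[0]
integrated = frames.mean(axis=0)   # analogue of on-chip integration

snr_1 = snr(single, truth)         # ~2 for these parameters
snr_32 = snr(integrated, truth)    # ~sqrt(32) times higher
```

This is why a gated, integrating CCD can pull faint fluorescence out of noise that would swamp a single video-rate frame.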
LEDs as light source: examining quality of acquired images
NASA Astrophysics Data System (ADS)
Bachnak, Rafic; Funtanilla, Jeng; Hernandez, Jose
2004-05-01
Recent advances in technology have made light emitting diodes (LEDs) viable in a number of applications, including vehicle stoplights, traffic lights, machine-vision inspection, illumination, and street signs. This paper presents the results of comparing images taken by a videoscope using two different light sources. One of the sources is the internal metal halide lamp and the other is an LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection. Also, images were qualitatively compared using image registration and transformation. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. The paper presents the results and discusses the usefulness and shortcomings of the various comparison methods.
NASA Astrophysics Data System (ADS)
Zhang, Liandong; Bai, Xiaofeng; Song, De; Fu, Shencheng; Li, Ye; Duanmu, Qingduo
2015-03-01
Low-light-level night vision technology amplifies low-light-level signals, using photons and photoelectrons as the information carriers, until they are bright enough to be seen by the naked eye. The invention of the micro-channel plate made high-performance, miniaturized low-light-level night vision devices possible. The device considered here is a double-proximity-focusing low-light-level image intensifier, in which a micro-channel plate is placed close to both the photocathode and the phosphor screen. The advantages of proximity focusing are small size, light weight, low power consumption, absence of distortion, fast response, wide dynamic range, and so on. The micro-channel plate (with metal electrodes on both faces), the photocathode, and the phosphor screen are placed parallel to one another. When the image intensifier operates, a voltage is applied between the photocathode and the input of the micro-channel plate. Electrons emitted from the photocathode by incident photons move toward the micro-channel plate under the electric field in the first proximity-focusing region and are then multiplied in the micro-channels. Once the distribution of the electrostatic field and its equipotential lines in the first proximity-focusing region is determined, the trajectories of the emitted electrons can be calculated and simulated, and the resolution of the image tube can in turn be determined. However, these field distributions are complex because of the many micro-channels in the plate. This paper simulates the electrostatic field distribution of the first proximity region of a double-proximity-focusing low-light-level image intensifier with the finite element analysis software Ansoft Maxwell 3D. The electrostatic field distributions of the first proximity region are compared as the micro-channel plate's pore size, spacing, and inclination angle are varied.
We believe that the electron beam trajectories in the first proximity region can be better simulated once these electrostatic fields have been simulated.
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text images, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm balances the light distribution while keeping high contrast in the image. For graphic images, adaptive section control using piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and computational cost, making the approach applicable to real-time systems.
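The text-image branch (background estimation followed by per-pixel nonuniform gain) can be sketched for a single scan line as below. This is a minimal illustration, not the paper's algorithm: the local-maximum background estimator, window size, and target level are assumptions of this sketch.

```python
import numpy as np

def balance_text_line(line, target=230.0, win=15, eps=1.0):
    # Estimate the slowly varying background of one scan line as a local
    # maximum (text strokes are darker than the paper), then apply a
    # per-pixel gain that maps the background to a uniform target level.
    line = line.astype(np.float64)
    n = len(line)
    pad = np.pad(line, win, mode="edge")
    background = np.array([pad[i:i + 2 * win + 1].max() for i in range(n)])
    gain = target / (background + eps)        # nonuniform per-pixel gain
    return np.clip(line * gain, 0, 255)
```

Processing one line at a time, as here, is what keeps the memory footprint small enough for real-time hardware.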
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light emitting diode (LED) multispectral imaging system. The method is based on the linear response zone of the analog-to-digital converter (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity region of the camera's spectral response is introduced to increase the number of A/D quantization levels within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire images under active light irradiation. The least squares method is employed to precisely extract the desired images. One wavelength in LED-based multispectral imaging was taken as an example. Experiments proved that both the gray-scale resolution and the accuracy of the information in the images acquired by the proposed method were significantly improved. The optimized method opens up avenues for hyperspectral imaging of biological tissue.
Quantitative Assessment of Fat Levels in Caenorhabditis elegans Using Dark Field Microscopy
Fouad, Anthony D.; Pu, Shelley H.; Teng, Shelly; Mark, Julian R.; Fu, Moyu; Zhang, Kevin; Huang, Jonathan; Raizen, David M.; Fang-Yen, Christopher
2017-01-01
The roundworm Caenorhabditis elegans is widely used as a model for studying conserved pathways for fat storage, aging, and metabolism. The most broadly used methods for imaging fat in C. elegans require fixing and staining the animal. Here, we show that dark field images acquired through an ordinary light microscope can be used to estimate fat levels in worms. We define a metric based on the amount of light scattered per area, and show that this light scattering metric is strongly correlated with worm fat levels as measured by Oil Red O (ORO) staining across a wide variety of genetic backgrounds and feeding conditions. Dark field imaging requires no exogenous agents or chemical fixation, making it compatible with live worm imaging. Using our method, we track fat storage with high temporal resolution in developing larvae, and show that fat storage in the intestine increases in at least one burst during development. PMID:28404661
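The light-scattered-per-area metric can be sketched as follows: segment the worm as pixels brighter than the dark background, then divide the total scattered light by the segmented area. This is an illustrative reduction of the idea; the fixed threshold is an assumption, not the authors' segmentation procedure.

```python
import numpy as np

def scatter_per_area(dark_field, thresh=10):
    # Dark field: background is near zero, scattering tissue is bright.
    # Metric = total scattered light in the worm mask / worm area.
    img = dark_field.astype(np.float64)
    mask = img > thresh
    area = mask.sum()
    if area == 0:
        return 0.0
    return img[mask].sum() / area
```

Normalizing by area makes the metric compare worms of different sizes, so a fatter worm of the same size scores higher than a leaner one.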
Low Voltage Low Light Imager and Photodetector
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh (Inventor); Martin, Chris (Inventor); Hoenk, Michael E. (Inventor)
2013-01-01
Highly efficient, low energy, low light level imagers and photodetectors are provided. In particular, a novel class of Delta-Doped Electron Bombarded Array (DDEBA) photodetectors that will reduce the size, mass, power, complexity, and cost of conventional imaging systems while improving performance, by using a thinned imager that is capable of detecting low-energy electrons, has high gain, and is of low noise.
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity in low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove noise in a low-light-level CIS, so a digital CDS is adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can thus be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS was implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB (RMS) with digital CDS under dark conditions, a 7.8× reduction compared to the same sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination levels down to 0.1 lux.
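The off-chip subtraction at the heart of digital CDS is simple to demonstrate: because the reset (kTC) noise and the fixed-pattern offsets are common to the reset sample and the signal sample of the same frame, subtracting the two cancels them. The noise magnitudes and pedestal below are illustrative assumptions, not the sensor's measured values.

```python
import numpy as np

def digital_cds(reset_frame, signal_frame):
    # Off-chip correlated double sampling: subtract each pixel's sampled
    # reset level from its integrated signal level. Offsets common to
    # both samples (reset noise, column FPN) cancel out.
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)

rng = np.random.default_rng(5)
offset = rng.normal(0, 20, (32, 32))   # pixel/column fixed-pattern offsets
ktc = rng.normal(0, 8, (32, 32))       # reset noise, common to both samples
true_signal = 100                      # integrated photo-signal, in LSB

reset = np.round(500 + offset + ktc).astype(np.int32)
signal = np.round(500 + offset + ktc + true_signal).astype(np.int32)
cds = digital_cds(reset, signal)
```

After subtraction, the frame collapses to the photo-signal (to within ADC rounding), even though the raw samples vary by tens of LSB from pixel to pixel.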
Object detectability at increased ambient lighting conditions.
Pollard, Benjamin J; Chawla, Amarpreet S; Delong, David M; Hashimoto, Noriyuki; Samei, Ehsan
2008-06-01
Under typical dark conditions encountered in diagnostic reading rooms, a reader's pupils will contract and dilate as the visual focus intermittently shifts between the high luminance display and the darker background wall, resulting in increased visual fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however, reduce the severity of these pupillary adjustments by minimizing the difference between the luminance level to which the eyes adapt while viewing an image (L(adp)) and the luminance level of diffusely reflected light from the area surrounding the display (L(s)). Although ambient lighting in reading rooms has conventionally been kept at a minimum to maintain the perceived contrast of film images, proper Digital Imaging and Communications in Medicine (DICOM) calibration of modern medical-grade liquid crystal displays can compensate for minor lighting increases with very little loss of image contrast. This paper describes two psychophysical studies developed to evaluate and refine optimum reading room ambient lighting conditions through the use of observational tasks intended to simulate real clinical practices. The first study utilized the biologic contrast response of the human visual system to determine a range of representative L(adp) values for typical medical images. Readers identified low contrast horizontal objects in circular foregrounds of uniform luminance (5, 12, 20, and 30 cd/m2) embedded within digitized mammograms. The second study examined the effect of increased ambient lighting on the detection of subtle objects embedded in circular foregrounds of uniform luminance (5, 12, and 35 cd/m2) centered within a constant background of 12 cd/m2 luminance. 
The images were displayed under a dark room condition (1 lux) and an increased ambient lighting level (50 lux) such that the luminance level of the diffusely reflected light from the background wall was approximately equal to the image L(adp) value of 12 cd/m2. Results from the first study demonstrated that observer true positive and false positive detection rates and true positive detection times were considerably better while viewing foregrounds at 12 and 20 cd/m2 than at the other foreground luminance levels. Results from the second study revealed that under increased room illuminance, the average true positive detection rate improved a statistically significant amount from 39.3% to 55.6% at 5 cd/m2 foreground luminance. Additionally, the true positive rate increased from 46.4% to 56.6% at 35 cd/m2 foreground luminance, and decreased slightly from 90.2% to 87.5% at 12 cd/m2 foreground luminance. False positive rates at all foreground luminance levels remained approximately constant with increased ambient lighting. Furthermore, under increased room illuminance, true positive detection times declined at every foreground luminance level, with the most considerable decrease (approximately 500 ms) at the 5 cd/m2 foreground luminance. The first study suggests that L(adp) of typical mammograms lies between 12 and 20 cd/m2, leading to an optimum reading room illuminance of approximately 50-80 lux. Findings from the second study provide psychophysical evidence that ambient lighting may be increased to a level within this range, potentially improving radiologist comfort, without deleterious effects on diagnostic performance.
Backscatter absorption gas imaging systems and light sources therefor
Kulp, Thomas Jan [Livermore, CA; Kliner, Dahv A. V. [San Ramon, CA; Sommers, Ricky [Oakley, CA; Goers, Uta-Barbara [Campbell, NY; Armstrong, Karla M [Livermore, CA
2006-12-19
The location of gases that are not visible to the unaided human eye can be determined using tuned light sources that spectroscopically probe the gases and cameras that can provide images corresponding to the absorption of the gases. The present invention is a light source for a backscatter absorption gas imaging (BAGI) system, and an imaging system incorporating that light source, that can be used to remotely detect and produce images of "invisible" gases. The inventive light source has a light-producing element, an optical amplifier, and an optical parametric oscillator to generate wavelength-tunable light in the IR. By using a multi-mode light source and an amplifier that operates with 915 nm pump sources, the power consumption of the light source is reduced to a level that allows battery operation for long periods of time. In addition, the light source is tunable over the absorption bands of many hydrocarbons, making it useful for detecting hazardous gases.
Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image
NASA Astrophysics Data System (ADS)
Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian
2014-07-01
To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile motion control platform is designed to control the pose of the FOG optical path component in six degrees of freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after manual assembly of the FOG, so that the entire light transmission process of key sections of the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first computed to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the specific image quality. Finally, after segmentation of the light-leaking region, the typical leakage types, such as point defects, wedge defects, and surface defects, can be identified. By using the image-quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.
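The three image quality metrics named above can be given simple concrete forms, sketched below. These are generic formulations (Michelson-style contrast, peak gradient magnitude as a sharpness proxy, and residual-after-smoothing as a noise proxy), assumptions of this sketch rather than the paper's exact definitions:

```python
import numpy as np

def region_contrast(img, mask):
    # Michelson-style contrast between a candidate region and its surround.
    fg = img[mask].mean()
    bg = img[~mask].mean()
    return abs(fg - bg) / (fg + bg + 1e-9)

def edge_sharpness(img):
    # Peak gradient magnitude; blur spreads edges and lowers this value.
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy).max()

def noise_level(img):
    # Noise proxy: std of the residual after 3x3 mean smoothing.
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    mean3 = sum(pad[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    return (img - mean3).std()
```

Metrics like these let the pipeline pick a segmentation algorithm suited to each image: high contrast and sharp edges favor simple flood fill, while blurry or noisy images call for the more robust graph-cut path.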
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of its denomination and input direction to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
NASA Astrophysics Data System (ADS)
Guan, Jinge; Ren, Wei; Cheng, Yaoyu
2018-04-01
We demonstrate an efficient polarization-difference imaging system in turbid conditions by using the Stokes vector of light. The interaction of scattered light with the polarizer is analyzed by the Stokes-Mueller formalism. An interpolation method is proposed to replace the mechanical rotation of the analyzer's polarization axis, and its performance is verified by experiments at different turbidity levels. We show that, compared with direct imaging, the Stokes-vector-based imaging method can effectively reduce the effect of light scattering and enhance the image contrast.
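Replacing mechanical analyzer rotation with computation rests on the Mueller-calculus result that the intensity behind an ideal linear polarizer at angle θ is I(θ) = ½(S0 + S1·cos 2θ + S2·sin 2θ). A sketch of the idea (illustrative only; the example Stokes parameters are assumed and the paper's interpolation details may differ): three fixed-angle measurements recover S0, S1, S2, after which I(θ) can be synthesized at any analyzer angle:

```python
import math

def analyzer_intensity(S0, S1, S2, theta):
    """Intensity behind an ideal linear polarizer at angle theta (radians)."""
    return 0.5 * (S0 + S1 * math.cos(2 * theta) + S2 * math.sin(2 * theta))

# Partially polarized light (assumed example Stokes parameters).
S0, S1, S2 = 1.0, 0.30, 0.20

# "Measure" at three fixed analyzer angles: 0, 45, and 90 degrees.
I0  = analyzer_intensity(S0, S1, S2, 0.0)
I45 = analyzer_intensity(S0, S1, S2, math.pi / 4)
I90 = analyzer_intensity(S0, S1, S2, math.pi / 2)

# Recover the linear Stokes parameters from the three measurements.
S0_est = I0 + I90
S1_est = I0 - I90
S2_est = 2 * I45 - S0_est

# Synthesize the reading at an arbitrary angle without rotating anything.
theta = math.radians(30)
I_synth = analyzer_intensity(S0_est, S1_est, S2_est, theta)
```

Applied per pixel, this yields the minimum- and maximum-transmission images needed for polarization-difference imaging without any moving parts.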
Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B
2000-04-10
We have developed an approach to study, in single living epithelial cells, both cell migration and transcriptional activation, the latter evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope connected to an ultralow-light-level photon-counting camera and an image-acquisition card associated with specialized image analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain was optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed to observe luminescence. This setup allows us to measure by image analysis the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software to quantify the kinetics of luminescence changes in migrating cells.
Parallel-multiplexed excitation light-sheet microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Xu, Dongli; Zhou, Weibin; Peng, Leilei
2017-02-01
Laser scanning light-sheet imaging allows fast 3D imaging of live samples with minimal bleaching and photo-toxicity. Existing light-sheet techniques have very limited capability in multi-label imaging. Hyper-spectral imaging is needed to unmix commonly used fluorescent proteins with large spectral overlaps. The challenge, however, is how to perform hyper-spectral imaging without sacrificing imaging speed, so that dynamic and complex events can be captured live. We report wavelength-encoded structured illumination light sheet imaging (λ-SIM light-sheet), a novel light-sheet technique that is capable of parallel multiplexing in multiple excitation-emission spectral channels. λ-SIM light-sheet captures images of all possible excitation-emission channels in true parallel. It does not compromise imaging speed and is capable of distinguishing labels by both excitation and emission spectral properties, which facilitates unmixing fluorescent labels with overlapping spectral peaks and will allow more labels to be used together. We built a hyper-spectral light-sheet microscope that combines λ-SIM with an extended field of view through Bessel beam illumination. The system has a 250-micron-wide field of view and confocal-level resolution. The microscope, equipped with multiple laser lines and an unlimited number of spectral channels, can potentially image up to 6 commonly used fluorescent proteins from blue to red. Results from in vivo imaging of live zebrafish embryos expressing various genetic markers and sensors will be shown. Hyper-spectral images from λ-SIM light-sheet will allow multiplexed and dynamic functional imaging in live tissue and animals.
Tomaszewski, Michał; Ruszczak, Bogdan; Michalski, Paweł
2018-06-01
Electrical insulators are elements of power lines that require periodical diagnostics. Due to their location on the components of high-voltage power lines, their imaging can be cumbersome and time-consuming, especially under varying lighting conditions. Insulator diagnostics with the use of visual methods may require localizing insulators in the scene. Studies focused on insulator localization in the scene apply a number of methods, including: texture analysis, MRF (Markov Random Field), Gabor filters or GLCM (Gray Level Co-Occurrence Matrix) [1], [2]. Some methods, e.g. those which localize insulators based on colour analysis [3], rely on object and scene illumination, which is why the images from the dataset are taken under varying lighting conditions. The dataset may also be used to compare the effectiveness of different methods of localizing insulators in images. This article presents high-resolution images depicting a long rod electrical insulator under varying lighting conditions and against different backgrounds: crops, forest and grass. The dataset contains images with visible laser spots (generated by a device emitting light at the wavelength of 532 nm) and images without such spots, as well as complementary data concerning the illumination level and insulator position in the scene, the number of registered laser spots, and their coordinates in the image. The laser spots may be used to support object-localizing algorithms, while the images without spots may serve as a source of information for those algorithms which do not need spots to localize an insulator.
Evaluation of visual acuity with Gen 3 night vision goggles
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Kaiser, Mary K.
1994-01-01
Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Aviator's Night Vision Imaging System (ANVIS-6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated by representative night-sky conditions (e.g., full moon, starlight). Results show that, as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.
Differences in the intensity of light-induced fluorescence emitted by resin composites.
Kim, Bo-Ra; Kang, Si-Mook; Kim, Gyung-Min; Kim, Baek-Il
2016-03-01
The aims of this study were to compare the intensities of fluorescence emitted by different resin composites as detected using quantitative light-induced fluorescence (QLF) technology, and to compare the fluorescence intensity contrast with the color contrast between a restored composite and the adjacent region of the tooth. Six brands of light-cured resin composites (shade A2) were investigated. The composites were used to prepare composite discs and to fill holes that had been prepared in extracted human teeth. White-light and fluorescence images of all specimens were obtained using a fluorescence camera based on QLF technology (QLF-D) and converted into 8-bit grayscale images. The fluorescence intensity of the discs, as well as the fluorescence intensity contrast and the color contrast between the composite restoration and adjacent tooth region, were calculated as grayscale levels. The grayscale levels for the composite discs differed significantly with the brand (p<0.001): DenFil (10.84±0.35, mean±SD), Filtek Z350 (58.28±1.37), Premisa (156.94±1.58), Grandio (177.20±0.81), Charisma (207.05±0.77), and Gradia Direct Posterior (211.52±1.66). The difference in grayscale levels between a resin restoration and the adjacent tooth was significantly greater in fluorescence images than in white-light images for each brand except Filtek Z350 (p<0.05). However, the Filtek Z350 restoration was still distinguishable from the adjacent tooth in a fluorescence image. The intensities of fluorescence detected from the resin composites varied. The differences between the composite and adjacent tooth were greater for the fluorescence intensity contrast than for the colors observed in the white-light images. Copyright © 2016 Elsevier B.V. All rights reserved.
Fluorescence Imaging Reveals Surface Contamination
NASA Technical Reports Server (NTRS)
Schirato, Richard; Polichar, Raulf
1992-01-01
In technique to detect surface contamination, object inspected is illuminated by ultraviolet light to make contaminants fluoresce; low-light-level video camera views fluorescence. Image-processing techniques quantify distribution of contaminants. If fluorescence of material expected to contaminate surface is not intense, material is tagged with low concentration of dye.
Quantitative luminescence imaging system
Erwin, D.N.; Kiel, J.L.; Batishko, C.R.; Stahl, K.A.
1990-08-14
The QLIS images and quantifies low-level chemiluminescent reactions in an electromagnetic field. It is capable of real time nonperturbing measurement and simultaneous recording of many biochemical and chemical reactions such as luminescent immunoassays or enzyme assays. The system comprises image transfer optics, a low-light level digitizing camera with image intensifying microchannel plates, an image processor, and a control computer. The image transfer optics may be a fiber image guide with a bend, or a microscope, to take the light outside of the RF field. Output of the camera is transformed into a localized rate of cumulative digitized data or enhanced video display or hard-copy images. The system may be used as a luminescent microdosimetry device for radiofrequency or microwave radiation, as a thermal dosimeter, or in the dosimetry of ultrasound (sonoluminescence) or ionizing radiation. It provides a near-real-time system capable of measuring the extremely low light levels from luminescent reactions in electromagnetic fields in the areas of chemiluminescence assays and thermal microdosimetry, and is capable of near-real-time imaging of the sample to allow spatial distribution analysis of the reaction. It can be used to instrument three distinctly different irradiation configurations, comprising (1) RF waveguide irradiation of a small Petri-dish-shaped sample cell, (2) RF irradiation of samples in a microscope for microscopic imaging and measurement, and (3) RF irradiation of small to human body-sized samples in an anechoic chamber. 22 figs.
Quantitative luminescence imaging system
Erwin, David N.; Kiel, Johnathan L.; Batishko, Charles R.; Stahl, Kurt A.
1990-01-01
The QLIS images and quantifies low-level chemiluminescent reactions in an electromagnetic field. It is capable of real time nonperturbing measurement and simultaneous recording of many biochemical and chemical reactions such as luminescent immunoassays or enzyme assays. The system comprises image transfer optics, a low-light level digitizing camera with image intensifying microchannel plates, an image processor, and a control computer. The image transfer optics may be a fiber image guide with a bend, or a microscope, to take the light outside of the RF field. Output of the camera is transformed into a localized rate of cumulative digitized data or enhanced video display or hard-copy images. The system may be used as a luminescent microdosimetry device for radiofrequency or microwave radiation, as a thermal dosimeter, or in the dosimetry of ultrasound (sonoluminescence) or ionizing radiation. It provides a near-real-time system capable of measuring the extremely low light levels from luminescent reactions in electromagnetic fields in the areas of chemiluminescence assays and thermal microdosimetry, and is capable of near-real-time imaging of the sample to allow spatial distribution analysis of the reaction. It can be used to instrument three distinctly different irradiation configurations, comprising (1) RF waveguide irradiation of a small Petri-dish-shaped sample cell, (2) RF irradiation of samples in a microscope for microscopic imaging and measurement, and (3) RF irradiation of small to human body-sized samples in an anechoic chamber.
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
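The core idea can be illustrated with a toy sketch (not the authors' algorithm, whose brightness, context, noise, and anti-saturation weighting is more elaborate): sum each pixel's 2×2 neighborhood to raise the signal level, then blend the binned value with the original according to local brightness so that bright areas are not driven into saturation:

```python
import numpy as np

def adaptive_bin(img):
    """Toy adaptive 2x2 binning: near-full binning gain in dark areas,
    little gain in bright areas (anti-saturation). 8-bit input/output."""
    p = img.astype(np.float64)
    # 2x2 neighborhood sum (same-size output via edge padding).
    padded = np.pad(p, ((0, 1), (0, 1)), mode="edge")
    s = padded[:-1, :-1] + padded[:-1, 1:] + padded[1:, :-1] + padded[1:, 1:]
    # Blending weight: near 1 in the dark, near 0 close to saturation.
    w = (1.0 - p / 255.0) ** 2
    out = (1.0 - w) * p + w * s
    return np.clip(out, 0, 255).astype(np.uint8)

dark   = np.full((8, 8),  20, dtype=np.uint8)
bright = np.full((8, 8), 240, dtype=np.uint8)
print(adaptive_bin(dark)[0, 0], adaptive_bin(bright)[0, 0])
```

Note how a dark pixel is amplified roughly toward its 4-pixel block sum while an already-bright pixel is left nearly unchanged, mirroring the anti-saturation goal stated in the abstract; the neighborhood sum also averages down relative noise, unlike plain intensity amplification.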
Sun Emits a Mid-Level Flare on Dec. 4, 2014
2017-12-08
The sun emitted a solar flare on Dec. 4, 2014, seen as the flash of light in this image from NASA's Solar Dynamics Observatory. The image blends two wavelengths of extreme ultraviolet light – 131 and 171 Angstroms – which are typically colored in teal and gold, respectively. Read more: 1.usa.gov/121n7PP Image Credit: NASA/SDO
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun
2016-10-01
Misalignment of the binocular optical axes of a photoelectric instrument directly degrades the observation effect, so a binocular optical axis parallelism digital calibration system is designed. On the basis of the calibration principle for the optical axes of binocular photoelectric instruments, the system scheme is designed and realized; it comprises four modules: a multiband parallel light tube, an optical axis translation unit, an image acquisition system, and a software system. According to the different characteristics of the thermal infrared imager and the low-light-level night viewer, different algorithms are used to localize the center of the cross reticle, and binocular optical axis parallelism calibration is thereby realized for both device types.
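A common way to localize the center of a cross reticle in such calibration images is an intensity-weighted centroid of the thresholded reticle pixels (a generic sketch; the paper applies different, channel-specific algorithms for the thermal and low-light imagers):

```python
import numpy as np

def cross_center(img, thresh):
    """Intensity-weighted centroid (row, col) of pixels above `thresh`."""
    mask = img > thresh
    w = img[mask].astype(np.float64)
    ys, xs = np.nonzero(mask)
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

# Synthetic reticle: a cross centered at row 31, col 45.
img = np.zeros((64, 96), dtype=np.uint8)
img[31, 25:66] = 180   # horizontal bar
img[11:52, 45] = 180   # vertical bar
cy, cx = cross_center(img, thresh=90)
print(cy, cx)          # 31.0 45.0
```

Comparing the recovered centers of the two channels against the collimator reference then gives the parallelism error to be corrected.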
Jemielita, Matthew; Taormina, Michael J; Delaurier, April; Kimmel, Charles B; Parthasarathy, Raghuveer
2013-12-01
The combination of genetically encoded fluorescent proteins and three-dimensional imaging enables cell-type-specific studies of embryogenesis. Light sheet microscopy, in which fluorescence excitation is provided by a plane of laser light, is an appealing approach to live imaging due to its high speed and efficient use of photons. While the advantages of rapid imaging are apparent from recent work, the importance of low light levels to studies of development is not well established. We examine the zebrafish opercle, a craniofacial bone that exhibits pronounced shape changes at early developmental stages, using both spinning disk confocal and light sheet microscopies of fluorescent osteoblast cells. We find normal and aberrant opercle morphologies for specimens imaged with short time intervals using light sheet and spinning disk confocal microscopies, respectively, under equivalent exposure conditions over developmentally-relevant time scales. Quantification of shapes reveals that the differently imaged specimens travel along distinct trajectories in morphological space. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Radiological image presentation requires consideration of human adaptation characteristics
NASA Astrophysics Data System (ADS)
O'Connell, N. M.; Toomey, R. J.; McEntee, M.; Ryan, J.; Stowe, J.; Adams, A.; Brennan, P. C.
2008-03-01
Visualisation of anatomical or pathological image data is highly dependent on the eye's ability to discriminate between image brightnesses, and this is best achieved when these data are presented to the viewer at luminance levels to which the eye is adapted. Current ambient light recommendations are often linked to overall monitor luminance, but this relies on specific regions of interest matching overall monitor brightness. The current work investigates the luminances of specific regions of interest within three image types: postero-anterior (PA) chest, PA wrist, and computerised tomography (CT) of the head. Luminance levels were measured within the hilar region and peripheral lung, the distal radius, and the supra-ventricular grey matter, respectively. For each image type, average monitor luminances were calculated with a calibrated photometer at ambient light levels of 0, 100 and 400 lux. Thirty samples of each image type were employed, resulting in a total of over 6,000 measurements. Results demonstrate that average monitor luminances varied from clinically-significant values by up to a factor of 4, 2 and 6 for chest, wrist and CT head images respectively. Values for the thoracic hilum and wrist were higher, and for the peripheral lung and CT brain lower, than overall monitor levels. The ambient light level had no impact on the results. The results demonstrate that clinically important radiological information for common radiological examinations is not being presented to the viewer in a way that facilitates optimised visual adaptation and subsequent interpretation. The importance of image-processing algorithms focussing on clinically-significant anatomical regions instead of radiographic projections is highlighted.
NASA Astrophysics Data System (ADS)
Jantzen, Connie; Slagle, Rick
1997-05-01
The distinction between exposure time and sample rate is often the first point raised in any discussion of high speed imaging. Many high speed events require exposure times considerably shorter than those that can be achieved solely by the sample rate of the camera, where exposure time equals 1/sample rate. Gating, a method of achieving short exposure times in digital cameras, is often difficult to achieve for exposure time requirements shorter than 100 microseconds. This paper discusses the advantages and limitations of using the short duration light pulse of a near infrared laser with high speed digital imaging systems. By closely matching the output wavelength of the pulsed laser to the peak near infrared response of current sensors, high speed image capture can be accomplished at very low (visible) light levels of illumination. By virtue of the short duration light pulse, adjustable to as short as two microseconds, image capture of very high speed events can be achieved at relatively low sample rates of less than 100 pictures per second, without image blur. For our initial investigations, we chose a ballistic subject. The results of early experimentation revealed the limitations of applying traditional ballistic imaging methods when using a pulsed infrared light source with a digital imaging system. These early disappointing results clarified the need to further identify the unique system characteristics of the digital imager and pulsed infrared combination. It was also necessary to investigate how the infrared reflectance and transmittance of common materials affects the imaging process. This experimental work yielded a surprising, successful methodology which will prove useful in imaging ballistic and weapons tests, as well as forensics, flow visualizations, spray pattern analyses, and nocturnal animal behavioral studies.
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved through closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for purposes of long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.
Bessel light sheet structured illumination microscopy
NASA Astrophysics Data System (ADS)
Noshirvani Allahabadi, Golchehr
Biomedical researchers who use animals to model disease and treatment need fast, deep, noninvasive, and inexpensive multi-channel imaging methods. Traditional fluorescence microscopy meets those criteria only to an extent. Specifically, two-photon and confocal microscopy, the two most commonly used methods, are limited in penetration depth, cost, resolution, and field of view; in addition, two-photon microscopy has limited ability in multi-channel imaging. Light sheet microscopy, a fast-developing 3D fluorescence imaging method, offers attractive advantages over traditional two-photon and confocal microscopy. It is much more applicable for in vivo 3D time-lapsed imaging, owing to its selective illumination of a single tissue layer, superior speed, low light exposure, high penetration depth, and low levels of photobleaching. However, standard light sheet microscopy using Gaussian beam excitation has two main disadvantages: 1) the field of view (FOV) is limited by the depth of focus of the Gaussian beam; 2) light-sheet images can be degraded by scattering, which limits the penetration of the excitation beam and blurs emission images in deep tissue layers. While two-sided sheet illumination, which doubles the field of view by illuminating the sample from opposite sides, offers a potential solution, the technique adds complexity and cost to the imaging system. We investigate a new technique to address these limitations: Bessel light sheet microscopy in combination with incoherent nonlinear Structured Illumination Microscopy (SIM). Results demonstrate that, at visible wavelengths, Bessel excitation penetrates up to 250 microns deep into scattering media with single-sided illumination. The Bessel light sheet microscope achieves confocal-level resolution: 0.3 micron lateral and 1 micron axial.
Incoherent nonlinear SIM further reduces the diffused background in Bessel light sheet images, resulting in confocal-quality images in thick tissue. The technique was applied to live transgenic zebrafish tg(kdrl:GFP), and the sub-cellular structure of the fish vasculature, genetically labeled with GFP, was captured in 3D. The superior speed of the microscope enables us to acquire signal from 200 layers of a thick sample in 4 minutes. The compact microscope uses exclusively off-the-shelf components and offers a low-cost imaging solution for studying small animal models or tissue samples.
NASA Astrophysics Data System (ADS)
Randunu Pathirannehelage, Nishantha
Fourier telescopy imaging is a recently-developed imaging method that relies on active structured-light illumination of the object. Reflected/scattered light is measured by a large "light bucket" detector; processing of the detected signal yields the magnitude and phase of spatial frequency components of the object reflectance or transmittance function. An inverse Fourier transform results in the image. In 2012 a novel method, known as time-average Fourier telescopy (TAFT), was introduced by William T. Rhodes as a means for diffraction-limited imaging through ground-level atmospheric turbulence. This method, which can be applied to long horizontal-path terrestrial imaging, addresses a need that is not solved by the adaptive optics methods being used in astronomical imaging. Field-experiment verification of the TAFT concept requires instrumentation that is not available at Florida Atlantic University. The objective of this doctoral research program is thus to demonstrate, in the absence of full-scale experimentation, the feasibility of time-average Fourier telescopy through (a) the design, construction, and testing of small-scale laboratory instrumentation capable of exploring basic Fourier telescopy data-gathering operations, and (b) the development of MATLAB-based software capable of demonstrating the effect of kilometer-scale passage of laser beams through ground-level turbulence in a numerical simulation of TAFT.
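The core reconstruction step named above, an inverse Fourier transform of the measured spatial-frequency components, can be sketched numerically (a toy simulation with an assumed random object; turbulence and the time-average step are not modeled):

```python
import numpy as np

rng = np.random.default_rng(7)
obj = rng.random((32, 32))        # assumed object reflectance map

# Fourier telescopy "measures" complex spatial-frequency components of the
# object via structured illumination and a light-bucket detector; here we
# obtain them directly from the object's discrete Fourier transform.
measured = np.fft.fft2(obj)

# Image formation: inverse Fourier transform of the measured components.
image = np.fft.ifft2(measured).real

# With full frequency coverage the reconstruction is exact; a real system
# samples a finite set of frequencies, yielding a band-limited image.
print(np.allclose(image, obj))    # True
```

Masking out high-frequency components of `measured` before the inverse transform reproduces the resolution limit set by the largest illumination-aperture baseline.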
Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method
NASA Astrophysics Data System (ADS)
Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan
2018-04-01
Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in the remote-sensing and security areas.
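For context, the baseline correlation reconstruction that such algorithms improve on can be simulated in a few lines (a toy model with additive bucket-detector noise; the object, pattern statistics, and noise level are assumed, and the paper's modified compressive-sensing recovery is more sophisticated than this):

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, W = 4000, 16, 16

# Object: a simple cross-shaped transmittance mask.
obj = np.zeros((H, W))
obj[8, 3:13] = 1.0
obj[3:13, 8] = 1.0

# Random speckle patterns illuminate the object; a bucket detector
# records the total transmitted light plus additive detector noise.
patterns = rng.random((N, H, W))
bucket = (patterns * obj).sum(axis=(1, 2)) + rng.normal(0.0, 1.0, N)

# Traditional ghost imaging: intensity-fluctuation correlation.
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)

# Object pixels should correlate more strongly than background pixels.
print(G[obj == 1].mean() > G[obj == 0].mean())   # True
```

The detector noise raises the fluctuating background of `G`; thresholding the bucket values before reconstruction, as the abstract describes, is one way to suppress that background.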
Fixing the Leak: Empirical Corrections for the Small Light Leak in Hinode XRT
NASA Astrophysics Data System (ADS)
Saar, Steven H.; DeLuca, E. E.; McCauley, P.; Kobelski, A.
2013-07-01
On May 9, 2012, the stray-light level of XRT on Hinode suddenly increased, consistent with the appearance of a pinhole in the entrance filter (possibly a micrometeorite breach). The effect of this event is most noticeable in the optical G-band data, which show an average light excess of ~30%. However, data in several of the X-ray filters are also affected, due to low-sensitivity "tails" of their filter responses into the visible. Observations taken with the G-band filter but with the visible light shutter (VLS) closed show a weak, slightly shifted, out-of-focus image, revealing the leaked light. The intensity of the leak depends on telescope pointing, dropping strongly for images taken off-disk. By monitoring light levels in the corners of full-Sun Ti-poly filter images, we determine the approximate time of the event: ~13:30 UT. We use pairs of images taken just before and after the filter breach to directly measure the leakage in two affected X-ray filters. We then develop a model using scaled, shifted, and smoothed versions of the VLS-closed images to remove the contamination. We estimate the uncertainties involved in our proposed correction procedure. This research was supported under NASA contract NNM07AB07C for Hinode XRT.
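The correction model described, subtracting a scaled, shifted, smoothed copy of a VLS-closed image, can be sketched on synthetic data (the shift, smoothing kernel, and leak strength are assumed; the scale is fit from a before/after image pair, as in the abstract):

```python
import numpy as np

def box_smooth(img, k=5):
    """Simple box blur via rolled sums (stands in for the paper's smoothing)."""
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (k * k)

rng = np.random.default_rng(1)
xray_pre = rng.random((64, 64))               # image taken before the breach

# Leak template: smoothed, slightly shifted VLS-closed (G-band) image.
vls = rng.random((64, 64))
template = np.roll(box_smooth(vls), (2, 3), axis=(0, 1))

true_scale = 0.30                             # assumed leak strength
xray_post = xray_pre + true_scale * template  # image taken after the breach

# Fit the scale from the before/after pair by least squares ...
diff = xray_post - xray_pre
scale = (diff * template).sum() / (template ** 2).sum()

# ... and remove the contamination.
corrected = xray_post - scale * template
print(round(scale, 3), np.allclose(corrected, xray_pre))
```

In practice the pointing-dependent leak intensity means the fitted scale would vary with the observation, which is one source of the quoted correction uncertainty.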
USDA-ARS?s Scientific Manuscript database
Hyperspectral microscope imaging (HMI) has the potential to classify foodborne pathogenic bacteria at cell level by combining microscope images with a spectrophotometer. In this study, the spectra generated from HMIs of five live Salmonella serovars from two light sources, metal halide (MH) and tun...
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
This work addresses a dual-channel telescopic imaging system composed of an infrared imaging system, a low-light-level imaging system, and an image fusion module. In the fusion of low-light-level and infrared images, clear source images make it much easier to obtain high-definition fused images. When targets from 15 m to infinity are imaged, focusing is needed to ensure the imaging quality of the dual-channel system; therefore, a new type of synchronous focusing mechanism is designed. The mechanism realizes focusing by translating the two imaging devices synchronously, and mainly comprises a lead-screw-and-nut structure, a shaft-hole fit structure, and a spring-loaded steel-ball anti-backlash structure. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the mechanism are introduced in detail, and the focusing range is analyzed. Experimental results show that the synchronous focusing mechanism features an ingenious design, high focusing accuracy, and stable, reliable operation.
Dastiridou, Anna; Marion, Kenneth; Niemeyer, Moritz; Francis, Brian; Sadda, Srinivas; Chopra, Vikas
2018-04-11
To investigate the effects of ambient light level variation on spectral domain anterior segment optical coherence tomography (SD-OCT)-derived anterior chamber angle metrics in Caucasians versus Asians. Caucasian (n = 24) and Asian participants of Chinese ancestry (n = 24) with open angles on gonioscopy had one eye imaged twice at five strictly controlled ambient light levels. Ethnicity was self-reported. Light levels were strictly controlled using a light meter at 1.0, 0.75, 0.5, 0.25, and 0 foot-candle illumination levels. SD-OCT 5-line raster scans at the inferior 270° irido-corneal angle were measured by two trained, masked graders from the Doheny Image Reading Center using customized Image-J software. Schwalbe's line-angle opening distance (SL-AOD) and SL-trabecular iris space area (SL-TISA) at different light meter readings (LMRs) were compared between the two groups. Baseline light SL-AOD and SL-TISA measured 0.464 ± 0.115 mm and 0.351 ± 0.110 mm² in the Caucasian group, and 0.344 ± 0.118 mm and 0.257 ± 0.092 mm² in the Asian group, respectively. SL-AOD and SL-TISA at each LMR were significantly larger in the Caucasian group compared to the Asian group (p < 0.05). Despite this difference in angle size between the groups, there were no statistically significant differences in the degree of change in angle parameters from light to dark (% changes in SL-AOD or SL-TISA between the two groups were statistically similar, with all p-values >0.3). SL-based angle dimensions using SD-OCT are sensitive to changes in ambient illumination in participants with Caucasian and Asian ancestry. Although Caucasian eyes had larger baseline angle opening under bright light conditions, the light-to-dark change in angle dimensions was similar in the two groups.
Design of a Borescope for Extravehicular Non-Destructive Applications
NASA Technical Reports Server (NTRS)
Bachnak, Rafic
2003-01-01
Anomalies such as corrosion, structural damage, misalignment, cracking, stress fractures, pitting, or wear can be detected and monitored with the aid of a borescope. A borescope requires a source of light for proper operation. Today's lighting technology market consists of incandescent lamps, fluorescent lamps, and other types of electric-arc and electric-discharge vapor lamps. Recent advances in LED technology have made LEDs viable for a number of applications, including vehicle stoplights, traffic lights, machine-vision inspection, illumination, and street signs. LEDs promise significant reduction in power consumption compared to other sources of light. This project focused on comparing images taken by the Olympus IPLEX using two different light sources. One of the sources is the 50-W internal metal halide lamp and the other is a 1-W LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histogram, intensity profile along a line segment, and edge detection. Also, images were qualitatively compared using image registration and transformation [1]. The gray-level histogram, edge detection, image profile, and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. Analysis using pattern-recognition techniques such as Eigenfaces and Gaussian pyramids, as used in face recognition, may be more useful.
InGaAs focal plane arrays for low-light-level SWIR imaging
NASA Astrophysics Data System (ADS)
MacDougal, Michael; Hood, Andrew; Geske, Jon; Wang, Jim; Patel, Falgun; Follman, David; Manzo, Juan; Getty, Jonathan
2011-06-01
Aerius Photonics will present their latest developments in large InGaAs focal plane arrays (FPAs), which are used for low-light-level imaging in the short-wavelength infrared (SWIR) regime. Imaging in both 1280x1024 and 640x512 formats will be shown, along with FPA characterization including dark current measurements. Aerius will also show results from the development of SWIR FPAs for high operating temperatures, including imagery and dark current data. Finally, Aerius will show results of using the SWIR camera with Aerius' SWIR illuminators based on VCSEL technology.
Resonant imaging of carotenoid pigments in the human retina
NASA Astrophysics Data System (ADS)
Gellermann, Werner; Ermakov, Igor V.; McClane, Robert W.
2002-06-01
We have generated high spatial resolution images showing the distribution of carotenoid macular pigments in the human retina using Raman spectroscopy. A low level of macular pigments is associated with an increased risk of developing age-related macular degeneration, a leading cause of irreversible blindness. Using excised human eyecups and resonant excitation of the pigment molecules with narrow bandwidth blue light from a mercury arc lamp, we record Raman images originating from the carbon-carbon double bond stretch vibrations of lutein and zeaxanthin, the carotenoids comprising human macular pigments. Our Raman images reveal significant differences among subjects, both in regard to absolute levels as well as spatial distribution within the macula. Since the light levels used to obtain these images are well below established safety limits, this technique holds promise for developing a rapid screening diagnostic in large populations at risk for vision loss from age-related macular degeneration.
Improved defect analysis of Gallium Arsenide solar cells using image enhancement
NASA Technical Reports Server (NTRS)
Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.
1989-01-01
A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.
Two-mode squeezed light source for quantum illumination and quantum imaging
NASA Astrophysics Data System (ADS)
Masada, Genta
2015-09-01
We started to research quantum illumination radar and quantum imaging by utilizing a high-quality continuous-wave two-mode squeezed light source as a quantum entanglement resource. Two-mode squeezed light is a macroscopic quantum entangled state of the electromagnetic field and shows strong correlation between the quadrature phase amplitudes of each optical field. One of the most effective methods to generate two-mode squeezed light is combining two independent single-mode squeezed lights using a beam splitter with a relative phase of 90 degrees between the optical fields. As a first stage of our work we are developing a two-mode squeezed light source for exploring the possibility of quantum illumination radar and quantum imaging. In this article we introduce the current status of our experimental investigation of single-mode squeezed light. We utilize a sub-threshold optical parametric oscillator with a bow-tie configuration, which includes a periodically poled potassium titanyl phosphate crystal as the nonlinear optical medium. We observed noise levels of -3.08 ± 0.13 dB for the squeezed quadrature and +9.29 ± 0.13 dB for the anti-squeezed quadrature. We also demonstrated remote tuning of the squeezing level of the light source, a technology for tuning the quantum entanglement in order to adapt to the actual environmental conditions.
Images of photoreceptors in living primate eyes using adaptive optics two-photon ophthalmoscopy
Hunter, Jennifer J.; Masella, Benjamin; Dubra, Alfredo; Sharma, Robin; Yin, Lu; Merigan, William H.; Palczewska, Grazyna; Palczewski, Krzysztof; Williams, David R.
2011-01-01
In vivo two-photon imaging through the pupil of the primate eye has the potential to become a useful tool for functional imaging of the retina. Two-photon excited fluorescence images of the macaque cone mosaic were obtained using a fluorescence adaptive optics scanning laser ophthalmoscope, overcoming the challenges of a low numerical aperture, imperfect optics of the eye, high required light levels, and eye motion. Although the specific fluorophores are as yet unknown, strong in vivo intrinsic fluorescence allowed images of the cone mosaic. Imaging intact ex vivo retina revealed that the strongest two-photon excited fluorescence signal comes from the cone inner segments. The fluorescence response increased following light stimulation, which could provide a functional measure of the effects of light on photoreceptors. PMID:21326644
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system built around a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light can be flexibly modulated in the DMD camera, enabling the camera pixels to always receive a reasonable exposure through DMD pixel-level modulation. More importantly, it allows different light-intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light-intensity control algorithm to effectively modulate different light intensities and recover high dynamic range images. Experiments demonstrate the effectiveness of our method on different objects.
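The principle of per-pixel coded exposure can be illustrated with a toy model in which each pixel receives the longest exposure that avoids saturating its full-well capacity, and radiance is then recovered as counts divided by exposure time. This is a simplified sketch under stated assumptions (continuous exposure times, noiseless sensor), not the authors' DMD control algorithm:

```python
import numpy as np

def hdr_from_coded_exposure(radiance, t_max=16.0, full_well=1.0):
    """Per-pixel coded exposure sketch: each pixel gets the longest
    exposure that avoids saturation, then scene radiance is recovered
    as measured counts divided by that pixel's exposure time."""
    # per-pixel exposure time, capped so radiance * t never exceeds full well
    t = np.minimum(t_max, full_well / np.maximum(radiance, 1e-12))
    counts = np.clip(radiance * t, 0.0, full_well)  # simulated sensor reading
    return counts / t                               # radiance estimate
```

Because bright pixels integrate briefly while dark pixels integrate long, scene radiances spanning four orders of magnitude are all recovered without clipping.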
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
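The core of time-series differential photometry, dividing the target star's flux by the summed flux of comparison stars so that shared atmospheric variations cancel, can be sketched in a few lines. This is a simplified illustration of the technique, not AIJ's actual implementation:

```python
import numpy as np

def differential_light_curve(target_flux, comp_fluxes):
    """Differential photometry sketch: divide the target's flux by the
    summed flux of the comparison stars at each epoch, then normalize
    the result to its median so out-of-transit flux is ~1."""
    rel = target_flux / comp_fluxes.sum(axis=1)
    return rel / np.median(rel)
```

A shared transparency change multiplies target and comparisons alike and divides out, while an intrinsic event such as a 1% transit dip survives in the normalized curve.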
Ambient lighting: setting international standards for the viewing of softcopy chest images
NASA Astrophysics Data System (ADS)
McEntee, Mark F.; Ryan, John; Evanoff, Micheal G.; Keeling, Aoife; Chakraborty, Dev; Manning, David; Brennan, Patrick C.
2007-03-01
Clinical radiological judgments are increasingly being made on softcopy LCD monitors. These monitors are found throughout the hospital environment in radiological reading rooms, outpatient clinics, and wards, which means that ambient lighting where clinical judgments are made from images can vary widely. Inappropriate ambient lighting has several deleterious effects: monitor reflections reduce contrast; veiling glare adds brightness; dynamic range and detectability of low-contrast objects are limited. Radiological images displayed on LCDs are more sensitive to the impact of inappropriate ambient lighting, and with these devices the problems described above are often more evident. The current work aims to provide data on optimum ambient lighting, based on lesions within chest images. The data provided may be used for the establishment of workable ambient lighting standards. Ambient lighting at 30 cm from the monitor was set at 480 lux (office lighting), 100 lux (WHO recommendation), 40 lux, and <10 lux. All monitors were calibrated to the DICOM part 14 GSDF. Sixty radiologists were presented with 30 chest images, 15 images having simulated nodular lesions of varying subtlety and size. Lesions were positioned in accordance with typical clinical presentation and were validated radiologically. Each image was presented for 30 seconds and viewers were asked to identify and score any visualized lesion from 1-4 to indicate confidence level of detection. At the end of the session, sensitivity and specificity were calculated. Analysis of the data suggests that visualization of chest lesions is affected by inappropriate lighting, with chest radiologists demonstrating greater ambient lighting dependency. JAFROC analyses are currently being performed.
Light Microscopy Module (LMM)-Emulator
NASA Technical Reports Server (NTRS)
Levine, Howard G.; Smith, Trent M.; Richards, Stephanie E.
2016-01-01
The Light Microscopy Module (LMM) is a microscope facility developed at Glenn Research Center (GRC) that provides researchers with powerful imaging capability onboard the International Space Station (ISS). The LMM's hardware can be reconfigured on-orbit to accommodate a wide variety of investigations, with the capability of remotely acquiring and downloading digital images across multiple levels of magnification.
NASA Astrophysics Data System (ADS)
Wade, Alex Robert; Fitzke, Frederick W.
1998-08-01
We describe an image processing system which we have developed to align autofluorescence and high-magnification images taken with a laser scanning ophthalmoscope. The low signal-to-noise ratio of these images makes pattern recognition a non-trivial task. However, once n images are aligned and averaged, the noise level drops by a factor of √n and the image quality is improved. We include examples of autofluorescence images and images of the cone photoreceptor mosaic obtained using this system.
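Averaging n aligned frames of independent, zero-mean noise reduces the noise standard deviation by a factor of √n (equivalently, SNR improves by √n). A minimal NumPy simulation makes the scaling concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.linspace(0.0, 1.0, 256)   # 1-D stand-in for one aligned image row
n = 100
# n noisy "exposures" of the same signal, noise std = 0.2 per frame
frames = signal + rng.normal(0.0, 0.2, (n, signal.size))

single_noise = (frames[0] - signal).std()        # noise of one frame
avg_noise = (frames.mean(axis=0) - signal).std() # noise after averaging
print(single_noise / avg_noise)                  # ~ sqrt(100) = 10
```

The ratio hovers around 10 for n = 100, confirming the √n improvement rather than a full factor of n.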
NASA Astrophysics Data System (ADS)
Lai, Puxiang; Suzuki, Yuta; Xu, Xiao; Wang, Lihong V.
2013-07-01
Scattering dominates light propagation in biological tissue, and therefore restricts both resolution and penetration depth in optical imaging within thick tissue. As photons travel into the diffusive regime, typically 1 mm beneath human skin, their trajectories transition from ballistic to diffusive due to the increased number of scattering events, which makes it impossible to focus, much less track, photon paths. Consequently, imaging methods that rely on controlled light illumination are ineffective in deep tissue. This problem has recently been addressed by a novel method capable of dynamically focusing light in thick scattering media via time reversal of ultrasonically encoded (TRUE) diffused light. Here, using photorefractive materials as phase conjugate mirrors, we show a direct visualization and dynamic control of optical focusing with this light delivery method, and demonstrate its application for focused fluorescence excitation and imaging in thick turbid media. These abilities are increasingly critical for understanding the dynamic interactions of light with biological matter and processes at different system levels, as well as their applications for biomedical diagnosis and therapy.
Yoshioka, Yosuke; Nakayama, Masayoshi; Noguchi, Yuji; Horie, Hideki
2013-01-01
Strawberry is rich in anthocyanins, which are responsible for the red color, and contains several colorless phenolic compounds. Among the colorless phenolic compounds, some, such as hydroxycinnamic acid derivatives, emit blue-green fluorescence when excited with ultraviolet (UV) light. Here, we investigated the effectiveness of image analyses for estimating the levels of anthocyanins and UV-excited fluorescent phenolic compounds in fruit. The fruit skin and cut surface of 12 cultivars were photographed under visible and UV light conditions; colors were evaluated based on the color components of the images. The levels of anthocyanins and UV-excited fluorescent compounds in each fruit were also evaluated by spectrophotometric and high performance liquid chromatography (HPLC) analyses, respectively, and the relationships between these levels and the image data were investigated. Red depth of the fruits differed greatly among the cultivars, and anthocyanin content was well estimated based on the color values of the cut surface images. Strong UV-excited fluorescence was observed on the cut surfaces of several cultivars, and the grayscale values of the UV-excited fluorescence images were markedly correlated with the levels of those fluorescent compounds as evaluated by HPLC analysis. These results indicate that image analyses can be used to select promising genotypes rich in anthocyanins and fluorescent phenolic compounds. PMID:23853516
Image processing operations achievable with the Microchannel Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.
1980-01-01
The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.
Advances in detection of diffuse seafloor venting using structured light imaging.
NASA Astrophysics Data System (ADS)
Smart, C.; Roman, C.; Carey, S.
2016-12-01
Systematic remote detection and high-resolution mapping of low-temperature diffuse hydrothermal venting is inefficient and not currently tractable using traditional remotely operated vehicle (ROV) mounted sensors. Preliminary results for hydrothermal vent detection using a structured light laser sensor were presented in 2011 and published in 2013 (Smart), with continual advancements occurring in the interim. As the structured light laser passes over active venting, the projected laser line effectively blurs due to the associated turbulence and density anomalies in the vent fluid. The degree of laser disturbance is captured by a camera collecting images of the laser line at 20 Hz. Advancements in detecting the laser-fluid interaction include extensive normalization of the collected laser data and the implementation of a support vector machine algorithm to develop a classification routine: image data collected over a hydrothermal vent field are labeled as seafloor, bacteria, or a location of venting. The results can then be correlated with stereo images, bathymetry, and backscatter data. This sensor is a component of an ROV-mounted imaging suite that also includes stereo cameras and a multibeam sonar system. Originally developed for bathymetric mapping, the structured light laser sensor and the other imaging suite components are capable of creating visual and bathymetric maps with centimeter-level resolution. Surveys follow a standard "mowing-the-lawn" pattern, completing a 30 m x 30 m survey in under an hour. The resulting co-registered data include multibeam and structured light laser bathymetry and backscatter, stereo images, and vent detections. This system allows efficient exploration of areas with diffuse and small point-source hydrothermal venting, increasing the effectiveness of scientific sampling and observation.
Recent vent detection results collected during the 2013-2015 E/V Nautilus seasons will be presented. Smart, C. J., Roman, C., and Carey, S. N. (2013), Detection of diffuse seafloor venting using structured light imaging, Geochemistry, Geophysics, Geosystems, 14, 4743-4757.
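The blurring of the projected laser line over active venting can be turned into a simple per-column feature: the vertical spread (standard deviation) of the line in each image column. The feature below is an illustrative sketch of the kind of input a classifier could consume, not the authors' actual normalization or SVM pipeline:

```python
import numpy as np

def line_spread(frame):
    """Per-column vertical spread (std) of the imaged laser line.
    Turbulence from venting blurs the line, so a larger spread
    suggests active flow (feature sketch only)."""
    rows = np.arange(frame.shape[0])[:, None].astype(float)
    w = frame / frame.sum(axis=0, keepdims=True)  # per-column intensity weights
    mean = (rows * w).sum(axis=0)                 # line center per column
    var = ((rows - mean) ** 2 * w).sum(axis=0)    # weighted variance per column
    return np.sqrt(var)
```

A crisp line over quiescent seafloor yields spreads near zero, while a line smeared over several rows by turbulent fluid yields clearly larger values.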
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use grayscale images. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant-color-temperature lighting for reliable image analysis.
Ito, Yuhei; Suzuki, Kyouichi; Ichikawa, Tsuyoshi; Watanabe, Yoichi; Sato, Taku; Sakuma, Jun; Saito, Kiyoshi
2018-06-12
Laser surgical microscopes should enable uniform illumination of the operative field and require less luminous energy than existing xenon surgical microscopes. Our objective was to examine the utility of laser illumination in fluorescence cerebral angiography. Fluorescein sodium (fluorescein) was used as the fluorescent dye. We first compared the clarity of cerebral blood flow images collected by fluorescence angiography between the laser and xenon illumination methods. We then assessed use of the laser illuminator for simultaneous observation of blood flow and surrounding structures during fluorescence angiography, and evaluated the usefulness of the selected excitation light in clinical cases. Fluorescence angiography using a blue laser for excitation provided clearer, higher-contrast blood flow images than blue light generated from a xenon lamp. Furthermore, illumination with excitation light consisting of a combination of three lasers (a higher level of blue light, no green light, and a lower level of red light) enabled both blood flow and surrounding structures to be observed through the microscope directly by the surgeon. Laser-illuminated fluorescence angiography thus provides high-clarity, high-contrast images of cerebral blood flow, and a laser providing strong blue light and weak red light as excitation enables simultaneous visual observation of fluorescent blood flow and surrounding structures through a surgical microscope. Overall, these data suggest that laser surgical microscopes are useful for both ordinary operative manipulations and fluorescence angiography.
Image classification at low light levels
NASA Astrophysics Data System (ADS)
Wernick, Miles N.; Morris, G. Michael
1986-12-01
An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
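The maximum-likelihood reference function for two-class sorting of photon-limited images is the log-likelihood ratio of the normalized class templates; classification reduces to correlating the photon counts with that reference. A minimal sketch, assuming the two templates carry equal total intensity so the Poisson normalization terms cancel at a zero threshold:

```python
import numpy as np

def classify(photon_counts, class_a, class_b):
    """Two-class maximum-likelihood decision for a photon-limited image
    (sketch): correlate the counts with the log-likelihood-ratio
    reference log(f_a / f_b) and threshold the score at zero."""
    fa = class_a / class_a.sum()       # normalized class-A template
    fb = class_b / class_b.sum()       # normalized class-B template
    ref = np.log(fa / fb)              # ML reference function
    score = (photon_counts * ref).sum()
    return 'A' if score > 0 else 'B'
```

Even a sparse sampling of photons pushes the correlation score toward the correct sign, which is why the decision can be made in milliseconds at low light levels.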
Safety assessment in macaques of light exposures for functional two-photon ophthalmoscopy in humans
Schwarz, Christina; Sharma, Robin; Fischer, William S.; Chung, Mina; Palczewska, Grazyna; Palczewski, Krzysztof; Williams, David R.; Hunter, Jennifer J.
2016-01-01
Two-photon ophthalmoscopy has potential for in vivo assessment of function of normal and diseased retina. However, light safety of the sub-100 fs laser typically used is a major concern and safety standards are not well established. To test the feasibility of safe in vivo two-photon excitation fluorescence (TPEF) imaging of photoreceptors in humans, we examined the effects of ultrashort pulsed light and the required light levels with a variety of clinical and high resolution imaging methods in macaques. The only measure that revealed a significant effect due to exposure to pulsed light within existing safety standards was infrared autofluorescence (IRAF) intensity. No other structural or functional alterations were detected by other imaging techniques for any of the exposures. Photoreceptors and retinal pigment epithelium appeared normal in adaptive optics images. No effect of repeated exposures on TPEF time course was detected, suggesting that visual cycle function was maintained. If IRAF reduction is hazardous, it is the only hurdle to applying two-photon retinal imaging in humans. To date, no harmful effects of IRAF reduction have been detected. PMID:28018732
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Improving NIR snow pit stratigraphy observations by introducing a controlled NIR light source
NASA Astrophysics Data System (ADS)
Dean, J.; Marshall, H.; Rutter, N.; Karlson, A.
2013-12-01
Near-infrared (NIR) photography in a prepared snow pit measures mm-/grain-scale variations in snow structure, as reflectivity is strongly dependent on microstructure and grain size at NIR wavelengths. We explore using a controlled NIR light source to maximize the signal-to-noise ratio and provide uniform, diffuse incident light on the snow pit wall. NIR light fired from the flash is diffused across and reflected by an umbrella onto the snow pit; the lens filter transmits NIR light onto the spectrum-modified sensor of the DSLR camera. Lenses are designed to refract visible light properly, not NIR light, so a correction must be applied for the resulting NIR bright spot. To avoid the interpolation and debayering algorithms automatically performed on the images by programs like Adobe's Photoshop, the raw data are analyzed directly in MATLAB. NIR image data show a doubling of the amount of light collected in the same time for flash over ambient lighting. Transitions across layer boundaries in the flash-lit image are detailed by higher camera intensity values than in ambient-lit images. Curves plotted using the median intensity at each depth, normalized to the average profile intensity, show a separation between flash- and ambient-lit images in the upper 10-15 cm; the ambient-lit curve asymptotically approaches the level of the flash-lit curve below 15 cm. We hypothesize that the difference is caused by additional ambient light penetrating the upper 10-15 cm of the snowpack from above and transmitting through the wall of the snow pit. This indicates that combining NIR ambient and flash photography could be a powerful technique for studying penetration depth of radiation as a function of microstructure and grain size. The NIR flash images do not increase the relative contrast at layer boundaries; however, the flash more than doubles the amount of recorded light and controls layer noise as well as layer boundary transition noise.
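The normalized depth profile described above, median intensity at each depth divided by the mean of the profile, is straightforward to compute from an image array (a sketch of the profile step only; the raw-data handling and bright-spot correction are omitted):

```python
import numpy as np

def depth_profile(nir_image):
    """Median intensity at each depth (image row), normalized by the
    mean of the profile, for comparing flash- vs ambient-lit pit walls."""
    profile = np.median(nir_image, axis=1)  # one median per depth row
    return profile / profile.mean()
```

Normalizing by the profile mean removes overall exposure differences, so flash- and ambient-lit profiles can be compared shape-to-shape.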
Enhanced Beetle Luciferase for High-Resolution Bioluminescence Imaging
Nakajima, Yoshihiro; Yamazaki, Tomomi; Nishii, Shigeaki; Noguchi, Takako; Hoshino, Hideto; Niwa, Kazuki; Viviani, Vadim R.; Ohmiya, Yoshihiro
2010-01-01
We developed an enhanced green-emitting luciferase (ELuc) to be used as a bioluminescence imaging (BLI) probe. ELuc exhibits a light signal in mammalian cells that is over 10-fold stronger than that of the firefly luciferase (FLuc), which is the most widely used luciferase reporter gene. We showed that ELuc produces a strong light signal in primary cells and tissues and that it enables the visualization of gene expression with high temporal resolution at the single-cell level. Moreover, we successfully imaged the nucleocytoplasmic shuttling of importin α by fusing ELuc at the intracellular level. These results demonstrate that the use of ELuc allows a BLI spatiotemporal resolution far greater than that provided by FLuc. PMID:20368807
Complete erasing of ghost images caused by deeply trapped electrons on computed radiography plates
NASA Astrophysics Data System (ADS)
Ohuchi, H.; Kondo, Y.
2011-03-01
Ghost images, i.e., latent images that are unerasable with visible light (LIunVL) and reappearing images on computed radiography (CR) plates, were completely erased by simultaneously exposing the plates to filtered ultraviolet light and visible light. Three different types of CR plates (Agfa, Kodak, and Fuji) were irradiated with 50 kV X-ray beams in the dose range 8.1 mGy to 8.0 Gy, and then conventionally erased for 2 h with visible light. The remaining LIunVL could be erased by repeated 6-h simultaneous exposures to filtered ultraviolet light and visible light. After the sixth round of exposure, all the LIunVL in the three types of CR plates were erased to the same level as in an unirradiated plate, and no latent images reappeared after storage at 0°C for 14 days. The absorption spectra of deep centers were characterized using polychromatic ultraviolet light from a deep-ultraviolet lamp. The deep centers showed a dominant peak in the absorption spectra at around 324 nm for the Agfa and Kodak plates, and at around 320 nm for the Fuji plate, in each case followed by a few small peaks. After the CR plates were completely erased, these peaks were no longer observed.
Traffic analysis and control using image processing
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.
2017-11-01
This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. Captured images of the street are compared with reference images in order to determine the traffic level percentage and set the traffic-signal timing accordingly, reducing stoppage time at traffic lights. The concept addresses real-life street scenarios by enriching traffic lights with image receivers such as HD cameras and image processors. The input is imported into MATLAB and used to calculate the traffic on the roads; the results are then used to adjust the traffic-light timings on a particular street. Compared with other similar proposals, the added value lies in solving a real, large-scale instance.
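The comparison step, estimating a traffic level percentage from how much the current frame differs from a reference image of the empty street, can be sketched briefly. The paper works in MATLAB; the Python stand-in below, including the difference threshold and the linear timing map, is an illustrative assumption rather than the authors' algorithm:

```python
import numpy as np

def traffic_level(empty_road, current, diff_thresh=30):
    """Estimate traffic as the percentage of pixels that differ
    noticeably from a reference image of the empty street."""
    diff = np.abs(current.astype(int) - empty_road.astype(int))
    return 100.0 * (diff > diff_thresh).mean()

def green_time(level, t_min=10.0, t_max=60.0):
    """Map the traffic percentage linearly to a green-signal
    duration in seconds (hypothetical timing rule)."""
    return t_min + (t_max - t_min) * level / 100.0
```

A frame in which half the pixels are occupied by vehicles yields a 50% level and, under this rule, a 35-second green phase.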
Image transport through a disordered optical fibre mediated by transverse Anderson localization.
Karbasi, Salman; Frazier, Ryan J; Koch, Karl W; Hawkins, Thomas; Ballato, John; Mafi, Arash
2014-02-25
Transverse Anderson localization of light allows localized optical-beam transport through a transversely disordered and longitudinally invariant medium. Its successful implementation in disordered optical fibres recently resulted in the propagation of localized beams of radii comparable to those of conventional optical fibres. Here we demonstrate optical image transport using transverse Anderson localization of light. The image transport quality obtained in the polymer disordered optical fibre is comparable to or better than that of some of the best commercially available multicore image fibres, with less pixelation and higher contrast. It is argued that considerable improvement in image transport quality can be obtained in a disordered fibre made from a glass matrix with near-wavelength-size randomly distributed air-holes at an air-hole fill-fraction of 50%. Our results open the way to device-level implementation of transverse Anderson localization of light, with potential applications in biological and medical imaging.
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
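The weighted-centroid step of the reconstruction above can be sketched as follows. This is a minimal illustration of centroid-based photon localization only; the projection-plane slicing and regularity screening of the full algorithm are omitted, and the small-patch layout around a detected event is an assumption.

```python
import numpy as np

def photon_centroid(patch: np.ndarray) -> tuple:
    """Sub-pixel photon position as the intensity-weighted centroid
    (row, col) of a small patch cut around a detected photon event."""
    patch = patch.astype(float)
    total = patch.sum()
    # Coordinate grids matching the patch shape.
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (ys * patch).sum() / total, (xs * patch).sum() / total
```

Accumulating such sub-pixel positions from many short-exposure photon frames onto a finer grid is what allows the reconstructed image to exceed the resolution of any single frame.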
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
NASA Astrophysics Data System (ADS)
Lawrence, Stephen S.; Hyder, Ali; Sugerman, Ben; Crotts, Arlin P. S.
2017-06-01
We report on our ongoing use of Hubble Space Telescope (HST) imaging to monitor the scattered light echoes of recent heavily-extincted supernovae in two nearby, albeit unusual, galaxies. Supernova 2014J was a highly-reddened Type Ia supernova that erupted in the nearby irregular star-forming galaxy M 82 in 2014 January. It was discovered to have a light echo by Crotts (2016) in early-epoch HST imaging and has been further described by Yang et al. (2017) based on HST imaging through late 2014. Our ongoing monitoring in the WFC3 F438W, F555W, and F814W filters shows that, consistent with Crotts (2016) and Yang et al. (2017), throughout 2015 and 2016 the main light echo arc expanded through a dust complex located approximately 230 pc in the foreground of the supernova. This main light echo has, however, faded dramatically in our most recent HST imaging from 2017 March. The supernova itself had also faded to undetectable levels by 2017 March. Supernova 2016adj is a highly-reddened core-collapse supernova that erupted inside the unusual dust lane of the nearby giant elliptical galaxy Centaurus A (NGC 5128) in 2016 February. It was discovered to have a light echo by Sugerman & Lawrence (2016) in early-epoch HST imaging in 2016 April. Our ongoing monitoring in the WFC3 F438W, F547M, and F814W filters shows a slightly elliptical series of light echo arc segments hosted by a tilted dust complex ranging approximately 150-225 pc in the foreground of the supernova. The supernova itself had also faded to undetectable levels by 2017 April. References: Crotts, A. P. S., ApJL, 804, L37 (2016); Yang et al., ApJ, 834, 60 (2017); Sugerman, B. and Lawrence, S., ATel #8890 (2016).
Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.
1993-01-01
The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS* 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
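The column-parallel first-order oversampled sigma-delta conversion described above can be illustrated with a behavioural sketch. This is a software model only, not the chip's circuit; the 0.5 quantizer threshold and the simple bit-counting decimator are assumptions for illustration.

```python
def sigma_delta_code(x: float, osr: int = 1024) -> int:
    """First-order 1-bit sigma-delta modulation of a constant input
    x in [0, 1], followed by counting the output bits over `osr`
    cycles (a crude decimation filter). For osr = 1024 the count is
    a ~10-bit digital code proportional to x."""
    integrator = 0.0
    bit = 0
    ones = 0
    for _ in range(osr):
        integrator += x - bit              # accumulate input minus 1-bit DAC feedback
        bit = 1 if integrator >= 0.5 else 0  # 1-bit quantizer
        ones += bit
    return ones
```

The density of 1s in the bitstream tracks the input level, which is why a single shared counter per column suffices to produce the multiplexed 10-bit outputs.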
THELMA: a mobile app for crowdsourcing environmental data
NASA Astrophysics Data System (ADS)
Hintz, Kenneth J.; Hintz, Christopher J.; Almomen, Faris; Adounvo, Christian; D'Amato, Michael
2014-06-01
The collection of environmental light pollution data related to sea turtle nesting sites is a laborious and time consuming effort entailing the use of several pieces of measurement equipment, their transportation and calibration, the manual logging of results in the field, and subsequent transfer of the data to a computer for post-collection analysis. Serendipitously, the current generation of mobile smart phones (e.g., iPhone® 5) contains the requisite measurement capability, namely location data in aided GPS coordinates, magnetic compass heading, and elevation at the time an image is taken, image parameter data, and the image itself. The Turtle Habitat Environmental Light Measurement App (THELMA) is a mobile phone app whose graphical user interface (GUI) guides an untrained user through the image acquisition process in order to capture 360° of images with pointing guidance. It subsequently uploads the user-tagged images, all of the associated image parameters, and position, azimuth, elevation metadata to a central internet repository. Provision is also made for the capture of calibration images and the review of images before upload. THELMA allows for inexpensive, highly-efficient, worldwide crowdsourcing of calibratable beachfront lighting/light pollution data collected by untrained volunteers. This data can be later processed, analyzed, and used by scientists conducting sea turtle conservation in order to identify beach locations with hazardous levels of light pollution that may alter sea turtle behavior and necessitate human intervention after hatchling emergence.
Single photon detection imaging of Cherenkov light emitted during radiation therapy
NASA Astrophysics Data System (ADS)
Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.
2018-03-01
Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, with applications in patient delivery verification and regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which provide (1) nanosecond time gating and (2) amplification by 10^3-10^4; together these enable (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) suppression of background light from the ambient room. However, the capability to achieve single-photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera imaging was quantitatively characterized from the Cherenkov images with image processing. The signal seen at normal gain levels appears as a blur of about 90 counts in the CCD detector, after passing through the chain of photocathode detection, amplification through a microchannel plate PMT, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed-pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
Research and application on imaging technology of line structure light based on confocal microscopy
NASA Astrophysics Data System (ADS)
Han, Wenfeng; Xiao, Zexin; Wang, Xiaofen
2009-11-01
In 2005, the theory of line structure light confocal microscopy was first put forward in China by Xingyu Gao and Zexin Xiao at the Institute of Opt-mechatronics of Guilin University of Electronic Technology. Although the lateral resolution of line confocal microscopy can only reach or approach that of traditional point confocal microscopy, it has two advantages over the traditional approach. First, by substituting line scanning for point scanning, plane imaging requires only one-dimensional scanning, which greatly improves imaging speed and simplifies the scanning mechanism. Second, the light throughput is greatly improved by substituting a detection slit for the detection pinhole, so a low-illumination CCD can be used directly to collect images instead of a photoelectric intensifier. In order to apply line confocal microscopy in a practical system, and based on further research on its theory, an imaging technology of line structure light is put forward under the conditions of confocal microscopy. Its validity and reliability are verified by experiments.
Entangled-photon compressive ghost imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerom, Petros; Chan, Kam Wai Clifford; Howell, John C.
2011-12-15
We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.
Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A
2016-08-01
Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis from endoscopic images depends heavily on the quality of the captured images. Along with image resolution and frame rate, the brightness of the image is an important parameter influencing image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, with modulated pulses controlling the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE: the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. Then an adaptive sigmoid function is used to find the optimized brightness level, and accordingly a new value of the duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules like Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental conditions, and as a result, good-quality images are captured with an average 40% brightness level, which saves power consumption of the capsule.
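The region-based sigmoid control loop described above can be sketched as follows. This is a minimal sketch under stated assumptions: the 40% target level, the gain constant, and the 8-bit pixel scale are illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def next_duty_cycle(image: np.ndarray, target: float = 0.40,
                    gain: float = 8.0) -> float:
    """LED duty cycle (0-1) for the next frame, computed from the mean
    brightness of four equal regions of the current 8-bit frame via a
    sigmoid centred on a target brightness level."""
    h, w = image.shape[:2]
    regions = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
               image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    brightness = float(np.mean([r.mean() for r in regions])) / 255.0
    # Darker than target -> duty cycle rises toward 1 (more light);
    # brighter than target -> duty cycle falls toward 0 (save power).
    return 1.0 / (1.0 + np.exp(gain * (brightness - target)))
```

The sigmoid makes the response saturate gracefully: a badly underexposed frame drives the duty cycle near its maximum, while a frame already at the target brightness holds it near 50%.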
2016-04-18
ISS047e066551 (04/18/2016) --- NASA astronaut Jeff Williams configures the station’s Light Microscopy Module (LMM), a modified commercial, highly flexible, state-of-the-art light imaging microscope facility that provides researchers with powerful diagnostic hardware and software. The LMM enables novel research of microscopic phenomena in microgravity, with the capability of remotely acquiring and downloading digital images and videos across many levels of magnification.
Hubble Provides Infrared View of Jupiter's Moon, Ring, and Clouds
NASA Technical Reports Server (NTRS)
1997-01-01
Probing Jupiter's atmosphere for the first time, the Hubble Space Telescope's new Near Infrared Camera and Multi-Object Spectrometer (NICMOS) provides a sharp glimpse of the planet's ring, moon, and high-altitude clouds.
The presence of methane in Jupiter's hydrogen- and helium-rich atmosphere has allowed NICMOS to plumb Jupiter's atmosphere, revealing bands of high-altitude clouds. Visible light observations cannot provide a clear view of these high clouds because the underlying clouds reflect so much visible light that the higher-level clouds are indistinguishable from the lower layer. The methane gas between the main cloud deck and the high clouds absorbs the reflected infrared light, allowing those clouds that are above most of the atmosphere to appear bright. Scientists will use NICMOS to study the high-altitude portion of Jupiter's atmosphere, then analyze those images along with visible light information to compile a clearer picture of the planet's weather. Clouds at different levels tell unique stories. On Earth, for example, ice crystal (cirrus) clouds are found at high altitudes while water (cumulus) clouds are at lower levels.
Besides showing details of the planet's high-altitude clouds, NICMOS also provides a clear view of the ring and the moon Metis. Jupiter's ring plane, seen nearly edge-on, is visible as a faint line on the upper right portion of the NICMOS image. Metis can be seen in the ring plane (the bright circle on the ring's outer edge). The moon is 25 miles wide and about 80,000 miles from Jupiter.
Because of the near-infrared camera's narrow field of view, this image is a mosaic constructed from three individual images taken Sept. 17, 1997. The color intensity was adjusted to accentuate the high-altitude clouds. The dark circle on the disk of Jupiter (center of image) is an artifact of the imaging system. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
Brand, Christine; Burkhardt, Eva; Schaeffel, Frank; Choi, Jeong Won; Feldkaemper, Marita Pauline
2005-04-28
To analyze mRNA expression changes of Egr-1, VIP, and Shh under different light and treatment conditions in mice. The mRNA expression levels of the three genes, and additionally the Egr-1 protein expression, were compared in form-deprived eyes and eyes with normal vision. Moreover, the influence of dark-to-light and light-to-dark transitions and of changes in retinal illumination on mRNA levels was investigated. Form deprivation of mice was induced by fitting frosted diffusers over one eye and an attenuation-matched neutral density (ND) filter over the other eye. To measure the effects of retinal illumination changes on mRNA expression, animals were bilaterally fitted with different ND filters. Semiquantitative real-time RT-PCR was used to measure the mRNA levels, and immunohistochemistry was applied to localize and detect Egr-1 protein. The expression levels of both Egr-1 mRNA and protein were reduced in form-deprived eyes compared to their fellow eyes after 30 min and 1 h, respectively. Egr-1 mRNA was strikingly upregulated both after dark-to-light and light-to-dark transitions, whereas minor changes in retinal illumination by covering the eyes with neutral density filters did not alter Egr-1 mRNA expression. In mice, the mRNA levels of VIP and Shh were not affected by form deprivation, but they were found to be regulated depending on the time of day. Both Egr-1 mRNA and protein expression levels were strongly regulated by light, especially by transitions between light and darkness. Image contrast may exert an additional influence on mRNA and protein expression of Egr-1, particularly in the cells in the ganglion cell layer and in bipolar cells.
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
Low-level-light (LLL) images contain rich information on environment details but are easily affected by the weather: in smoke, rain, cloud, or fog, much target information is lost. Infrared imaging, which senses the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its image contrast and resolution are poor, its ability to acquire target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor and exploit the advantages of each. We first present the hardware design of the fusion circuit. Then, by calculating the recognition probability of a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is markedly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Image fusion can therefore significantly increase recognition probability and improve detection efficiency.
Lobo, I C; Lemos, A L B; Aguiar, M F
2015-01-01
Objectives: This study aimed to assess how details on dental restorative composites with different radio-opacities are perceived under the influence of ambient light. Methods: Resin composite step wedges (six steps, each 1-mm thick) were custom manufactured from three materials: (M1) Filtek™ Z350 (3M/ESPE, Saint Paul, MN); (M2) Prisma AP.H™ (Dentsply International Inc., Brazil) and (M3) Glacier® (SDI Limited, Victoria, Australia). Each step of the manufactured wedge received three standardized drillings of different diameters and depths. An aluminium (Al) step wedge with 12 steps (1-mm thick) was used as an internal standard to calculate the radio-opacity as pixel intensity values. Standardized digital images of the set were obtained, and 11 observers independently examined the images, noting the number of noticeable details (drillings) under 2 dissimilar conditions: in a light environment (light turned on in the room) and in low-light conditions (light in the room turned off). The differences between images in terms of the number of details observed were statistically compared using ANOVA, Cronbach's alpha coefficient and Wilcoxon and Kruskal-Wallis tests, with a significance level of 5% (α = 0.05). Results: M2 showed the highest radio-opacity, M1 intermediate radio-opacity and M3 the lowest, although the differences among the three materials were not significant (p > 0.05). The differences in radio-opacity resulted in a significant variation (p < 0.05) in the number of noticeable details in the image, which was influenced by the characteristics of the details in addition to the ambient-light level. Conclusions: The radio-opacity of materials and ambient light can affect the perception of details in digital radiographic images. PMID:25629721
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color has come to machine vision, certain requirements have emerged for the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. Good dynamic range and linear response are necessary for color machine vision, and their importance grows further when the image is converted to another color space: some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the three times greater data amount per image; the same applied to the three times greater memory requirement. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis on color images can in fact be easier and faster than on a similar gray-level image because of the additional information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology will eventually bring solutions for color imaging.
PlenoPatch: Patch-Based Plenoptic Image Manipulation.
Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min
2017-05-01
Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.
NASA Astrophysics Data System (ADS)
Li, Tianmeng; Hui, Hui; Ma, He; Yang, Xin; Tian, Jie
2018-02-01
Non-invasive imaging technologies, such as magnetic resonance imaging (MRI) and optical multimodality imaging methods, are commonly used for diagnosing and monitoring the development of inflammatory bowel disease (IBD). These in vivo imaging methods can provide information on the morphological changes of IBD at the macro-scale, but it is difficult to investigate the intestinal wall at the molecular and cellular level. State-of-the-art light-sheet and two-photon microscopy can acquire the changes of IBD at the micro-scale. The aim of this work is to evaluate the size of the enterocoel and the thickness of the colon wall using MRI for in vivo imaging, and light-sheet and two-photon microscopy for in vitro imaging. C57BL/6 mice received 3.5% dextran sodium sulfate (DSS) in their drinking water for 5 days to build an IBD model. Mice were imaged with MRI on days 0 and 6 to observe colitis progression. After MRI imaging, the mice were sacrificed and their colons taken for tissue clearing; light-sheet and two-photon microscopy were then used for in vitro imaging of the cleared samples. The experimental group showed symptoms of bloody stools, sluggishness, and weight loss, and its colon wall was thicker while its enterocoel was narrower compared to the control group. More details were observed using light-sheet and two-photon microscopy. It is demonstrated that combining MRI at the macro-scale with light-sheet and two-photon microscopy at the micro-scale is feasible for diagnosing and monitoring colon inflammation.
Generation of light-sheet at the end of multimode fibre (Conference Presentation)
NASA Astrophysics Data System (ADS)
Plöschner, Martin; Kollárová, Véra; Dostál, Zbyněk; Nylk, Jonathan; Barton-Owen, Thomas; Ferrier, David E. K.; Chmelik, Radim; Dholakia, Kishan; Čižmár, Tomáš
2017-02-01
Light-sheet fluorescence microscopy is quickly becoming one of the cornerstone imaging techniques in biology, as it provides rapid, three-dimensional sectioning of specimens at minimal levels of phototoxicity. It is very appealing to bring this unique combination of imaging properties into an endoscopic setting and be able to perform optical sectioning deep in tissues. Current endoscopic approaches for delivery of light-sheet illumination are based on a single-mode optical fibre terminated by a cylindrical gradient-index lens. Such a configuration generates a light-sheet plane that is axially fixed, and mechanical movement of either the sample or the endoscope is required to acquire three-dimensional information about the sample. Furthermore, the axial resolution of this technique is limited to about 5 µm. Delivery of the light-sheet through a multimode fibre provides better axial resolution, limited only by its numerical aperture; the light-sheet is scanned holographically without any mechanical movement; and multiple advanced light-sheet imaging modalities, such as Bessel and structured-illumination Bessel beams, are intrinsically supported by the system due to the cylindrical symmetry of the fibre. We discuss the holographic techniques for generation of multiple light-sheet types and demonstrate imaging on a sample of fluorescent beads fixed in agarose gel, as well as on a biological sample of Spirobranchus lamarcki.
NASA Astrophysics Data System (ADS)
Saito, Kenta; Kobayashi, Kentaro; Nagai, Takeharu
2011-12-01
Efficient bioluminescence resonance energy transfer (BRET) from a bioluminescent protein to a fluorescent protein with high fluorescence quantum yield has been utilized to enhance luminescence intensity, allowing single-cell imaging in near real time without external light illumination. We have applied this strategy to develop an autoluminescent Ca2+ indicator, BRAC, which is composed of the Ca2+-binding protein calmodulin and its target peptide, M13, sandwiched between a yellow fluorescent protein variant, Venus, and an enhanced Renilla luciferase, RLuc8. With BRAC, we succeeded in visualizing Ca2+ dynamics at the single-cell level with a temporal resolution of 1 Hz. Moreover, BRAC signals were acquired by ratiometric imaging, which cancels out Ca2+-independent signal drifts due to changes in cell shape, focus shift, etc. Taking advantage of the fact that bioluminescence imaging does not require external excitation light, BRAC might become a powerful tool for use in conjunction with so-called optogenetic technology, in which cellular and protein function is controlled by light illumination.
Intrinsic melanin and hemoglobin colour components for skin lesion malignancy detection.
Madooei, Ali; Drew, Mark S; Sadeghi, Maryam; Atkins, M Stella
2012-01-01
In this paper we propose a new log-chromaticity 2-D colour space, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: (i) the effects of the particular camera characteristics for the camera system used in forming RGB images; (ii) the colour of the light used in the dermoscope; (iii) shading induced by imaging non-flat skin surfaces; (iv) and light intensity, removing the effect of light-intensity falloff toward the edges of the dermoscopic image. In the context of a blind source separation of the underlying colour, we arrive at intrinsic melanin and hemoglobin images, whose properties are then used in supervised learning to achieve excellent malignant vs. benign skin lesion classification. In addition, we propose using the geometric-mean of colour for skin lesion segmentation based on simple grey-level thresholding, with results outperforming the state of the art.
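The segmentation idea mentioned above, grey-level thresholding on the geometric mean of the colour channels, can be sketched as follows. This is a minimal illustration under stated assumptions; the +1 offset to avoid log(0) and the fixed threshold are demonstration choices, not the paper's calibrated pipeline.

```python
import numpy as np

def geometric_mean_grey(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel geometric mean of the R, G, B channels of an
    H x W x 3 image, a simple shading-robust grey image."""
    rgb = np.asarray(rgb, dtype=float) + 1.0   # +1 avoids log(0)
    return np.exp(np.log(rgb).mean(axis=-1)) - 1.0

def lesion_mask(grey: np.ndarray, threshold: float) -> np.ndarray:
    """Candidate lesion mask: pixels darker than the threshold."""
    return grey < threshold
```

For a pixel whose channels are equal the geometric mean reduces to that common value, so uniform skin regions map to their intensity while multiplicative shading affects all channels together and is partly factored out.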
Novel ray tracing method for stray light suppression from ocean remote sensing measurements.
Oh, Eunsong; Hong, Jinsuk; Kim, Sug-Whan; Park, Young-Je; Cho, Seong-Ick
2016-05-16
We developed a new integrated ray tracing (IRT) technique to analyze the stray light effect in remotely sensed images. Images acquired with the Geostationary Ocean Color Imager show a radiance level discrepancy at the slot boundary, which is suspected to be a stray light effect. To determine its cause, we developed and adjusted a novel in-orbit stray light analysis method, which consists of three simulated phases (source, target, and instrument). Each phase simulation was performed in a way that used ray information generated from the Sun and reaching the instrument detector plane efficiently. This simulation scheme enabled the construction of the real environment from the remote sensing data, with a focus on realistic phenomena. In the results, even in a cloud-free environment, a background stray light pattern was identified at the bottom of each slot. Variations in the stray light effect and its pattern according to bright target movement were simulated, with a maximum stray light ratio of 8.5841% in band 2 images. To verify the proposed method and simulation results, we compared the results with the real acquired remotely sensed image. In addition, after correcting for abnormal phenomena in specific cases, we confirmed that the stray light ratio decreased from 2.38% to 1.02% in a band 6 case, and from 1.09% to 0.35% in a band 8 case. IRT-based stray light analysis enabled clear determination of the stray light path and candidates in in-orbit circumstances, and the correction process aided recovery of the radiometric discrepancy.
Visual Method for Detecting Contaminant on Dried Nutmeg Using Fluorescence Imaging
NASA Astrophysics Data System (ADS)
Dahlan, S. A.; Ahmad, U.; Subrata, I. D. M.
2018-05-01
Traditional practice of nutmeg sun-drying allows fungi such as Aspergillus flavus to grow. One of the secondary metabolites of A. flavus, named aflatoxin (AFs), is known to be carcinogenic, so dried nutmeg kernels must be aflatoxin-free in trade. Aflatoxin detection is time-consuming and costly, making it difficult to conduct at the farmer level. This study aims to develop a simple and low-cost method to detect aflatoxin at the farmer level. Fresh nutmeg seeds were dried in two ways: sun-dried every day (continuous) and sun-dried every two days (intermittent), both for around 18 days. The dried nutmeg seeds were then stored in a rice sack under normal conditions until fungi grew; the sacks were then opened and images of the kernels captured using a CCD camera, under normal light and UV light sources. Visual observation of the images captured under the normal light source was able to detect the presence of fungi on dried kernels, at 28.0% for continuous and 26.2% for intermittent sun-drying. Visual observation of the images captured under the UV light source was able to detect the presence of aflatoxin on dried kernels, indicated by blue luminance on the kernel, at 10.4% and 13.4% for continuous and intermittent sun-drying, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozioziemski, B.
A foam shell, 1.2 mm outer diameter with a 35 μm thick foam layer, is used to quickly form a solid deuterium layer for ICF. Figures show the visible light microscope image and a corresponding schematic representation. In each case, images show the empty foam shell, with the dark and light patches due to foam imperfections; the foam shell with liquid deuterium filling the foam (in this case, the liquid level exceeds the foam level because the deuterium will shrink when it freezes); and an image of the shell taken 10 minutes after the center image, after the temperature was reduced by 2 K to freeze the deuterium. This image shows that the majority of the solid deuterium has no observable defects, with the exception of an isolated crystal that formed on the foam surface. The next step is to find the correct liquid level and cooling rate to prevent the extra crystal on the surface. In contrast, typical ICF DT fuel layers require ~13 hours to solidify in order to be defect-free, with a success rate of approximately 20%.
Laser-activated remote phosphor light engine for projection applications
NASA Astrophysics Data System (ADS)
Daniels, Martin; Mehl, Oliver; Hartwig, Ulrich
2015-09-01
Recent developments in blue-emitting laser diodes enable attractive solutions in projection applications using phosphors for efficient light conversion at very high luminance levels. Various commercially available projectors incorporating this technology have entered the market in recent years. While luminous flux levels are still comparable to lamp-based systems, the lifetime expectations of classical lamp systems are exceeded by far. OSRAM GmbH has been exploring this technology for several years and has introduced the PHASER® brand name (phosphor + laser). The state of the art is a rotating phosphor wheel excited by blue laser diodes to deliver the necessary primary colors, either sequentially for single-imager projection engines or simultaneously for 3-panel systems. The PHASER® technology enables flux and luminance scaling, which allows for smaller imagers and therefore cost-efficient projection solutions. The resulting overall efficiency and ANSI lumen specification at the projection screen of these systems is significantly determined by the target color gamut and the light transmission efficiency of the projection system. With increasing power and flux level demand, thermal issues, especially those related to phosphor conversion, dominate the opto-mechanical system design requirements. These flux levels are a great challenge for all components of an SSL projection system (SSL: solid-state lighting). OSRAM's PHASER® light engine platform is constantly being expanded towards higher luminous flux levels as well as higher luminance levels for various applications. Recent experiments employ blue laser pump powers of multiple 100 watts to excite various phosphors, resulting in luminous flux levels of more than 40 klm.
NASA Technical Reports Server (NTRS)
2002-01-01
With the backing of NASA, researchers at Michigan State University, the University of Minnesota, and the University of Wisconsin have begun using satellite data to measure the water quality and clarity of lakes in the Upper Midwest. This false-color IKONOS image displays the water clarity of the lakes in Eagan, Minnesota. Scientists measure lake quality in satellite data by observing the ratio of blue to red light. When the amount of blue light reflecting off a lake is high and the red light is low, the lake generally has high water quality. Lakes loaded with algae and sediments, on the other hand, reflect less blue light and more red light. In this image, scientists used false coloring to depict the level of water clarity. Clear lakes are blue, moderately clear lakes are green and yellow, and murky lakes are orange and red. Using images such as these along with data from the Landsat satellites and NASA's Terra satellite, the scientists plan to create a comprehensive water quality map for the entire Great Lakes region in the next few years. For more information, read: Testing the Waters (Image courtesy Upper Great Lakes Regional Earth Science Applications Center, based on data copyright Space Imaging)
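The blue-to-red ratio test described above can be sketched as a per-pixel computation. The ratio thresholds below are illustrative assumptions for the false-color classes, not values from the study:

```python
import numpy as np

def clarity_index(blue, red, eps=1e-6):
    """Per-pixel blue/red reflectance ratio; higher values suggest clearer water."""
    return blue / (red + eps)

def classify_clarity(ratio, clear_thresh=2.0, moderate_thresh=1.0):
    """Map a ratio to the false-color classes described above.
    Both thresholds are hypothetical, chosen only for illustration."""
    if ratio >= clear_thresh:
        return "clear"        # rendered blue
    elif ratio >= moderate_thresh:
        return "moderate"     # rendered green/yellow
    return "murky"            # rendered orange/red

# Two example pixels: one clear lake, one algae-laden lake.
blue = np.array([0.12, 0.05])
red = np.array([0.03, 0.08])
ratios = clarity_index(blue, red)
```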
First scattered-light image of the debris disk around HD 131835 with the Gemini Planet Imager
Hung, Li-Wei; Duchêne, Gaspard; Arriaga, Pauline; ...
2015-12-09
Here, we present the first scattered-light image of the debris disk around HD 131835 in the H band using the Gemini Planet Imager. HD 131835 is a ~15 Myr old A2IV star at a distance of ~120 pc in the Sco-Cen OB association. We detect the disk only in polarized light and place an upper limit on the peak total intensity. No point sources resembling exoplanets were identified. Compared to its mid-infrared thermal emission, in scattered light the disk shows similar orientation but different morphology. The scattered-light disk extends from ~75 to ~210 AU in the disk plane with roughly flat surface density. Our Monte Carlo radiative transfer model can describe the observations with a model disk composed of a mixture of silicates and amorphous carbon. In addition to the obvious brightness asymmetry due to stronger forward scattering, we discover a weak brightness asymmetry along the major axis, with the northeast side being 1.3 times brighter than the southwest side at a 3σ level.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
NASA Astrophysics Data System (ADS)
Zhang, Jie; Sabarinathan, Ranjani; Bubel, Tracy; Williams, David R.; Hunter, Jennifer J.
2016-03-01
Observations of RPE disruption and autofluorescence (AF) photobleaching at light levels below the ANSI photochemical maximum permissible exposure (MPE) (Morgan et al., 2008) indicate a need to modify future light safety standards to protect the retina from harm. To establish safe light exposures, we measured the visible-light action spectrum for RPE disruption in an in vivo monkey model with fluorescence adaptive optics retinal imaging. This high-resolution imaging modality can provide insight into the consequences of light exposure at the cellular level and allows longitudinal monitoring of retinal changes. The threshold retinal radiant exposures (RRE) for RPE disruption were determined for 4 wavelengths (460, 488, 544, and 594 nm). The anaesthetized macaque retina was exposed to a uniform 0.5° × 0.5° field of view (FOV). Imaging within a 2° × 2° FOV was performed before, immediately after, and at 2-week intervals for 10 weeks. At each wavelength, multiple RREs were tested with 4 repetitions each to determine the threshold for RPE disruption. For qualitative analysis, RPE disruption is defined as any detectable change from the pre-exposure condition in the cell mosaic in the exposed region relative to the corresponding mosaic in the immediately surrounding area. We have tested several metrics to evaluate the RPE images obtained before and after exposure. The measured action spectrum for photochemical RPE disruption has a shallower slope than the current ANSI photochemical MPE for the same conditions and suggests that longer-wavelength light is more hazardous than other measurements would suggest.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between pedestrian and non-pedestrian features within the images. Researchers have tried to solve this issue by providing both the visible light and FIR camera images to the CNN as input. This, however, takes longer to process, and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects the more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
NASA Astrophysics Data System (ADS)
Lu, Chieh Han; Chen, Peilin; Chen, Bi-Chang
2017-02-01
Optical imaging techniques provide much important information for understanding life science, especially cellular structure and morphology, because "seeing is believing". However, the resolution of optical imaging is limited by the diffraction limit discovered by Ernst Abbe, i.e., λ/(2NA), where NA is the numerical aperture of the objective lens. Fluorescence super-resolution microscopy techniques such as stimulated emission depletion microscopy (STED), photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM) were invented to resolve biological entities, down to the molecular level, that are smaller than the diffraction limit (around 200 nm in lateral resolution). These techniques do not physically violate the Abbe limit of resolution but exploit the photoluminescence properties and labelling specificity of fluorescent molecules to achieve super-resolution imaging. However, these super-resolution techniques have mostly been limited to 2D imaging of fixed or dead samples due to the high laser power required or the slow localization process. Beyond 2D imaging, light-sheet microscopy has proven to have many applications in 3D imaging at much better spatiotemporal resolution due to its intrinsic optical sectioning and high imaging speed. Herein, we combine the advantages of localization microscopy and light-sheet microscopy to achieve super-resolved cellular imaging in 3D across a large field of view. High-density labeling with a spontaneously blinking fluorophore and the wide-field detection of light-sheet microscopy allow us to construct 3D super-resolution multi-cellular images at high speed (minutes) by light-sheet single-molecule localization microscopy.
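The localization step that PALM/STORM-style imaging relies on can be illustrated with its simplest estimator, an intensity-weighted centroid of an isolated emitter's spot. Real pipelines typically fit a 2D Gaussian instead; the grid and synthetic spot below are purely illustrative:

```python
import numpy as np

def localize_centroid(spot):
    """Localize a single emitter to sub-pixel precision via the
    intensity-weighted centroid of its diffraction-limited spot."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# Synthetic Gaussian spot centered at (3.5, 2.5) on an 8x8 pixel grid.
ys, xs = np.indices((8, 8))
spot = np.exp(-(((ys - 3.5) ** 2) + ((xs - 2.5) ** 2)) / 2.0)
cy, cx = localize_centroid(spot)
```

The centroid recovers the emitter position well below the pixel pitch, which is how localization microscopy beats the ~200 nm diffraction limit: the *precision* of locating an isolated emitter is not bound by the width of its spot.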
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make video streams obtained by different image sensors complementary, so as to obtain video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible-light imaging can provide detailed, high-resolution images well suited to the visual system, but is easily affected by the external environment. The fusion algorithms involved in combining infrared and visible video are complex and computationally heavy, occupy considerable memory, and demand high clock rates; most implementations are therefore in software (e.g., C and C++), with few on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and gray-level weighted-average fusion is implemented on the FPGA hardware platform. The fused image effectively increases the amount of information acquired from the scene.
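The gray-level weighted-average fusion step mentioned above can be sketched in a few lines. This assumes already-registered frames; the weight value is an assumption, and the MATLAB registration step is not reproduced:

```python
import numpy as np

def fuse_gray_weighted(ir, vis, w_ir=0.5):
    """Gray-level weighted-average fusion of registered IR and visible frames.

    ir, vis: 2-D uint8 arrays of identical shape (already registered).
    w_ir: weight given to the infrared channel (illustrative value);
          the remainder goes to the visible channel.
    """
    ir_f = ir.astype(np.float32)
    vis_f = vis.astype(np.float32)
    fused = w_ir * ir_f + (1.0 - w_ir) * vis_f
    return np.clip(fused, 0, 255).astype(np.uint8)
```

On an FPGA the same per-pixel multiply-accumulate maps naturally onto a fixed-point pipeline, which is why this fusion rule is a common choice for hardware implementation.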
The impact of faceplate surface characteristics on detection of pulmonary nodules
NASA Astrophysics Data System (ADS)
Toomey, R. J.; Ryan, J. T.; McEntee, M. F.; McNulty, J.; Evanoff, M. G.; Cuffe, F.; Yoneda, T.; Stowe, J.; Brennan, P. C.
2009-02-01
Introduction: In order to prevent specular reflections, many monitor faceplates have features such as tiny dimples on their surface to diffuse ambient light incident on the monitor; however, this "anti-glare" surface may also diffuse the image itself. The purpose of the study was to determine whether the surface characteristics of monitor faceplates influence the detection of pulmonary nodules under low and high ambient lighting conditions. Methods and Materials: Separate observer performance studies were conducted at each of two light levels (<1 lux and >250 lux). Twelve examining radiologists certified by the American Board of Radiology participated in the darker condition and eleven in the brighter condition. All observers read on both smooth "glare" and dimpled "anti-glare" faceplates in a single lighting condition. A counterbalanced methodology was utilized to minimise memory effects. In each reading, observers were presented with thirty chest images in random order, of which half contained a single simulated pulmonary nodule. They were asked to give their confidence that each image did or did not contain a nodule and to mark the suspicious location. ROC analysis was applied to the resultant data. Results: No statistically significant differences were seen in the trapezoidal area under the ROC curve (AUC), sensitivity, specificity or average time per case at either light level for chest specialists or radiologists from other specialities. Conclusion: The characteristics of the faceplate surfaces do not appear to affect detection of pulmonary nodules. Further work on other image types is being conducted.
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four-wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum-correlated modes from each of the twin beams to a high-quantum-efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single-pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.
Volumetric Light-field Encryption at the Microscopic Scale
Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu
2017-01-01
We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale. PMID:28059149
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using a few simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
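The core measure-then-reconstruct loop of a single-pixel camera can be sketched with a full set of orthogonal Hadamard patterns (a stand-in for the DMD patterns; the compressive-sampling variant would keep only a subset of measurements and solve a sparse recovery problem instead). All sizes and values here are illustrative:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_capture(scene, patterns):
    """Each measurement is the total light reaching the photodiode when the
    scene is modulated by one pattern (one row of `patterns`)."""
    return patterns @ scene.ravel()

def reconstruct(measurements, patterns, shape):
    """With orthogonal Hadamard patterns, inversion is a scaled transpose."""
    n = patterns.shape[0]
    return (patterns.T @ measurements / n).reshape(shape)

# Toy 4x4 "scene" recovered from 16 single-pixel measurements.
scene = np.arange(16, dtype=float).reshape(4, 4)
H = hadamard(16)
y = single_pixel_capture(scene, H)
recovered = reconstruct(y, H, (4, 4))
```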
Green light for quantitative live-cell imaging in plants.
Grossmann, Guido; Krebs, Melanie; Maizel, Alexis; Stahl, Yvonne; Vermeer, Joop E M; Ott, Thomas
2018-01-29
Plants exhibit an intriguing morphological and physiological plasticity that enables them to thrive in a wide range of environments. To understand the cell biological basis of this unparalleled competence, a number of methodologies have been adapted or developed over the last decades that allow minimally invasive or non-invasive live-cell imaging in the context of tissues. Combined with the ease of generating transgenic reporter lines in specific genetic backgrounds or accessions, we are witnessing a blossoming of plant cell biology. However, the imaging of plant cells entails a number of specific challenges, such as high levels of autofluorescence, light scattering caused by cell walls, and the sensitivity of plants to environmental conditions. Quantitative live-cell imaging in plants therefore requires adapting or developing imaging techniques, as well as mounting and incubation systems, such as micro-fluidics. Here, we discuss some of these obstacles, and review a number of selected state-of-the-art techniques, such as two-photon imaging, light sheet microscopy and variable angle epifluorescence microscopy, that allow high-performance and minimally invasive live-cell imaging in plants. © 2018. Published by The Company of Biologists Ltd.
Daylight characterization through vision-based sensing of lighting conditions in buildings
NASA Astrophysics Data System (ADS)
di Dio, Joseph, III
A new method for describing daylight under unknown weather conditions, as captured in images of a room, is proposed. This method considers pixel brightness information to be a linear combination of diffuse and directional light components, as received by a web cam from the walls and ceiling of an occupied office. The nature of these components in each image is determined by building orientation, room geometry, neighboring structures and the position of the sun. Considering daylight in this manner also allows for an estimation of the sky conditions at a given instant to be made, and presents a means to uncover seasonal trends in the behavior of light simply by monitoring the brightness variations of points on the walls and ceiling. Significantly, this daylight characterization method also allows for an estimation of the illumination level on a target surface to be made from image data. Currently, illumination at a target surface is estimated through the use of a ceiling-mounted photosensor, as part of a lighting control system, in the hopes of achieving a suitable balance between daylight and electrical lighting in a space. Improving the ability of a sensor to estimate the illumination is of great importance to those who wish to minimize unnecessary energy consumption, as a significant percentage of all U.S. electricity is currently consumed by light fixtures. A photosensor detects light that falls on its location, which does not necessarily correspond in a fixed manner to the light level on the target areas that the photosensor is meant to monitor. Additionally, a photosensor cannot discern variations in light distribution across a room, which often occur with daylight. By considering pixel brightness information to be a linear combination of diffuse and directional light components at selected pixels in an image, information about the light reaching these pixels can be extracted from observed patterns of brightness, under different light conditions. 
In this manner, each pixel provides information about the light field at its corresponding point in the room, and thus each pixel can be considered to behave as a remote photosensor. By using multiple pixel readings in lieu of a single photosensor reading of a given light condition, an improved assessment of the illumination level on a target surface can be achieved. It is shown that, on average, the camera-based method was approximately 25% more accurate in estimating illuminance in the test room than a simulated ceiling-mounted photosensor. It is hoped that the methodology detailed here will aid in the eventual development of a camera-based daylight characterization sensor for use in lighting control systems, so that the potential for enhanced energy savings can be realized.
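Treating each monitored pixel as a remote photosensor amounts to fitting a linear map from pixel brightnesses to target illuminance. A minimal sketch with synthetic calibration data follows; the pixel count, weights, and lux values are all assumptions for illustration, not figures from the dissertation:

```python
import numpy as np

# Synthetic calibration: pixel brightnesses recorded under several lighting
# conditions (rows), with the measured illuminance (lux) on the target surface.
rng = np.random.default_rng(1)
true_w = np.array([0.5, 1.5, 2.0])           # hidden per-pixel weights
brightness = rng.random((20, 3)) * 100.0     # 20 conditions, 3 monitored pixels
target_lux = brightness @ true_w             # illuminance as a linear mix

# Fit per-pixel weights by least squares, as a camera-based calibration would.
weights, *_ = np.linalg.lstsq(brightness, target_lux, rcond=None)

def estimate_lux(pixel_brightness):
    """Estimate target-surface illuminance from current pixel readings."""
    return pixel_brightness @ weights
```

Because the fit pools many wall and ceiling points, it can track light-distribution changes across the room that a single ceiling-mounted photosensor cannot discern.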
NASA Astrophysics Data System (ADS)
Ma, Chen; Cheng, Dewen; Xu, Chen; Wang, Yongtian
2014-11-01
The fundus camera is a complex optical system for retinal photography, involving illumination and imaging of the retina. Stray light is one of the most significant problems of a fundus camera because the retina is so minimally reflective that back reflections from the cornea and any other optical surface are likely to be significantly greater than the light reflected from the retina. To provide maximum illumination to the retina while eliminating back reflections, a novel design for the illumination system of a portable fundus camera is proposed. Internal illumination, in which the eyepiece is shared by both the illumination system and the imaging system but the condenser and the objective are separated by a beam splitter, is adopted for its high efficiency. To eliminate the strong stray light caused by the corneal center and make full use of the light energy, the annular stop in conventional illumination systems is replaced by a fiber-coupled, ring-shaped light source that forms an annular beam. Parameters including the size and divergence angle of the light source are specially designed. To weaken the stray light, a polarized light source is used, and an analyzer plate is placed after the beam splitter in the imaging system. Simulation results show that the illumination uniformity at the fundus exceeds 90%, and the stray light is within 1%. Finally, a proof-of-concept prototype is developed and retinal photos of an ophthalmophantom are captured. The experimental results show that ghost images and stray light have been greatly reduced, to a level at which professional diagnosis will not be interfered with.
Development of proton CT imaging system using plastic scintillator and CCD camera
NASA Astrophysics Data System (ADS)
Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru
2016-06-01
A proton computed tomography (pCT) imaging system was constructed for evaluation of the error of an x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is obtained by photography using the CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table made beforehand, and a pCT image is reconstructed. An experiment for demonstration of the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with spatial resolution of pCT images at the same level as that of xCT images. The pCT images of various substances were reconstructed to evaluate the pixel value of pCT images. The image quality was investigated with regard to deterioration including multiple Coulomb scattering.
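The light-to-range conversion table described above is essentially a 1-D lookup with interpolation between calibration points. A sketch follows; the calibration values are invented for illustration, not taken from the experiment:

```python
import numpy as np

# Hypothetical calibration table: integrated scintillation light intensity
# (normalized) versus proton range in water-equivalent millimeters,
# measured beforehand with beams of known range.
light_cal = np.array([0.10, 0.35, 0.60, 0.82, 1.00])   # normalized intensity
range_cal = np.array([5.0, 15.0, 25.0, 32.0, 40.0])    # mm in water

def light_to_range(intensity):
    """Convert a measured light intensity to water-equivalent range by
    linear interpolation in the calibration table."""
    return np.interp(intensity, light_cal, range_cal)
```

Applying this conversion pixel-by-pixel to the CCD images yields the range data from which the pCT slices are reconstructed.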
Fast algorithm of low power image reformation for OLED display
NASA Astrophysics Data System (ADS)
Lee, Myungwoo; Kim, Taewhan
2014-04-01
We propose a fast algorithm of low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram to reduce power consumption in OLED displays by remapping the gray levels of the pixels, based on a fast analysis of the histogram of the input image, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in typical images, and these unused levels can be effectively exploited to reduce power consumption. At the same time, to maintain image contrast, the gray-level remapping takes into account the size of the objects in the image to which each gray level is applied, i.e., remapping the gray levels of large objects only slightly. Through experiments with 24 Kodak images, it is shown that our proposed algorithm is able to reduce power consumption by 10% even with 9% contrast enhancement. Our algorithm runs in linear time, so it can be applied to moving pictures at high resolution.
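A simplified version of the unused-gray-level idea can be sketched as a lookup-table remap: collapse the gray levels that actually occur onto a dimmer range while preserving their ordering. This ignores the paper's object-size weighting, and the dimming factor is an assumption:

```python
import numpy as np

def remap_gray_levels(img, dim_factor=0.9):
    """Sketch: collapse unused gray levels and scale the occupied levels
    onto a reduced range, lowering OLED drive power (which grows with
    pixel luminance) while keeping the ordering of gray levels intact.
    dim_factor is a hypothetical tuning knob, not the paper's parameter."""
    levels = np.unique(img)                  # occupied gray levels only
    new_max = int(dim_factor * 255)
    lut = np.zeros(256, dtype=np.uint8)
    if len(levels) > 1:
        # Rank-based remapping: spread occupied levels over [0, new_max].
        lut[levels] = np.round(np.linspace(0, new_max, len(levels))).astype(np.uint8)
    else:
        lut[levels] = min(int(levels[0]), new_max)
    return lut[img]
```

A single 256-entry LUT makes the remap O(pixels), consistent with the linear-time claim above.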
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
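The neighboring-pixel summation described above can be modeled in software as simple k×k binning. The sensor performs this on-chip with column integrators; the following is only a behavioral sketch of the effect on signal level:

```python
import numpy as np

def bin_pixels(img, k):
    """Sum k-by-k pixel patches: spatial resolution drops by k in each axis,
    the summed signal grows by k^2, and shot-noise-limited SNR improves
    by roughly k, trading resolution for brightness at low light levels."""
    h, w = img.shape
    h2, w2 = h - h % k, w - w % k            # crop to a multiple of k
    return img[:h2, :w2].reshape(h2 // k, k, w2 // k, k).sum(axis=(1, 3))
```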
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point-matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, since the search zone depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
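The depth-limited correspondence search can be sketched as illumination-invariant (zero-mean normalized cross-correlation) template matching whose search zone is centered on the disparity predicted by the rough depth belief. The camera parameters and test data below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def match_with_depth_prior(patch, right_band, x_left, rough_depth,
                           f=500.0, baseline=0.1, margin=10):
    """Template-match `patch` along a band of the right image, limiting the
    search zone around the disparity d0 = f * baseline / depth predicted by
    a rough depth belief. Zero-mean NCC gives invariance to illumination
    changes. f (pixels), baseline (m), and margin are illustrative."""
    d0 = int(round(f * baseline / rough_depth))    # expected disparity
    h, w = patch.shape
    a = patch - patch.mean()
    best_x, best_score = None, -np.inf
    for d in range(max(d0 - margin, 0), d0 + margin + 1):
        x = x_left - d
        if x < 0 or x + w > right_band.shape[1]:
            continue
        cand = right_band[:, x:x + w]
        b = cand - cand.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom > 0 else -1.0
        if score > best_score:
            best_score, best_x = score, x
    return best_x, best_score
```

Shrinking the search from the whole epipolar line to a narrow window around d0 is where the computation-time gain over descriptor-based matching comes from.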
NASA Astrophysics Data System (ADS)
Boissonnet, Philippe
2013-02-01
The French philosopher M. Merleau-Ponty captured the dynamic of perception with his idea of the intertwining of perceiver and perceived. Light is what links them. In the case of holographic images, not only are spatial and colour perception the pure product of light, but this light information is always in the process of self-construction with our eyes, according to our movements and the point of view adopted. In terms of the aesthetic reception of a work of art, holographic images differ greatly from those of cinema, photography and even every kind of digital 3D animation. This particular image status truly makes perceptually apparent the "co-emergence" of light and our gaze. But holography never misleads us with respect to the precarious nature of our perceptions. We have no illusion as to the limits of our empirical understanding of the perceived reality. Holography, like our knowledge of the visible, thus brings to light the phenomenon of reality's "co-constitution" and contributes to a dynamic ontology of perceptual and cognitive processes. The cognitivist Francisco Varela defines this as the paradigm of enaction, which I will adapt and apply to the appearance/disappearance context of holographic images to bring out their affinities on a metaphorical level.
Ferradal, Silvina L; Eggebrecht, Adam T; Hassanpour, Mahlega; Snyder, Abraham Z; Culver, Joseph P
2014-01-15
Diffuse optical imaging (DOI) is increasingly becoming a valuable neuroimaging tool when fMRI is precluded. Recent developments in high-density diffuse optical tomography (HD-DOT) overcome previous limitations of sparse DOI systems, providing improved image quality and brain specificity. These improvements in instrumentation prompt the need for advancements in both i) realistic forward light modeling for accurate HD-DOT image reconstruction, and ii) spatial normalization for voxel-wise comparisons across subjects. Individualized forward light models derived from subject-specific anatomical images provide the optimal inverse solutions, but such modeling may not be feasible in all situations. In the absence of subject-specific anatomical images, atlas-based head models registered to the subject's head using cranial fiducials provide an alternative solution. In addition, a standard atlas is attractive because it defines a common coordinate space in which to compare results across subjects. The question therefore arises as to whether atlas-based forward light modeling ensures adequate HD-DOT image quality at the individual and group level. Herein, we demonstrate the feasibility of using atlas-based forward light modeling and spatial normalization methods. Both techniques are validated using subject-matched HD-DOT and fMRI data sets for visual evoked responses measured in five healthy adult subjects. HD-DOT reconstructions obtained with the registered atlas anatomy (i.e. atlas DOT) had an average localization error of 2.7 mm relative to reconstructions obtained with the subject-specific anatomical images (i.e. subject-MRI DOT), and 6.6 mm relative to fMRI data. At the group level, the localization error of atlas DOT reconstruction was 4.2 mm relative to subject-MRI DOT reconstruction, and 6.1 mm relative to fMRI.
These results show that atlas-based image reconstruction provides a viable approach to individual head modeling for HD-DOT when anatomical imaging is not available. Copyright © 2013. Published by Elsevier Inc.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
NASA Astrophysics Data System (ADS)
Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang
2017-02-01
Recently, hyperspectral image projectors (HIP) have been developed in the field of remote sensing. With its advanced performance in system-level validation, target detection and hyperspectral image calibration, HIP has great potential in military, medical, commercial and other applications. HIP is based on the digital micro-mirror device (DMD) and projection technology; it can project arbitrary programmable spectra (controlled by a PC) into each pixel of the IUT (instrument under test), so that the projected image simulates the realistic scenes a hyperspectral imager would measure during use, enabling system-level performance testing and validation. In this paper, we build a visible hyperspectral image projector, also called a visible target simulator, with double DMDs: the first DMD is used to produce selected monochromatic light in the wavelength range of 410 to 720 nm, which is then directed to the second DMD. A computer loads an image of a realistic scene onto the second DMD, so that the target and background are projected by the second DMD with the selected monochromatic light. The target conditions can thus be simulated, and the experiment controlled and repeated in the laboratory, allowing detector instruments to be tested indoors. At present, we focus on the spectral engine design, including the optical system, the DMD programmable spectrum, and the spectral resolution of the selected spectrum. Details are presented.
High-sensitivity, high-speed continuous imaging system
Watson, Scott A; Bender, III, Howard A
2014-11-18
A continuous imaging system for recording low levels of light, typically extending over small distances, with high frame rates and a large number of frames is described. Photodiode pixels disposed in an array having a chosen geometry, each pixel having a dedicated amplifier, analog-to-digital convertor, and memory, provide parallel operation of the system. When combined with a plurality of scintillators responsive to a selected source of radiation, in a scintillator array, the light from each scintillator being directed to a single corresponding photodiode in close proximity or lens-coupled thereto, embodiments of the present imaging system may provide images of x-ray, gamma ray, proton, and neutron sources with high efficiency.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (time-of-flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high resolution color image and a high quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the background charge to keep the pixel from saturating. The subtraction results are accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
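The adaptive charge subtraction idea can be illustrated with a toy numeric model (arbitrary units chosen for illustration; the real circuit operates on in-pixel charge): splitting the integration into N sub-integrations keeps each sub-frame below the full well, while the accumulated differences recover the signal.

```python
import numpy as np

FULL_WELL = 1000.0        # pixel saturation charge (arbitrary units)
signal_rate = 300.0       # modulated (useful) charge per unit time
background_rate = 2000.0  # ambient-light charge per unit time
T = 1.0                   # total integration time

def integrate(n_subs):
    """Integrate over n_subs sub-integrations, subtracting the estimated
    background charge after each one, and accumulate the remainders."""
    dt = T / n_subs
    accumulated = 0.0
    for _ in range(n_subs):
        charge = (signal_rate + background_rate) * dt
        if charge > FULL_WELL:             # pixel saturates: signal is lost
            charge = FULL_WELL
        background = background_rate * dt  # background estimate subtracted
        accumulated += charge - background
    return accumulated

# A single long integration saturates (2300 > full well), destroying the
# signal; splitting into N = 4 sub-integrations keeps each sub-frame below
# saturation, and the accumulated difference recovers the full signal (300).
print(integrate(1))
print(integrate(4))
```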
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenslet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
NASA Astrophysics Data System (ADS)
Mešić, Vanes; Hajder, Erna; Neumann, Knut; Erceg, Nataša
2016-06-01
Research has shown that students have tremendous difficulties developing a qualitative understanding of wave optics, at all educational levels. In this study, we investigate how three different approaches to visualizing light waves affect students' understanding of wave optics. In the first, conventional approach, light waves are represented by sinusoidal curves. The second teaching approach represents light waves by a series of static images showing the oscillating electric field vectors at characteristic, subsequent instants of time. In the third approach, phasors are used to visualize light waves. A total of N = 85 secondary school students were randomly assigned to one of the three teaching approaches, each lasting four class hours. Students who learned with phasors and students who learned from the series of static images outperformed the students learning according to the conventional approach, i.e., they showed a much better understanding of basic wave optics, as measured by a conceptual survey administered to the students one week after the treatment. Our results suggest that visualizing light waves with phasors or oscillating electric field vectors is a promising approach to developing a deeper understanding of wave optics for students enrolled in conceptual level physics courses.
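The phasor approach reduces superposing light waves to adding complex numbers. A small sketch (standard physics, not taken from the study's teaching materials) computing double-slit intensities by phasor addition:

```python
import numpy as np

wavelength = 500e-9   # 500 nm green light

def two_slit_intensity(path1, path2):
    """Add the two waves as phasors: each path contributes a unit phasor
    rotated by its phase 2*pi*L/lambda; intensity is |sum|^2."""
    p1 = np.exp(2j * np.pi * path1 / wavelength)
    p2 = np.exp(2j * np.pi * path2 / wavelength)
    return abs(p1 + p2) ** 2

# Equal path lengths: phasors aligned -> constructive interference
# (intensity 4x that of a single slit).  A half-wavelength path
# difference: phasors opposed -> destructive interference (intensity 0).
print(two_slit_intensity(1.0, 1.0))                   # ~4.0
print(two_slit_intensity(1.0, 1.0 + wavelength / 2))  # ~0.0
```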
A novel imaging method for photonic crystal fiber fusion splicer
NASA Astrophysics Data System (ADS)
Bi, Weihong; Fu, Guangwei; Guo, Xuan
2007-01-01
Because the structure of photonic crystal fiber (PCF) is very complex, it is difficult for a traditional fiber fusion splicer to obtain the optical axial information of a PCF, so a brand-new optical imaging method is needed to acquire cross-sectional information of the fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element; the EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device provides high contrast and high resolution in low-light-level surveillance imaging. To achieve precise focusing of the image, an ultra-high-precision step motor is used to adjust the position of the imaging lens. In this way, a clear cross-sectional image of the PCF is obtained, which can be analyzed further with digital image processing techniques. This cross-sectional information can be used to distinguish different sorts of PCF, to compute parameters such as the size of the PCF air holes and the cladding structure of the PCF, and to provide the analysis data needed by PCF fixation, adjustment, alignment, fusion and cutting systems.
NASA Astrophysics Data System (ADS)
Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem
2016-03-01
We present single-shot white light interference microscopy for quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a colorful white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue wavelength components, which are processed to find the refractive index at the different color wavelengths. The decomposed interferograms are analyzed using a local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that requires only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From a single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase-shifting interferometry and Hilbert transform techniques.
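For comparison, the Hilbert-transform style of single-shot phase recovery mentioned at the end can be sketched in one dimension: isolate the positive-frequency sideband of an off-axis interferogram with the FFT, then remove the carrier ramp (a synthetic example; the LMF algorithm itself is not reproduced here).

```python
import numpy as np

def fringe_phase_1d(signal, carrier_bin):
    """Recover the wrapped phase of an off-axis interferogram line:
    FFT, keep only the positive-frequency sideband (analytic signal),
    inverse FFT, then subtract the known carrier ramp."""
    n = len(signal)
    spec = np.fft.fft(signal - signal.mean())
    mask = np.zeros(n)
    mask[1:n // 2] = 1.0          # keep positive frequencies only
    analytic = np.fft.ifft(spec * mask)
    x = np.arange(n)
    carrier = 2 * np.pi * carrier_bin * x / n
    return np.angle(analytic * np.exp(-1j * carrier))

# Synthetic interferogram: a carrier at 16 cycles plus a slowly varying
# object phase; the recovered phase should match the object phase.
n = 512
x = np.arange(n)
phi = 0.8 * np.sin(2 * np.pi * x / n)           # object phase (radians)
fringe = 1.0 + np.cos(2 * np.pi * 16 * x / n + phi)
phi_rec = fringe_phase_1d(fringe, carrier_bin=16)
print(np.max(np.abs(phi_rec - phi)))            # small residual
```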
Physics-based subsurface visualization of human tissue.
Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert
2007-01-01
In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools are being constructed to noninvasively image tumors and blood vasculature and to monitor blood oxygenation levels. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically accurate visualizations of subsurface structures.
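The diffusion model the framework solves can be illustrated in one dimension with finite differences (the paper uses 3-D finite elements; the coefficients below are merely NIR-like placeholders, not values from the paper):

```python
import numpy as np

# 1-D steady-state diffusion approximation of light transport in tissue:
#   -D * phi''(z) + mu_a * phi(z) = S(z)
mu_a = 0.01       # absorption coefficient [1/mm] (illustrative)
mu_s_red = 1.0    # reduced scattering coefficient [1/mm] (illustrative)
D = 1.0 / (3.0 * (mu_a + mu_s_red))   # diffusion coefficient [mm]

n, L = 200, 20.0                      # 200 nodes over 20 mm of tissue
h = L / (n - 1)

# Assemble the tridiagonal system A * phi = s with phi = 0 at both ends
# and a source at the first interior node (light injected at the surface).
A = np.zeros((n, n))
s = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -D / h**2
    A[i, i] = 2 * D / h**2 + mu_a
s[1] = 1.0

phi = np.linalg.solve(A, s)

# The fluence falls off with depth past the source, which is why NIR
# can probe subsurface structure but loses sensitivity with depth.
print(phi[10], phi[100])
```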
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, a limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only the in-focus plane and the objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
Visualizing photosynthesis through processing of chlorophyll fluorescence images
NASA Astrophysics Data System (ADS)
Daley, Paul F.; Ball, J. Timothy; Berry, Joseph A.; Patzke, Juergen; Raschke, Klaus E.
1990-05-01
Measurements of terrestrial plant photosynthesis frequently exploit sensing of gas exchange from leaves enclosed in gas-tight, climate controlled chambers. These methods are typically slow, and do not resolve variation in photosynthesis below the whole leaf level. A photosynthesis visualization technique is presented that forms images of leaves using light from chlorophyll (Chl) fluorescence. Images of Chl fluorescence from whole leaves undergoing steady-state photosynthesis, photosynthesis induction, or response to stress agents were digitized during light flashes that saturated photochemical reactions. Use of saturating flashes permitted deconvolution of photochemical energy use from biochemical quenching mechanisms (qN) that dissipate excess excitation energy, otherwise damaging to the light harvesting apparatus. Combination of the digital image frames of variable fluorescence with reference frames obtained from the same leaves when dark-adapted permitted derivation of frames in which grey scale represented the magnitude of qN. Simultaneous measurements with gas-exchange apparatus provided data for non-linear calibration filters for subsequent rendering of grey-scale "images" of photosynthesis. In several experiments significant non-homogeneity of photosynthetic activity was observed following treatment with growth hormones, or shifts in light or humidity, and following infection by virus. The technique provides a rapid, non-invasive probe for stress physiology and plant disease detection.
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
Barro, Christian; Benkert, Pascal; Disanto, Giulio; Tsagkas, Charidimos; Amann, Michael; Naegelin, Yvonne; Leppert, David; Gobbi, Claudio; Granziera, Cristina; Yaldizli, Özgür; Michalak, Zuzanna; Wuerfel, Jens; Kappos, Ludwig; Parmar, Katrin; Kuhle, Jens
2018-05-30
Neuro-axonal injury is a key factor in the development of permanent disability in multiple sclerosis. Neurofilament light chain in peripheral blood has recently emerged as a biofluid marker reflecting neuro-axonal damage in this disease. We aimed at comparing serum neurofilament light chain levels in multiple sclerosis and healthy controls, to determine their association with measures of disease activity and their ability to predict future clinical worsening as well as brain and spinal cord volume loss. Neurofilament light chain was measured by single molecule array assay in 2183 serum samples collected as part of an ongoing cohort study from 259 patients with multiple sclerosis (189 relapsing and 70 progressive) and 259 healthy control subjects. Clinical assessment, serum sampling and MRI were done annually; median follow-up time was 6.5 years. Brain volumes were quantified by structural image evaluation using normalization of atrophy, and structural image evaluation using normalization of atrophy, cross-sectional, cervical spinal cord volumes using spinal cord image analyser (cordial). Results were analysed using ordinary linear regression models and generalized estimating equation modelling. Serum neurofilament light chain was higher in patients with a clinically isolated syndrome or relapsing remitting multiple sclerosis as well as in patients with secondary or primary progressive multiple sclerosis than in healthy controls (age adjusted P < 0.001 for both). Serum neurofilament light chain above the 90th percentile of healthy controls values was an independent predictor of Expanded Disability Status Scale worsening in the subsequent year (P < 0.001). The probability of Expanded Disability Status Scale worsening gradually increased by higher serum neurofilament light chain percentile category. 
Contrast enhancing and new/enlarging lesions were independently associated with increased serum neurofilament light chain (17.8% and 4.9% increase per lesion respectively; P < 0.001). The higher the serum neurofilament light chain percentile level, the more pronounced was future brain and cervical spinal volume loss: serum neurofilament light chain above the 97.5th percentile was associated with an additional average loss in brain volume of 1.5% (P < 0.001) and spinal cord volume of 2.5% over 5 years (P = 0.009). Serum neurofilament light chain correlated with concurrent and future clinical and MRI measures of disease activity and severity. High serum neurofilament light chain levels were associated with both brain and spinal cord volume loss. Neurofilament light chain levels are a real-time, easy to measure marker of neuro-axonal injury that is conceptually more comprehensive than brain MRI.
Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system
NASA Astrophysics Data System (ADS)
Vítek, S.
2017-07-01
The purpose of this experiment was to measure light pollution in Prague, the capital of the Czech Republic. The measuring instrument is a calibrated consumer-level digital single-lens reflex camera with an IR cut filter; the paper therefore reports results of measuring and monitoring light pollution in the wavelength range of 390-700 nm, the range that most affects visual-range astronomy. Combining frames of different exposure times taken with the digital camera coupled with a fish-eye lens allows creating high-dynamic-range images that contain meaningful values, so such a system can provide absolute values of the sky brightness.
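Combining bracketed exposures into absolute sky-brightness values rests on a linear-sensor assumption: divide each frame by its exposure time and average, ignoring clipped pixels. A minimal sketch (the hat weighting below is a common choice; the paper's exact recipe is not specified):

```python
import numpy as np

def merge_hdr(frames, exposure_times, saturation=255):
    """Estimate scene radiance from exposure-bracketed frames: divide each
    frame by its exposure time and average with a weight that discards
    saturated and near-black pixels (simple linear-sensor assumption)."""
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for img, t in zip(frames, exposure_times):
        img = img.astype(float)
        # Hat weight: trust mid-range pixels, ignore clipped ones.
        w = np.where((img > 2) & (img < saturation - 2),
                     1.0 - np.abs(2.0 * img / saturation - 1.0), 0.0)
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)

# Synthetic sky: radiances spanning three orders of magnitude; three
# exposures so every pixel is well exposed in at least one frame.
radiance = np.array([[5.0, 50.0, 5000.0]])
times = [4.0, 0.25, 0.004]
frames = [np.clip(radiance * t, 0, 255) for t in times]
recovered = merge_hdr(frames, times)
print(recovered)
```

With a noiseless linear model the recovered values match the true radiances exactly; a calibrated camera response curve replaces the linearity assumption in practice.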
NASA Astrophysics Data System (ADS)
Yu, Haiyan; Fan, Jiulun
2017-12-01
Local thresholding methods for uneven lighting image segmentation have the limitations that they are very sensitive to noise and that their performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting unevenly lit images with strong noise based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on a fuzzy membership function and uses it to replace the pixel's absolute characteristic (its gray level), to reduce the influence of uneven light on segmentation. At the same time, non-local adaptive spatial constraints on the pixels are introduced to avoid noise interfering with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is applied to the wave transformation image to obtain the segmentation result. Experiments on several test images show that the proposed method is excellent at decreasing the influence of uneven illumination and noise on images and behaves more robustly than several classical global and local thresholding methods.
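For context, the classical window-based local thresholding that the paper improves upon can be sketched as follows (a plain local-mean threshold computed with an integral image; the window size and offset are exactly the hand-tuned parameters whose sensitivity the authors criticize):

```python
import numpy as np

def local_mean_threshold(img, window, offset=5.0):
    """Classical local thresholding: binarize each pixel against the mean
    of its (window x window) neighborhood, computed via an integral image.
    Results depend strongly on `window` and `offset`, which is the
    sensitivity the paper's sub-region method is designed to avoid."""
    h, w = img.shape
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Integral image for O(1) box sums.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = ys[:, None], ys[:, None] + window
    x0, x1 = xs[None, :], xs[None, :] + window
    box = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    local_mean = box / (window * window)
    return img.astype(float) > local_mean + offset

# Unevenly lit synthetic image: a bright stripe on a strong left-to-right
# illumination ramp.  A single global threshold would fail here; the
# local threshold finds the stripe at both the dark and bright ends.
h, w = 64, 64
ramp = np.linspace(0, 200, w)[None, :] * np.ones((h, 1))
img = ramp.copy()
img[28:36, :] += 40                         # the object: a horizontal stripe
seg = local_mean_threshold(img, window=15)
print(seg[32, 5], seg[32, 60], seg[5, 60])  # stripe found, background rejected
```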
Griffiths, J A; Chen, D; Turchetta, R; Royle, G J
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10(3) to 1.6 at a gain of 3.93 × 10(1). The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10(3), dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor
NASA Astrophysics Data System (ADS)
Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
NASA Technical Reports Server (NTRS)
2002-01-01
Retinex Image Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system, under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.
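The core of the single-scale Retinex operation is the log of the image divided by a smoothed estimate of the illuminant; NASA's multiscale variant combines several such scales with color restoration. A minimal single-scale sketch (illustrative only, not the Langley implementation):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def single_scale_retinex(img, sigma=15.0):
    """R(x, y) = log I(x, y) - log [G_sigma * I](x, y): the log of the image
    divided by a smooth estimate of the illuminant, which compresses
    dynamic range and largely removes slowly varying illumination."""
    img = img.astype(float) + 1.0      # avoid log(0)
    return np.log(img) - np.log(gaussian_blur(img, sigma))

# The same reflectance pattern under bright and dim illumination: after
# Retinex the two versions should be nearly identical, demonstrating the
# color/lightness independence from the scene illuminant.
rng = np.random.default_rng(2)
reflectance = 0.2 + 0.6 * rng.random((128, 128))
bright = 200.0 * reflectance
dim = 20.0 * reflectance
diff_before = np.abs(bright - dim).mean()
diff_after = np.abs(single_scale_retinex(bright) - single_scale_retinex(dim)).mean()
print(diff_before, diff_after)
```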
NASA's MISR Instrument Captures Stereo View of Mountain Fire Near Idyllwild, Calif.
Atmospheric Science Data Center
2016-09-27
... been produced. The image is best viewed with standard "red/blue" 3-D glasses with the red lens over the left eye. The image is oriented ... 2.5 to 3 miles (4 to 5 kilometers) above sea level with very light winds at this time. The image extends from about 34.8 degrees north ...
Low-Light-Level InGaAs focal plane arrays with and without illumination
NASA Astrophysics Data System (ADS)
Macdougal, Michael; Geske, Jon; Wang, Chad; Follman, David
2010-04-01
Short wavelength IR imaging using InGaAs-based FPAs is shown. Aerius demonstrates low dark current in InGaAs detector arrays with 15 μm pixel pitch. The same material is mated with a 640 × 512 CTIA-based readout integrated circuit. The resulting FPA is capable of imaging photon fluxes with wavelengths between 1 and 1.6 microns at low light levels. The mean dark current density on the FPAs is extremely low at 0.64 nA/cm² at 10 °C. Noise due to the readout can be reduced from 95 to 57 electrons by using off-chip correlated double sampling (CDS). In addition, Aerius has developed laser arrays that provide flat illumination in scenes that are normally light-starved. The illuminators have 40% wall-plug efficiency and provide speckle-free illumination, yielding artifact-free imagery compared with conventional laser illuminators.
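Correlated double sampling removes noise components common to the reset and signal samples of a pixel. A toy statistical model (noise magnitudes are hypothetical, chosen only to land near the 95 and 57 electron figures quoted above; the real mechanism operates on pixel voltages):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels = 100_000
signal = 500.0                     # true signal, in electrons

# Noise model: a per-pixel reset/offset component that is identical in
# the reset and signal samples, plus independent read noise in each sample.
reset_noise = rng.normal(0.0, 80.0, n_pixels)   # common-mode component
read1 = rng.normal(0.0, 40.0, n_pixels)         # reset-sample read noise
read2 = rng.normal(0.0, 40.0, n_pixels)         # signal-sample read noise

reset_sample = reset_noise + read1
signal_sample = signal + reset_noise + read2

raw = signal_sample                 # single sample: reset noise included
cds = signal_sample - reset_sample  # CDS: common-mode reset noise cancels

# Raw noise ~ sqrt(80^2 + 40^2) ~ 89 e-;  CDS noise ~ sqrt(2) * 40 ~ 57 e-.
print(raw.std(), cds.std())
```

CDS trades a factor of sqrt(2) more read noise for complete cancellation of the common-mode component, a net win whenever the reset/offset noise dominates.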
Sunlight-readable display technology: a dual-use case study
NASA Astrophysics Data System (ADS)
Blanchard, Randall D.
1996-05-01
This paper describes our vision of sunlight readable color display requirements, an alternate technology that offers a high level of performance, and how we implemented it for the military avionics display market. This knowledge base and product development experience was then applied with a comparable level of performance to commercial applications. The successful dual use of this technology for these two diverse markets is presented. Details of the technical commonality and a comparison of the design and performance differences are presented. A basis for specifying the required level of performance for a sunlight readable full color display is discussed. With the objective of providing a high level of image brightness and high ambient light rejection, a display architecture using collimated light is used. The resulting designs of two military cockpit display products, with contrast ratios above 20:1 in sunlight, are shown. The performance of a commercial display providing several thousand foot-lamberts of image brightness is presented.
IR CMOS: near infrared enhanced digital imaging (Presentation Recording)
NASA Astrophysics Data System (ADS)
Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani
2015-08-01
SiOnyx has demonstrated imaging at light levels below 1 mlux (moonless starlight) at video frame rates with a 720p CMOS image sensor in a compact, low-latency camera. Low light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state-of-the-art low noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast-laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near-infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size 5.6 and 10 μm; array size 720p/1.3 Mpix; frame rate 60 Hz; read noise 2 electrons/pixel; spectral sensitivity 400 to 1200 nm (with 10× QE at 1064 nm); daytime imaging in color (Bayer pattern); nighttime imaging under moonless starlight conditions; 1064 nm laser imaging in daytime out to 2 km.
The image-forming mirror in the eye of the scallop
NASA Astrophysics Data System (ADS)
Palmer, Benjamin A.; Taylor, Gavin J.; Brumfeld, Vlad; Gur, Dvir; Shemesh, Michal; Elad, Nadav; Osherov, Aya; Oron, Dan; Weiner, Steve; Addadi, Lia
2017-12-01
Scallops possess a visual system comprising up to 200 eyes, each containing a concave mirror rather than a lens to focus light. The hierarchical organization of the multilayered mirror is controlled for image formation, from the component guanine crystals at the nanoscale to the complex three-dimensional morphology at the millimeter level. The layered structure of the mirror is tuned to reflect the wavelengths of light penetrating the scallop’s habitat and is tiled with a mosaic of square guanine crystals, which reduces optical aberrations. The mirror forms images on a double-layered retina used for separately imaging the peripheral and central fields of view. The tiled, off-axis mirror of the scallop eye bears a striking resemblance to the segmented mirrors of reflecting telescopes.
NASA Astrophysics Data System (ADS)
Looper, Jared; Harrison, Melanie; Armato, Samuel G.
2016-03-01
Radiologists often compare sequential radiographs to identify areas of pathologic change; however, this process is prone to error, as human anatomy can obscure the regions of change, causing the radiologists to overlook pathology. Temporal subtraction (TS) images can provide enhanced visualization of regions of change in sequential radiographs and allow radiologists to better detect areas of change in radiographs. Not all areas of change shown in TS images, however, are actual pathology. The purpose of this study was to create a computer-aided diagnostic (CAD) system that identifies which regions of change are caused by pathology and which are caused by misregistration of the radiographs used to create the TS image. The dataset used in this study contained 120 images with 74 pathologic regions on 54 images outlined by an experienced radiologist. High and low ("light" and "dark") gray-level candidate regions were extracted from the images using gray-level thresholding. Then, sampling techniques were used to address the class imbalance problem between "true" and "false" candidate regions. Next, the datasets of light candidate regions, dark candidate regions, and the combined set of light and dark candidate regions were used as training and testing data for classifiers by using five-fold cross validation. Of the classifiers tested (support vector machines, discriminant analyses, logistic regression, and k-nearest neighbors), the support vector machine on the combined candidates using synthetic minority oversampling technique (SMOTE) performed best with an area under the receiver operating characteristic curve value of 0.85, a sensitivity of 85%, and a specificity of 84%.
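The candidate-extraction step described above is gray-level thresholding. The sketch below is a simplified stand-in for the paper's region extraction (the thresholds and the pixel-list output are illustrative assumptions; the actual system extracts connected regions, not single pixels):

```python
def extract_candidates(image, dark_thresh, light_thresh):
    """Split pixels of a grayscale image into 'dark' and 'light'
    candidate coordinates by simple gray-level thresholding."""
    dark, light = [], []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value <= dark_thresh:
                dark.append((r, c))
            elif value >= light_thresh:
                light.append((r, c))
    return dark, light

# Toy 3x3 "temporal subtraction" image with values in 0-255.
img = [[ 10, 128, 250],
       [200,   5, 130],
       [255, 120,  20]]
dark, light = extract_candidates(img, dark_thresh=30, light_thresh=220)
```

In the actual pipeline these candidates would then be balanced with SMOTE and fed to the classifiers under five-fold cross validation.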
Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.
2014-01-01
The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can substantially facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
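The geometrically multiplied branching process can be simulated directly: at each gain stage every electron independently spawns a secondary with probability p, so the mean gain over N stages is (1 + p)^N. A minimal Monte Carlo sketch (the stage count and probability are illustrative, not taken from the paper):

```python
import random

def emccd_output(n_stages=100, p=0.01, rng=random):
    """One realization of EMCCD amplification of a single electron
    through a geometric-multiplication branching register."""
    n = 1
    for _ in range(n_stages):
        # Each electron present at this stage spawns a secondary with prob p.
        n += sum(1 for _ in range(n) if rng.random() < p)
    return n

random.seed(42)
trials = [emccd_output() for _ in range(4000)]
mean_gain = sum(trials) / len(trials)
expected = 1.01 ** 100   # theoretical mean gain, about 2.70
```

The spread of `trials` around the mean is what the high-gain and Gaussian approximations model in closed form.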
Modular low-light microscope for imaging cellular bioluminescence and radioluminescence
Kim, Tae Jin; Türkcan, Silvan; Pratx, Guillem
2017-01-01
Low-light microscopy methods are receiving increased attention as new applications have emerged. One such application is to allow longitudinal imaging of light-sensitive cells with no phototoxicity and no photobleaching of fluorescent biomarkers. Another application is imaging signals that are inherently dim and undetectable using standard microscopy, such as bioluminescence, chemiluminescence, or radioluminescence. In this protocol, we provide instructions on how to build a modular low-light microscope (1-4 d) by coupling two microscope objective lenses back to back, using standard optomechanical components. We also provide directions on how to image dim signals such as radioluminescence (1-1.5 h), bioluminescence (∼30 min), and low-excitation fluorescence (∼15 min). In particular, radioluminescence microscopy is explained in detail as it is a newly developed technique, which enables the study of small-molecule transport (e.g., radiolabeled drugs, metabolic precursors, and nuclear medicine contrast agents) by single cells without perturbing endogenous biochemical processes. In this imaging technique, a scintillator crystal (e.g., CdWO4) is placed in close proximity to the radiolabeled cells, where it converts the radioactive decays into optical flashes detectable using a sensitive camera. Using the image reconstruction toolkit provided in this protocol, the flashes can be reconstructed to yield a high-resolution image of the radiotracer distribution. With appropriate timing, the three aforementioned imaging modalities may be performed together on a population of live cells, allowing the user to perform parallel functional studies of cell heterogeneity at the single-cell level. PMID:28426025
Integrated light and scanning electron microscopy of GFP-expressing cells.
Peddie, Christopher J; Liv, Nalan; Hoogenboom, Jacob P; Collinson, Lucy M
2014-01-01
Integration of light and electron microscopes provides imaging tools in which fluorescent proteins can be localized to cellular structures with a high level of precision. However, until recently, there were few methods that could deliver specimens with sufficient fluorescent signal and electron contrast for dual imaging without intermediate staining steps. Here, we report protocols that preserve green fluorescent protein (GFP) in whole cells and in ultrathin sections of resin-embedded cells, with membrane contrast for integrated imaging. Critically, GFP is maintained in a stable and active state within the vacuum of an integrated light and scanning electron microscope. For light microscopists, additional structural information gives context to fluorescent protein expression in whole cells, illustrated here by analysis of filopodia and focal adhesions in Madin Darby canine kidney cells expressing GFP-Paxillin. For electron microscopists, GFP highlights the proteins of interest within the architectural space of the cell, illustrated here by localization of the conical lipid diacylglycerol to cellular membranes. © 2014 Elsevier Inc. All rights reserved.
Development of a CCD based solar speckle imaging system
NASA Astrophysics Data System (ADS)
Nisenson, Peter; Stachnik, Robert V.; Noyes, Robert W.
1986-02-01
A program to develop software and hardware for the purpose of obtaining high angular resolution images of the solar surface is described. The program included procurement of a charge-coupled device (CCD) imaging system; extensive laboratory and remote-site testing of the camera system; development of a software package for speckle image reconstruction, which was eventually installed and tested at the Sacramento Peak Observatory; and experiments with the CCD system (coupled to an image intensifier) for low light level, narrow spectral band solar imaging.
NASA Astrophysics Data System (ADS)
Liu, Songde; Smith, Zach; Xu, Ronald X.
2016-10-01
There is a pressing need for a phantom standard to calibrate medical optical devices. However, 3D printing of tissue-simulating phantom standards is challenged by the lack of appropriate methods to characterize and reproduce surface topography and optical properties accurately. We have developed a structured light imaging system to characterize the surface topography and optical properties (absorption coefficient and reduced scattering coefficient) of 3D tissue-simulating phantoms. The system consisted of a hyperspectral light source, a digital light projector (DLP), a CMOS camera, two polarizers, a rotational stage, a translation stage, a motion controller, and a personal computer. Tissue-simulating phantoms with different structural and optical properties were characterized by the proposed imaging system and validated by a standard integrating sphere system. The experimental results showed that the proposed system was able to achieve pixel-level optical properties with a percentage error of less than 11% for the absorption coefficient and less than 7% for the reduced scattering coefficient for phantoms without surface curvature. Meanwhile, the 3D topographic profile of the phantom could be effectively reconstructed with a deviation error of less than 1%. Our study demonstrated that the proposed structured light imaging system has the potential to characterize the structural profile and optical properties of 3D tissue-simulating phantoms.
Development and field testing of a Light Aircraft Oil Surveillance System (LAOSS)
NASA Technical Reports Server (NTRS)
Burns, W.; Herz, M. J.
1976-01-01
An experimental device consisting of a conventional TV camera with a low light level photo image tube and motor driven polarized filter arrangement was constructed to provide a remote means of discriminating the presence of oil on water surfaces. This polarized light filtering system permitted a series of successive, rapid changes between the vertical and horizontal components of reflected polarized skylight and caused the oil based substances to be more easily observed and identified as a flashing image against a relatively static water surface background. This instrument was flight tested, and the results, with targets of opportunity and more systematic test site data, indicate the potential usefulness of this airborne remote sensing instrument.
[Application of Fourier transform profilometry in 3D-surface reconstruction].
Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing
2011-08-01
With the improvement of system frames and reconstruction methods in fluorescence molecular tomography (FMT), FMT technology has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D-surface profile of the experimental object as the boundary constraint for FMT reconstruction algorithms. We propose a new 3D-surface reconstruction method based on Fourier transform profilometry (FTP) under blue-purple light illumination. The slice images were reconstructed using appropriate image processing, frequency spectrum analysis, and filtering. Experimental results showed that the method properly reconstructs the 3D surface of objects with mm-level accuracy. Compared to other methods, this one is simple and fast. Besides good reconstruction quality, the proposed method can help monitor the behavior of the object during the experiment to ensure correspondence of the imaging process. Furthermore, the method uses a blue-purple light source to avoid interference with fluorescence imaging.
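FTP recovers surface height from the phase of a projected fringe pattern by isolating the carrier frequency in the spectrum. The sketch below illustrates the core phase-extraction idea with single-frequency quadrature demodulation on a synthetic fringe with constant phase; real FTP applies a windowed frequency-domain filter so the phase may vary across the field (all parameter values here are illustrative assumptions):

```python
import cmath
import math

def fringe(n_samples, carrier, amp_dc, amp_mod, phase):
    """Synthetic fringe line: I[n] = a + b*cos(2*pi*carrier*n/N + phase)."""
    return [amp_dc + amp_mod * math.cos(2 * math.pi * carrier * n / n_samples + phase)
            for n in range(n_samples)]

def demodulate_phase(signal, carrier):
    """Recover the fringe phase by projecting the signal onto the carrier
    frequency; FTP's spectrum filtering performs the same selection."""
    n_samples = len(signal)
    z = sum(v * cmath.exp(-2j * math.pi * carrier * n / n_samples)
            for n, v in enumerate(signal))
    return cmath.phase(z)

sig = fringe(64, carrier=8, amp_dc=1.0, amp_mod=0.5, phase=0.5)
recovered = demodulate_phase(sig, carrier=8)   # recovers 0.5 rad
```

With an integer carrier, the DC term and the conjugate sideband sum to zero, so the recovered phase equals the injected one to floating-point precision.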
The influence of underwater turbulence on optical phase measurements
NASA Astrophysics Data System (ADS)
Redding, Brandon; Davis, Allen; Kirkendall, Clay; Dandridge, Anthony
2016-05-01
Emerging underwater optical imaging and sensing applications rely on phase-sensitive detection to provide added functionality and improved sensitivity. However, underwater turbulence introduces spatio-temporal variations in the refractive index of water which can degrade the performance of these systems. Although the influence of turbulence on traditional, non-interferometric imaging has been investigated, its influence on the optical phase remains poorly understood. Nonetheless, a thorough understanding of the spatio-temporal dynamics of the optical phase of light passing through underwater turbulence is crucial to the design of phase-sensitive imaging and sensing systems. To address this concern, we combined underwater imaging with high-speed holography to provide a calibrated characterization of the effects of turbulence on the optical phase. By measuring the modulation transfer function of an underwater imaging system, we were able to calibrate varying levels of optical turbulence intensity using the Simple Underwater Imaging Model (SUIM). We then used high-speed holography to measure the temporal dynamics of the optical phase of light passing through varying levels of turbulence. Using this method, we measured the variance in the amplitude and phase of the beam, the temporal correlation of the optical phase, and the turbulence-induced phase noise as a function of frequency. By benchmarking the effects of varying levels of turbulence on the optical phase, this work provides a basis to evaluate the real-world potential of emerging underwater interferometric sensing modalities.
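The "temporal correlation of the optical phase" mentioned above is a normalized autocorrelation of the phase time series. A minimal sketch with a synthetic oscillating phase (the 20-sample period is an illustrative assumption, not data from the paper):

```python
import math

def autocorrelation(series, lag):
    """Normalized temporal autocorrelation of a time series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Synthetic phase record oscillating with a 20-sample period.
phase = [math.cos(2 * math.pi * n / 20) for n in range(200)]
r_zero = autocorrelation(phase, 0)    # perfectly correlated with itself: 1
r_half = autocorrelation(phase, 10)   # anti-correlated at half a period: -1
```

For real turbulence data the lag at which this function decays sets the coherence time that phase-sensitive sensors must work within.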
Development of low-SWaP and low-noise InGaAs detectors
NASA Astrophysics Data System (ADS)
Fraenkel, R.; Berkowicz, E.; Bikov, L.; Elishkov, R.; Giladi, A.; Hirsh, I.; Ilan, E.; Jakobson, C.; Kondrashov, P.; Louzon, E.; Nevo, I.; Pivnik, I.; Tuito, A.; Vasserman, S.
2017-02-01
In recent years SCD has developed InGaAs/InP technology for Short-Wave Infrared (SWIR) imaging. The first product, Cardinal 640, has a 640×512 (VGA) format at 15 μm pitch, and more than two thousand units have already been delivered to customers. Recently we have also introduced Cardinal 1280, an SXGA array with 10 μm pitch aimed at long-range, high-end platforms [1]. One of the big challenges facing SWIR technology is its proliferation to widespread low-cost and low-SWaP applications, specifically Low Light Level (LLL) and Image Intensifier (II) replacements. In order to achieve this goal we have invested and combined efforts in several design and development directions: 1. Optimization of the InGaAs pixel array, reducing the dark current below 2 fA at 20 °C in order to save TEC cooling power under harsh light and environmental conditions. 2. Design of a new "Low Noise" ROIC targeting a 15 e- noise floor and improved active imaging capabilities. 3. Design of compact, low-SWaP, and low-cost packages; in this context we have developed two types of packages: a non-hermetic package with a thermo-electric cooler (TEC) and a hermetic TEC-less ceramic package. 4. Development of efficient TEC-less algorithms for optimal imaging at both daylight and low light level conditions. The result of these combined efforts is a compact, low-SWaP detector that provides performance equivalent to a Gen III image intensifier under starlight conditions. In this paper we present results from lab and field experiments that support this claim.
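The 2 fA dark-current target translates directly into an electron rate per pixel, which is what matters relative to the 15 e- noise floor. A quick conversion sketch (the comparison exposure time is an illustrative assumption):

```python
ELECTRON_CHARGE = 1.602176634e-19   # coulombs

def dark_electrons_per_second(dark_current_amps):
    """Convert a pixel dark current to an electron rate."""
    return dark_current_amps / ELECTRON_CHARGE

# 2 fA corresponds to roughly 12,500 dark electrons per second per pixel.
rate = dark_electrons_per_second(2e-15)

# In a hypothetical 1 ms integration that is only ~12 electrons,
# comparable to the targeted 15 e- read-noise floor.
dark_in_1ms = rate * 1e-3
```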
Large-format InGaAs focal plane arrays for SWIR imaging
NASA Astrophysics Data System (ADS)
Hood, Andrew D.; MacDougal, Michael H.; Manzo, Juan; Follman, David; Geske, Jonathan C.
2012-06-01
FLIR Electro Optical Components will present its latest developments in large InGaAs focal plane arrays, which are used for low light level imaging in the short wavelength infrared (SWIR) regime, including imaging from its latest small-pitch (15 μm) focal plane arrays in VGA and High Definition (HD) formats. FLIR will present characterization of the FPA, including dark current measurements as well as the use of correlated double sampling to reduce read noise, and will show imagery as well as FPA-level characterization data.
Azlan, C A; Ng, K H; Anandan, S; Nizam, M S
2006-09-01
The illuminance level in the softcopy image viewing room is a very important factor for optimizing productivity in radiological diagnosis. In today's radiological environment, illuminance measurements are normally performed annually as part of the quality control procedure. Although the room is equipped with dimmer switches, radiologists are not able to set the illuminance level according to the standards. The aim of this study was to develop a simple real-time illuminance detector system to assist radiologists in setting an adequate illuminance level during radiological image viewing. The system indicates illuminance in a very simple visual form using light emitting diodes. By employing the device in the viewing room, the illuminance level can be monitored and adjusted effectively.
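The LED indication described above amounts to banding a lux reading against a target range. A minimal sketch; the 25-75 lx band is purely illustrative and is not taken from the paper or from any particular viewing-room standard:

```python
def illuminance_indicator(lux, low=25.0, high=75.0):
    """Map a measured illuminance to a simple LED-style indication.
    The default 25-75 lx band is a hypothetical target range."""
    if lux < low:
        return "too dark"
    if lux > high:
        return "too bright"
    return "within range"

status = illuminance_indicator(50.0)   # within range
```

In hardware, each return value would simply drive a different LED, giving the radiologist immediate feedback while adjusting the dimmer.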
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field encodes both direction and location information, and light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model represented by the two-plane parameterization, proposed by Levoy, is adopted here. Acquisition of the light field is based on microlens arrays, camera arrays, or masks. We process the light field data to synthesize light field images. Light field processing techniques include refocused rendering, synthetic aperture, and microscopic imaging. Introducing light field imaging into the THz regime makes 3D imaging more efficient than conventional THz 3D imaging technology. Its advantages over visible light field imaging include large depth of field, wide dynamic range, and true three-dimensional imaging. It has broad application prospects.
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas
2000-01-01
Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. Signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltage on the column bus (COL) before and after the reset (RST) is pulsed. Lower than kTC noise can be achieved with photodiode-type pixels by employing the "soft-reset" technique. Soft-reset refers to resetting with both drain and gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset by sub-threshold MOSFET current. However, the lowering of noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis to explain the noise behavior, show evidence of degraded performance under low-light levels, and describe new pixels that eliminate non-linearity and lag without compromising noise.
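The kTC noise referred to above is the thermal reset noise charge sqrt(kTC) on the sense-node capacitance. The sketch below evaluates it for a hypothetical 5 fF node (the capacitance is an assumption, not a value from the paper); soft reset is known to halve the noise power, i.e. kT/2C, which is a factor of 1/sqrt(2) in rms:

```python
import math

BOLTZMANN = 1.380649e-23           # J/K
ELECTRON_CHARGE = 1.602176634e-19  # C

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    """Rms reset (kTC) noise charge on a sense node, in electrons:
    q_n = sqrt(k * T * C) / q."""
    return math.sqrt(BOLTZMANN * temperature_k * capacitance_f) / ELECTRON_CHARGE

# Hypothetical 5 fF photodiode sense node at 300 K: about 28 e- rms.
hard_reset = ktc_noise_electrons(5e-15)
# Soft reset halves the noise power (kT/2C), so rms drops by sqrt(2).
soft_reset = hard_reset / math.sqrt(2.0)
```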
Photon Limited Images and Their Restoration
1976-03-01
arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and...statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis...for the analysis of image restoration. In the second part the problem of linear least-square restoration of imagery limited by photon noise is
SU-G-IeP4-06: Feasibility of External Beam Treatment Field Verification Using Cherenkov Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, P; Na, Y; Wuu, C
2016-06-15
Purpose: Cherenkov light emission has been shown to correlate with ionizing radiation (IR) dose delivery in solid tissue. In order to properly correlate Cherenkov light images with real-time dose delivery in a patient, we must account for geometric and intensity distortions arising from observation angle, as well as the effect of monitor units (MU) and field size on Cherenkov light emission. To test the feasibility of treatment field verification, we first focused on Cherenkov light emission efficiency based on MU and known field size (FS). Methods: Cherenkov light emission was captured using a PI-MAX4 intensified charge coupled device (ICCD) system (Princeton Instruments), positioned at a fixed angle of 40° relative to the beam central axis. A Varian TrueBeam linear accelerator (linac) was operated at 6 MV and 600 MU/min to deliver an anterior-posterior beam to a 5 cm thick block phantom positioned at 100 cm source-to-surface distance (SSD). FS of 10×10, 5×5, and 2×2 cm² were used. Before beam delivery, projected light field images were acquired, ensuring that geometric distortions were consistent when measuring Cherenkov field discrepancies. Cherenkov image acquisition was triggered by linac target current. 500 frames were acquired for each FS. Composite images were created through summation of frames and background subtraction. MU per image was calculated based on the linac pulse delay of 2.8 ms. Cherenkov and projected light FS were evaluated using ImageJ software. Results: Mean Cherenkov FS discrepancies compared to the light field were <0.5 cm at 5.6, 2.8, and 8.6 MU for the 10×10, 5×5, and 2×2 cm² FS, respectively. Discrepancies were reduced with increasing field size and MU. We predict a minimum of 100 frames is needed for reliable confirmation of delivered FS.
Conclusion: Current discrepancies in Cherenkov field sizes are within a usable range to confirm treatment delivery in standard and respiratory-gated clinical scenarios at MU levels appropriate to standard MLC position segments.
Complete erasing of ghost images on computed radiography plates and role of deeply trapped electrons
NASA Astrophysics Data System (ADS)
Ohuchi-Yoshida, Hiroko; Kondo, Yasuhiro
2011-12-01
Computed radiography (CR) plates made of europium-doped Ba(Sr)FBr(I) were simultaneously exposed to filtered ultraviolet light and visible light in order to erase ghost images, i.e., latent images that are unerasable with visible light (LIunVL) and ones that reappear, which are particularly observed in plates irradiated with a high dose and/or cumulatively over-irradiated. CR samples showing LIunVLs were prepared by irradiating three different types of CR plates (Agfa ADC MD10, Kodak Directview Mammo EHRM2, and Fuji ST-VI) with 50 kV X-ray beams in the dose range 8.1 mGy-8.0 Gy. After the sixth round of simultaneous 6 h exposures to filtered ultraviolet light and visible light, all the LIunVLs in the three types of CR plates were erased to the same level as in an unirradiated plate, and no latent images reappeared after storage at 0 °C for 14 days. With conventional exposure to visible light, LIunVLs consistently remained in all types of CR plates irradiated with higher doses of X-rays, and latent images reappeared in the Agfa M10 plates after storage at 0 °C. Electrons trapped in deep centers cause LIunVLs, and they can be erased by simultaneous exposure to filtered ultraviolet light and visible light. To study electrons in deep centers, the absorption spectra were examined in all types of irradiated CR plates by using polychromatic ultraviolet light from a deep-ultraviolet lamp. It was found that deep centers showed a dominant peak in the absorption spectra at around 324 nm for the Agfa M10 and Kodak EHRM2 plates, and at around 320 nm for the Fuji ST-VI plate, in each case followed by a few small peaks. The peak heights were dose-dependent for all types of CR samples, suggesting that the number of electrons trapped in deep centers increases with the irradiation dose.
Wang, Le; Zong, Shenfei; Wang, Zhuyuan; Lu, Ju; Chen, Chen; Zhang, Ruohu; Cui, Yiping
2018-07-13
Single molecule localization microscopy (SMLM) is a powerful tool for imaging biological targets at the nanoscale. In this report, we present SMLM imaging of telomeres and centromeres using fluorescence in situ hybridization (FISH). The FISH probes were fabricated by decorating CdSSe/ZnS quantum dots (QDs) with telomere or centromere complementary DNA strands. SMLM imaging experiments using commercially available peptide nucleic acid (PNA) probes labeled with organic fluorophores were also conducted to demonstrate the advantages of using QD FISH probes. Compared with the PNA probes, the QD probes have the following merits. First, the fluorescence blinking of QDs can be realized in aqueous solution or PBS buffer without thiol, which is a key buffer component for organic fluorophores' blinking. Second, fluorescence blinking of the QD probe needs only one excitation light (i.e., 405 nm), while fluorescence blinking of organic fluorophores usually requires two illumination lights, that is, the activation light (i.e., 405 nm) and the imaging light. Third, the high quantum yield, multiple switching cycles, and good optical stability make the QDs more suitable for long-term imaging. The localization precision achieved in the telomere and centromere imaging experiments is about 30 nm, which is far beyond the diffraction limit. SMLM has enabled new insights into telomeres and centromeres at the molecular level, and it may even be possible to determine telomere length, making SMLM a potential technique for telomere-related investigations.
NASA Astrophysics Data System (ADS)
Sandri, Paolo; Fineschi, Silvano; Romoli, Marco; Taccola, Matteo; Landini, Federico; Da Deppo, Vania; Naletto, Giampiero; Morea, Danilo; Naughton, Denis; Antonucci, Ester
2018-01-01
The modeling of the scattering phenomena for the multielement telescope for imaging and spectroscopy (METIS) coronagraph on board the European Space Agency Solar Orbiter is reported. METIS is an inverted-occultation coronagraph including two optical paths: broadband imaging of the full corona in linearly polarized visible light (580 to 640 nm) and narrow-band imaging of the full corona in the ultraviolet Lyman-α (121.6 nm). METIS will have the unique opportunity of observing the solar outer atmosphere as close to the Sun as 0.28 AU and from up to 35 deg out of the ecliptic. The stray-light simulations performed on the UV and VL channels of METIS, analyzing the contributions of surface microroughness, particulate contamination, cosmetic defects, and diffraction, are reported. The results obtained with the nonsequential modality of Zemax OpticStudio are compared with two different approaches: a Monte Carlo ray trace with the Advanced Systems Analysis Program (ASAP®) and a semianalytical model. The results obtained with the three independently developed approaches are in close agreement and show compliance with the stray-light level requirement for both the UV and VL channels.
Wide-field fundus imaging with trans-palpebral illumination.
Toslak, Devrim; Thapa, Damber; Chen, Yanjun; Erol, Muhammet Kazim; Paul Chan, R V; Yao, Xincheng
2017-01-28
In conventional fundus imaging devices, transpupillary illumination is used for illuminating the inside of the eye. In this method, the illumination light is directed into the posterior segment of the eye through the cornea and passes the pupillary area. As a result of sharing the pupillary area between the illumination beam and the observation path, pupil dilation is typically necessary for wide-angle fundus examination, and the field of view is inherently limited. An alternative approach is to deliver light through the sclera. It is possible to image a wider retinal area with trans-scleral illumination. However, the requirement of physical contact between the illumination probe and the sclera is a drawback of this method. We report here trans-palpebral illumination as a new method to deliver the light through the upper eyelid (palpebra). For this study, we used a 1.5 mm diameter fiber with a warm white LED light source. To illuminate the inside of the eye, the fiber illuminator was placed at the location corresponding to the pars plana region. A custom-designed optical system was attached to a digital camera for retinal imaging. The optical system contained a 90 diopter ophthalmic lens and a 25 diopter relay lens. The ophthalmic lens collected light coming from the posterior of the eye and formed an aerial image between the ophthalmic and relay lenses. The aerial image was captured by the camera through the relay lens. An adequate illumination level was obtained to capture wide-angle fundus images within ocular safety limits, defined by the ISO 15004-2:2007 standard. This novel trans-palpebral illumination approach enables wide-angle fundus photography without eyeball contact and pupil dilation.
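The lens powers quoted above convert directly to focal lengths via the thin-lens relation f = 1/P. A quick sketch (thin-lens approximation; the real system geometry involves lens spacing not stated in the abstract):

```python
def focal_length_mm(diopters):
    """Focal length of a thin lens from its optical power in diopters."""
    return 1000.0 / diopters

# The 90 D ophthalmic lens and 25 D relay lens correspond to focal
# lengths of about 11.1 mm and 40 mm, respectively.
f_ophthalmic = focal_length_mm(90.0)
f_relay = focal_length_mm(25.0)
```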
Development of a PET/Cerenkov-light hybrid imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Hamamura, Fuka; Kato, Katsuhiko
2014-09-15
Purpose: Cerenkov-light imaging is a new molecular imaging technology that detects visible photons from high-speed electrons using a high sensitivity optical camera. However, the merit of Cerenkov-light imaging remains unclear. If a PET/Cerenkov-light hybrid imaging system were developed, the merit of Cerenkov-light imaging would be clarified by directly comparing these two imaging modalities. Methods: The authors developed and tested a PET/Cerenkov-light hybrid imaging system that consists of a dual-head PET system, a reflection mirror located above the subject, and a high sensitivity charge coupled device (CCD) camera. The authors installed these systems inside a black box for imaging the Cerenkov-light.more » The dual-head PET system employed a 1.2 × 1.2 × 10 mm{sup 3} GSO arranged in a 33 × 33 matrix that was optically coupled to a position sensitive photomultiplier tube to form a GSO block detector. The authors arranged two GSO block detectors 10 cm apart and positioned the subject between them. The Cerenkov-light above the subject is reflected by the mirror and changes its direction to the side of the PET system and is imaged by the high sensitivity CCD camera. Results: The dual-head PET system had a spatial resolution of ∼1.2 mm FWHM and sensitivity of ∼0.31% at the center of the FOV. The Cerenkov-light imaging system's spatial resolution was ∼275μm for a {sup 22}Na point source. Using the combined PET/Cerenkov-light hybrid imaging system, the authors successfully obtained fused images from simultaneously acquired images. The image distributions are sometimes different due to the light transmission and absorption in the body of the subject in the Cerenkov-light images. In simultaneous imaging of rat, the authors found that {sup 18}F-FDG accumulation was observed mainly in the Harderian gland on the PET image, while the distribution of Cerenkov-light was observed in the eyes. 
Conclusions: The authors conclude that their PET/Cerenkov-light hybrid imaging system is useful for evaluating the merits and limitations of Cerenkov-light imaging in molecular imaging research.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager to achieve equivalent light field reconstruction quality.
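The PC-basis measurement idea above can be sketched numerically: learn a principal-component basis from training data, "capture" only k projections of a new scene, and reconstruct linearly. The dimensions, synthetic data, and reconstruction are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

# Toy sketch of compressive capture with a principal-component (PC) basis:
# k photon-efficient projections replace d per-sample measurements.
rng = np.random.default_rng(0)
n, d, k = 500, 64, 16

# Correlated training data standing in for vectorized 4D light fields.
mixing = rng.normal(size=(8, d))
train = rng.normal(size=(n, 8)) @ mixing + 0.01 * rng.normal(size=(n, d))

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pc_basis = vt[:k]                                # top-k PC measurement vectors

# "Capture" a new scene with only k projections, then reconstruct linearly.
scene = rng.normal(size=8) @ mixing + 0.01 * rng.normal(size=d)
measurements = pc_basis @ (scene - mean)
estimate = mean + pc_basis.T @ measurements

rel_err = np.linalg.norm(estimate - scene) / np.linalg.norm(scene)
```

Because natural-scene light fields are highly correlated, the top few components carry most of the energy, which is why far fewer measurements suffice.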
High resolution Cerenkov light imaging of induced positron distribution in proton therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Fujii, Kento; Morishita, Yuki
2014-11-01
Purpose: In proton therapy, imaging of the positron distribution produced by fragmentation during or soon after proton irradiation is a useful method to monitor the proton range. Although positron emission tomography (PET) is typically used for this imaging, its spatial resolution is limited. Cerenkov-light imaging is a new molecular imaging technology that detects the visible photons produced by high-speed electrons using a high-sensitivity optical camera. Because its inherent spatial resolution is much higher than that of PET, more precise information on the proton-induced positron distribution can be measured with Cerenkov-light imaging. For this purpose, the authors conducted Cerenkov-light imaging of the induced positron distribution in proton therapy. Methods: First, the authors evaluated the spatial resolution of their Cerenkov-light imaging system with a ²²Na point source in the actual imaging setup. Then transparent acrylic phantoms (100 × 100 × 100 mm³) were irradiated with two different proton energies using a spot-scanning proton therapy system. Cerenkov-light imaging of each phantom was conducted using a high-sensitivity electron-multiplying charge-coupled device (EM-CCD) camera. Results: The Cerenkov light's spatial resolution for the setup was 0.76 ± 0.6 mm FWHM. The authors obtained high-resolution Cerenkov-light images of the positron distributions in the phantoms for two different proton energies and made fused images of the reference images and the Cerenkov-light images. The depths of the positron distribution in the phantoms derived from the Cerenkov-light images were almost identical to the simulation results. The decay curves derived from regions of interest (ROIs) set on the Cerenkov-light images showed that Cerenkov-light images can be used to estimate the half-lives of the radionuclide components of the positron emitters.
The authors conclude that Cerenkov-light imaging of proton-induced positrons is promising for proton therapy.
Polarimetric imaging of retinal disease by polarization sensitive SLO
NASA Astrophysics Data System (ADS)
Miura, Masahiro; Elsner, Ann E.; Iwasaki, Takuya; Goto, Hiroshi
2015-03-01
Polarimetric imaging is used to evaluate different features of macular disease. Polarimetry images were recorded using a commercially available polarization-sensitive scanning laser ophthalmoscope at 780 nm (PS-SLO, GDx-N). From the PS-SLO data sets, we computed the average reflectance image, the depolarized light image, and the ratio-depolarized light image. The average reflectance image is the grand mean of all input polarization states. The depolarized light image is the minimum of the crossed channel. The ratio-depolarized light image is a ratio between the average reflectance image and the depolarized light image, used to compensate for variations in brightness. Each polarimetry image was compared with the autofluorescence image at 800 nm (NIR-AF) and the autofluorescence image at 500 nm (SW-AF). We evaluated four eyes with geographic atrophy in age-related macular degeneration, one eye with retinal pigment epithelium hyperplasia, and two eyes with chronic central serous chorioretinopathy. Polarization analysis can selectively emphasize different features of the retina. Findings in the ratio-depolarized light images showed similarities and differences with NIR-AF images. Areas of hyper-AF in NIR-AF images appeared as high-intensity areas in the ratio-depolarized light image, representing melanin accumulation. Areas of hypo-AF in NIR-AF images appeared as low-intensity areas in the ratio-depolarized light images, representing melanin loss. Drusen appeared as high-intensity areas in the ratio-depolarized light image, but NIR-AF images were insensitive to the presence of drusen. SW-AF images showed completely different features from the ratio-depolarized images. Polarization-sensitive imaging is an effective tool for non-invasive assessment of macular disease.
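The three derived images above can be sketched as simple array reductions. The array names, shapes, and the orientation of the ratio are assumptions for illustration; the GDx-N data format and exact ratio definition are not reproduced here.

```python
import numpy as np

# Synthetic stand-ins for SLO frame stacks over input polarization states.
rng = np.random.default_rng(1)
parallel = rng.uniform(50, 200, size=(20, 64, 64))   # co-polarized frames
crossed = rng.uniform(5, 40, size=(20, 64, 64))      # crossed-channel frames

avg_reflectance = parallel.mean(axis=0)   # grand mean of all polarization states
depolarized = crossed.min(axis=0)         # minimum of the crossed channel

# Ratio image: depolarized light normalized by average reflectance, to
# compensate for overall brightness variation (orientation assumed here).
ratio_depolarized = depolarized / (avg_reflectance + 1e-6)
```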
A Wide Dynamic Range Tapped Linear Array Image Sensor
NASA Astrophysics Data System (ADS)
Washkurak, William D.; Chamberlain, Savvas G.; Prince, N. Daryl
1988-08-01
Detectors for acousto-optic signal processing applications require fast transient response as well as wide dynamic range. There are two major choices of detectors: conductive or integration mode. Conductive mode detectors have an initial transient period before they reach their equilibrium state. The duration of this period depends on light level as well as detector capacitance. At low light levels a conductive mode detector is very slow; response time is typically on the order of milliseconds. Generally, to obtain fast transient response an integrating mode detector is preferred. With integrating mode detectors, the dynamic range is determined by the charge storage capability of the transport shift registers and the noise level of the image sensor. The conventional method used to improve dynamic range is to increase the shift register charge storage capability. To achieve a dynamic range of fifty thousand, assuming two hundred noise-equivalent electrons, a charge storage capability of ten million electrons would be required. To accommodate this amount of charge, unrealistic shift register widths would be required. Therefore, with an integrating mode detector it is difficult to achieve a dynamic range of over four orders of magnitude of input light intensity. Another alternative is to solve the problem at the photodetector and not the shift register. DALSA's wide dynamic range detector utilizes an optimized, ion-implant-doped, profiled MOSFET photodetector specifically designed for wide dynamic range. When this new detector operates at high speed and at low light levels, the photons are collected and stored in an integrating fashion. However, at bright light levels, where transient periods are short, the detector switches into a conductive mode. The light intensity is logarithmically compressed into small charge packets, easily carried by the CCD shift register.
As a result of the logarithmic conversion, dynamic ranges of over six orders of magnitude are obtained. To achieve the short integration times necessary in acousto-optic applications, the wide dynamic range detector has been implemented in a tapped array architecture with eight outputs and 256 photoelements. Operating each output at 16 MHz yields detector integration times of 2 microseconds. Buried-channel, two-phase CCD shift register technology is utilized to minimize image sensor noise, improve video output rates, and increase ease of operation.
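The dynamic-range arithmetic above can be made explicit: with ~200 noise-equivalent electrons, a linear integrating detector needs a 10-million-electron well for a dynamic range of 50,000 (~4.7 decades). The per-decade packet budget below is a hypothetical number, used only to show why logarithmic compression makes six decades feasible.

```python
import math

noise_electrons = 200          # noise floor from the abstract
linear_well = 10_000_000       # electrons needed linearly for DR = 50,000

linear_dr = linear_well / noise_electrons            # 50,000
decades_linear = math.log10(linear_dr)               # ~4.7 decades

# With logarithmic compression the packet grows with log(intensity), so the
# charge budget scales with the number of decades, not the intensity ratio.
decades_target = 6                                   # goal from the abstract
electrons_per_decade = 50_000                        # assumed packet budget
packet = decades_target * electrons_per_decade       # 300,000 e-, CCD-friendly
```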
High visibility temporal ghost imaging with classical light
NASA Astrophysics Data System (ADS)
Liu, Jianbin; Wang, Jingjing; Chen, Hui; Zheng, Huaibin; Liu, Yanyan; Zhou, Yu; Li, Fu-li; Xu, Zhuo
2018-03-01
High-visibility temporal ghost imaging with classical light is possible when superbunching pseudothermal light is employed. In numerical simulation, the visibility of temporal ghost imaging with pseudothermal light, equaling (4.7 ± 0.2)%, can be increased to (75 ± 8)% in the same scheme with superbunching pseudothermal light. The reasons why the retrieved images differ for superbunching pseudothermal light with different values of the degree of second-order coherence are discussed in detail. It is concluded that a high-visibility, high-quality temporal ghost image can be obtained by collecting a sufficient number of data points. The results are helpful for understanding the difference between ghost imaging with classical light and with entangled photon pairs. Superbunching pseudothermal light can be employed to improve image quality in ghost imaging applications.
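A minimal numerical sketch of the visibility effect: correlating a time-integrating "bucket" signal with the reference intensity retrieves the temporal object, and stronger intensity fluctuations raise the visibility. Squaring the pseudothermal intensity is used here as a simple stand-in for superbunching statistics (a higher degree of second-order coherence), not the paper's actual light source or numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_t = 20000, 32

t = np.arange(n_t)
obj = ((t > 8) & (t < 24)).astype(float)        # temporal object (transmission)

def ghost_image(power):
    # Pseudothermal intensity from a complex Gaussian field; power=1 gives
    # ordinary thermal statistics, power=2 a "superbunched" stand-in.
    field = rng.normal(size=(n_trials, n_t)) + 1j * rng.normal(size=(n_trials, n_t))
    inten = (np.abs(field) ** 2) ** power
    bucket = inten @ obj                        # time-integrating bucket detector
    return (bucket[:, None] * inten).mean(axis=0)   # intensity correlation

def visibility(g):
    return (g.max() - g.min()) / (g.max() + g.min())

v_thermal = visibility(ghost_image(1))
v_super = visibility(ghost_image(2))
```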
Method of fabricating a 3-dimensional tool master
Bonivert, William D.; Hachman, John T.
2002-01-01
The invention is a method for the fabrication of an imprint tool master. The process begins with a metallic substrate. A layer of photoresist is placed onto the metallic substrate and an image pattern mask is then aligned to the substrate. The mask pattern has opaque portions that block exposure light and "open" or transparent portions that transmit exposure light. The photoresist layer is then exposed to light transmitted through the "open" portions of the first image pattern mask, and the mask is then removed. A second layer of photoresist can then be placed onto the first photoresist layer, and a second image pattern mask may be placed on the second layer of photoresist. The second layer of photoresist is exposed to light, as before, and the second mask removed. The photoresist layers are developed simultaneously to produce a multi-level master mandrel upon which a conductive film is formed. A tool master can then be formed on the conductive film, and an imprint tool is produced from the tool master. In one embodiment, nickel is electroplated onto the tool master to produce a three-dimensional imprint tool.
Smart Image Enhancement Process
NASA Technical Reports Server (NTRS)
Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)
2012-01-01
Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated with it. When the second enhanced image also has a poor contrast/lightness score, it is enhanced to generate a third enhanced image. A sharpness measure is then computed for one image selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated with it, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
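The decision flow above can be sketched as runnable code. The metric formulas, thresholds, and the enhancement and sharpening operators below are all assumptions for illustration; the patent's exact measures are not published here.

```python
import numpy as np

def contrast(img):
    return img.std() / 255.0

def lightness(img):
    return img.mean() / 255.0

def score(img):                          # merged contrast/lightness score (assumed)
    return 0.5 * contrast(img) + 0.5 * lightness(img)

def enhance(img):                        # placeholder global contrast stretch
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) * 255.0 / max(hi - lo, 1.0)

def sharpen(img):                        # simple 3x3 unsharp mask
    f = img.astype(float)
    blur = sum(np.roll(np.roll(f, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return np.clip(f + (f - blur), 0.0, 255.0)

def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

def smart_enhance(img, turbid=0.15, good=0.35, sharp=2.0):
    if score(img) < turbid:              # turbid: enhance once
        img = enhance(img)
    elif score(img) < good:              # non-turbid with a poor score
        img = enhance(img)
        if score(img) < good:            # still poor: enhance again
            img = enhance(img)
    if sharpness(img) < sharp:           # final sharpness gate
        img = sharpen(img)
    return img
```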
Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie
2017-03-01
Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all cerebrovascular patterns, including arteries and capillaries, filter-based methods are often used to segment vessels. However, designing accurate and robust vessel segmentation algorithms remains challenging due to the variety and complexity of the images, especially for cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of 32 × 32 pixels from each acquired brain vessel image as the training data set fed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used to train the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with dense vessels, nonuniform gray levels, and large-scale contrast regions.
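The training-data preparation described above (patches labeled by their center pixel) can be sketched as follows. The synthetic image, mask, and stride are placeholders, not the LSFM data or the paper's sampling scheme.

```python
import numpy as np

def extract_patches(image, mask, patch=32, stride=8):
    """Cut patch-by-patch windows; label each by its center pixel in the mask."""
    half = patch // 2
    patches, labels = [], []
    h, w = image.shape
    for r in range(half, h - half, stride):
        for c in range(half, w - half, stride):
            patches.append(image[r - half:r + half, c - half:c + half])
            labels.append(int(mask[r, c]))      # center-pixel label
    return np.stack(patches), np.array(labels)

rng = np.random.default_rng(3)
img = rng.random((128, 128)).astype(np.float32)
vessel_mask = np.zeros((128, 128), dtype=bool)
vessel_mask[60:68, :] = True                    # synthetic horizontal "vessel"

patches, labels = extract_patches(img, vessel_mask)
```

Each (patch, label) pair would then be fed to the three-conv-plus-one-FC classifier described in the abstract.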
Exposure of tropical ecosystems to artificial light at night: Brazil as a case study.
Freitas, Juliana Ribeirão de; Bennie, Jon; Mantovani, Waldir; Gaston, Kevin J
2017-01-01
Artificial nighttime lighting from streetlights and other sources has a broad range of biological effects. Understanding the spatial and temporal levels and patterns of this lighting is a key step in determining the severity of adverse effects on different ecosystems, vegetation, and habitat types. Few such analyses have been conducted, particularly for regions with high biodiversity, including the tropics. We used an intercalibrated version of the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) images of stable nighttime lights to determine what proportion of original and current Brazilian vegetation types are experiencing measurable levels of artificial light and how this has changed in recent years. The percentage area affected by both detectable light and increases in brightness ranged between 0 and 35% for native vegetation types, and between 0 and 25% for current vegetation (i.e. including agriculture). The most heavily affected areas encompassed terrestrial coastal vegetation types (restingas and mangroves), Semideciduous Seasonal Forest, and Mixed Ombrophilous Forest. The existing small remnants of Lowland Deciduous and Semideciduous Seasonal Forests and of Campinarana had the lowest exposure levels to artificial light. Light pollution has not often been investigated in developing countries but our data show that it is an environmental concern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teymurazyan, A.; Rowlands, J. A.; Thunder Bay Regional Research Institute
2014-04-15
Purpose: Electronic portal imaging devices (EPIDs) have been widely used in radiation therapy and are still needed on linear accelerators (Linacs) equipped with kilovoltage cone beam CT (kV-CBCT) or MRI systems. Our aim is to develop a new high-quantum-efficiency (QE) Čerenkov portal imaging device (CPID) that is quantum-noise limited at dose levels corresponding to a single Linac pulse. Methods: Recently a new concept of CPID for MV x-ray imaging in radiation therapy was introduced. It relies on the Čerenkov effect for x-ray detection. The proposed design consisted of a matrix of optical fibers aligned with the incident x-rays and coupled to an active matrix flat panel imager (AMFPI) for image readout. A weakness of such a design is that too few Čerenkov light photons reach the AMFPI for each incident x-ray, so an AMFPI with avalanche gain is required to overcome the readout noise for portal imaging applications. In this work the authors propose to replace the optical fibers in the CPID with light guides without a cladding layer that are suspended in air. The air between the light guides takes on the role of the cladding layer found in a regular optical fiber. Since air has a significantly lower refractive index (∼1 versus 1.38 in a typical cladding layer), a much superior light collection efficiency is achieved. Results: A Monte Carlo simulation of the new design was conducted to investigate its feasibility. Detector quantities such as quantum efficiency (QE), spatial resolution (MTF), and frequency-dependent detective quantum efficiency (DQE) were evaluated. The detector signal and the quantum noise were compared to the readout noise. Conclusions: Our studies show that the modified CPID has a QE and DQE more than an order of magnitude greater than those of current clinical systems and yet a spatial resolution similar to that of current low-QE flat-panel-based EPIDs.
Furthermore, it was demonstrated that the new CPID does not require avalanche gain in the AMFPI and is quantum-noise limited at dose levels corresponding to a single Linac pulse.
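The cladding-index argument above can be quantified with simple total-internal-reflection geometry: the fraction of isotropically emitted photons trapped along one direction of a light guide is (1 − n_clad/n_core)/2. The abstract gives the cladding indices (~1 for air, 1.38 for a typical polymer); the core index below (acrylic, ~1.49) is an assumption.

```python
# Fraction of isotropically emitted Cherenkov photons trapped by total
# internal reflection, for air vs. polymer cladding.
def trapped_fraction(n_core, n_clad):
    # Rays within (90 deg - critical angle) of the guide axis are trapped;
    # for one propagation direction the solid-angle fraction is
    # (1 - cos(theta_max))/2 with cos(theta_max) = n_clad/n_core.
    cos_theta = n_clad / n_core
    return (1.0 - cos_theta) / 2.0

n_core = 1.49                      # assumed acrylic core index
air = trapped_fraction(n_core, 1.00)
polymer = trapped_fraction(n_core, 1.38)
gain = air / polymer               # collection advantage of air cladding
```

Under these assumptions the air-clad guide collects roughly 4-5 times more light, consistent with the "much superior light collection efficiency" claimed above.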
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform one. Using the proposed algorithms, an essentially flat ("plane") image was obtained in which the defective area has a lower gray level than the surrounding plane. The defective areas can then be easily extracted with a global threshold value. Experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can also be applied to other spherical fruits.
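The correction-then-threshold idea can be sketched as follows: fit a smooth surface to the image, divide it out so the background becomes a flat plane near 1.0, then pull out the defect with one global threshold. A quadratic surface stands in for the paper's B-spline lighting model, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
h = w = 64
y, x = np.mgrid[0:h, 0:w].astype(float)

# Synthetic spherical-falloff lighting over a bright fruit with a dark defect.
lighting = 1.0 - 0.3 * ((x - 32) ** 2 + (y - 32) ** 2) / 32.0 ** 2
scene = np.full((h, w), 200.0)
scene[20:28, 20:28] = 120.0                       # darker defective area
image = scene * lighting                          # uneven illumination

# Least-squares fit of a smooth (quadratic) surface to the image.
A = np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2],
             axis=-1).reshape(-1, 6)
coef, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
flat = image / (A @ coef).reshape(h, w)           # background is now ~1.0

defect = flat < 0.8                               # single global threshold
```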
Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karger, A.E.; Weiss, R.; Gesteland, R.F.
A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with an oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) in one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 × 10⁻¹⁸ mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images. 39 refs., 9 figs.
Image analysis applied to luminescence microscopy
NASA Astrophysics Data System (ADS)
Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard
1998-04-01
We have developed a novel approach to study luminescent light emission during the migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is captured directly by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on the activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and exposure times of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated under these experimental conditions. To estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensities emitted by the cells. Kinetic changes of the luminescent sources, and the distance and speed of migration, were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology for quantifying kinetic changes in luminescence microscopy.
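The orientation of an oblong luminescent spot can also be estimated from second-order image moments; the sketch below is an illustrative stand-in for the skeleton-based orientation step described above, on a synthetic spot.

```python
import numpy as np

# Orientation (degrees) of an oblong spot from intensity-weighted moments.
def spot_orientation_deg(img):
    ys, xs = np.nonzero(img > 0)
    w = img[ys, xs].astype(float)
    xm, ym = np.average(xs, weights=w), np.average(ys, weights=w)
    mxx = np.average((xs - xm) ** 2, weights=w)
    myy = np.average((ys - ym) ** 2, weights=w)
    mxy = np.average((xs - xm) * (ys - ym), weights=w)
    return 0.5 * np.degrees(np.arctan2(2.0 * mxy, mxx - myy))

# Synthetic oblong spot along the image diagonal (~45 degrees).
img = np.zeros((64, 64))
for i in range(40):
    img[10 + i, 10 + i] = 1.0

angle = spot_orientation_deg(img)
```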
Pinhas, Alexander; Dubow, Michael; Shah, Nishit; Chui, Toco Y.; Scoles, Drew; Sulai, Yusufu N.; Weitz, Rishard; Walsh, Joseph B.; Carroll, Joseph; Dubra, Alfredo; Rosen, Richard B.
2013-01-01
The adaptive optics scanning light ophthalmoscope (AOSLO) allows visualization of microscopic structures of the human retina in vivo. In this work, we demonstrate its application in combination with oral and intravenous (IV) fluorescein angiography (FA) to the in vivo visualization of the human retinal microvasculature. Ten healthy subjects ages 20 to 38 years were imaged using oral (7 and/or 20 mg/kg) and/or IV (500 mg) fluorescein. In agreement with current literature, there were no adverse effects among the patients receiving oral fluorescein while one patient receiving IV fluorescein experienced some nausea and heaving. We determined that all retinal capillary beds can be imaged using clinically accepted fluorescein dosages and safe light levels according to the ANSI Z136.1-2000 maximum permissible exposure. As expected, the 20 mg/kg oral dose showed higher image intensity for a longer period of time than did the 7 mg/kg oral and the 500 mg IV doses. The increased resolution of AOSLO FA, compared to conventional FA, offers great opportunity for studying physiological and pathological vascular processes. PMID:24009994
An Indium Gallium Arsenide Visible/SWIR Focal Plane Array for Low Light Level Imaging
1999-08-01
1.0 INTRODUCTION Military uses for the long-wave infrared ( LWIR ) and mid...applications.1,2 There are many military imaging applications becoming apparent in the SWIR band that are not possible in the MWIR or LWIR . Some of the...image is of the raw, uncorrected video output. The dark current has not been subtracted nor has any gain nonuniformity been corrected. In the image of
Research study on stellar X-ray imaging experiment, volume 1
NASA Technical Reports Server (NTRS)
Wilson, H. H.; Vanspeybroeck, L. P.
1972-01-01
The use of microchannel plates as focal plane readout devices and the evaluation of mirrors for X-ray telescopes applied to stellar X-ray imaging is discussed. The microchannel plate outputs were either imaged on a phosphor screen which was viewed by a low light level vidicon or on a wire array which was read out by digitally processing the output of a charge division network attached to the wires. A service life test which was conducted on two image intensifiers is described.
A versatile clearing agent for multi-modal brain imaging
Costantini, Irene; Ghobril, Jean-Pierre; Di Giovanna, Antonino Paolo; Mascaro, Anna Letizia Allegra; Silvestri, Ludovico; Müllenbroich, Marie Caroline; Onofri, Leonardo; Conti, Valerio; Vanzi, Francesco; Sacconi, Leonardo; Guerrini, Renzo; Markram, Henry; Iannello, Giulio; Pavone, Francesco Saverio
2015-01-01
Extensive mapping of neuronal connections in the central nervous system requires high-throughput µm-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multi-modal optical techniques. Here, we introduce a versatile brain clearing agent (2,2′-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and has low viscosity and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at the sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy; and in translational research by imaging immunostained human dysplastic brain tissue. PMID:25950610
An abuttable CCD imager for visible and X-ray focal plane arrays
NASA Technical Reports Server (NTRS)
Burke, Barry E.; Mountain, Robert W.; Harrison, David C.; Bautz, Marshall W.; Doty, John P.
1991-01-01
A frame-transfer silicon charge-coupled-device (CCD) imager has been developed that can be closely abutted to other imagers on three sides of the imaging array. It is intended for use in multichip arrays. The device has 420 x 420 pixels in the imaging and frame-store regions and is constructed using a three-phase triple-polysilicon process. Particular emphasis has been placed on achieving low-noise charge detection for low-light-level imaging in the visible and maximum energy resolution for X-ray spectroscopic applications. Noise levels of 6 electrons at 1-MHz and less than 3 electrons at 100-kHz data rates have been achieved. Imagers have been fabricated on 1000-Ohm-cm material to maximize quantum efficiency and minimize split events in the soft X-ray regime.
Scanned Image Projection System Employing Intermediate Image Plane
NASA Technical Reports Server (NTRS)
DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)
2014-01-01
In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.
2003-07-25
NASA's Galaxy Evolution Explorer photographed this ultraviolet color blowup of the Groth Deep Image on June 22 and June 23, 2003. Hundreds of galaxies are detected in this portion of the image, and the faint red galaxies are believed to be 6 billion light-years away. The white boxes show the locations of these distant galaxies, of which more than 100 can be detected in this image. NASA astronomers expect to detect 10,000 such galaxies after extrapolating to the full image at a deeper exposure level. http://photojournal.jpl.nasa.gov/catalog/PIA04626
Testing optimum viewing conditions for mammographic image displays.
Waynant, R W; Chakrabarti, K; Kaczmarek, R A; Dagenais, I
1999-05-01
The viewbox luminance and viewing room light level are important parameters in medical film display, but they have received little attention. Spatial variations and excessive room illumination can mask a real signal or create the false perception of a signal. This presentation examines how scotopic light sources and dark-adapted radiologists may identify more real disease.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which is captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree with minimal loss, without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes the image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and the ability to reconstruct the 3D models more accurately.
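The first two stages of the pipeline above can be sketched directly: one level of a 2D DWT splitting the image into LL/HL/LH/HH sub-bands, then a DCT on the LL band yielding the DC term and AC coefficients. A Haar wavelet and a matrix DCT-II are simple stand-ins here; the paper's exact DWT filters are not specified.

```python
import numpy as np

def haar_dwt2(img):
    a = (img[0::2] + img[1::2]) / 2.0            # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0            # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0         # low-low sub-band
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def dct2(block):
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] /= np.sqrt(2.0)                         # orthonormal DCT-II matrix
    return c @ block @ c.T

img = np.arange(64, dtype=float).reshape(8, 8)
ll, hl, lh, hh = haar_dwt2(img)
coeffs = dct2(ll)           # coeffs[0, 0] is the DC term; the rest are AC
```

In the paper's scheme the same two steps are then repeated on the DC-matrix, while the AC coefficients go to the Minimize-Matrix-Size algorithm.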
Guiselini, Monalisa Jacob; Deana, Alessandro Melo; de Fátima Teixeira da Silva, Daniela; Koshoji, Nelson Hideyoshi; Mesquita-Ferrari, Raquel Agnelli; do Vale, Katia Llanos; Mascaro, Marcelo Betti; de Moraes, Simone Aleksandra; Bussadori, Sandra Kalil; Fernandes, Kristianne Porta Santos
2017-06-01
Bone tissue anatomy, density and porosity vary among subjects in different phases of life and even within areas of a single specimen. The optical characteristics of changes in bone tissue are analyzed based on these properties. Photobiomodulation has been used to improve bone healing after surgery or fractures, so knowledge of light propagation is of considerable importance for obtaining successful clinical outcomes. This study determines light penetration and distribution in human maxillary and mandibular bones in three different regions (anterior, middle, and posterior). A HeNe laser (633 nm) irradiated maxillary and mandibular bones in the cervical-apical direction. The light propagation and scattering patterns were acquired and the grey levels of the images were analyzed. Three-dimensional plots of the intensity profile and attenuation profiles were created. Differences in optical properties were found between the mandibular and maxillary bones. The maxilla attenuated more light than the mandible at all sites, leading to a shallower penetration depth. Our results provide initial information on the propagation of red laser light in alveolar bone using an optical method.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
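The full-reference measurement in the first experiment above can be illustrated with PSNR between a reference frame and two distorted versions. PSNR is used here as an example metric, and Gaussian noise is a stand-in for actual encoder distortion; the paper's exact metrics are not specified in the abstract.

```python
import numpy as np

# Peak signal-to-noise ratio between a reference and a distorted frame.
def psnr(reference, distorted, peak=255.0):
    mse = np.mean((np.asarray(reference, float) - np.asarray(distorted, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, size=(48, 64)).astype(float)

mild = frame + rng.normal(0.0, 2.0, frame.shape)    # light distortion
harsh = frame + rng.normal(0.0, 8.0, frame.shape)   # heavy distortion

p_mild = psnr(frame, mild)
p_harsh = psnr(frame, harsh)
```

Heavier distortion lowers PSNR; in the system study, the same comparison is made with the encoder input itself changing under low-light noise and color-processing choices.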
NASA Technical Reports Server (NTRS)
1997-01-01
Clouds and hazes at various altitudes within the dynamic Jovian atmosphere are revealed by multi-color imaging taken by the Near-Infrared Mapping Spectrometer (NIMS) onboard the Galileo spacecraft. These images were taken during the second orbit (G2) on September 5, 1996 from an early-morning vantage point 2.1 million kilometers (1.3 million miles) above Jupiter. They show the planet's appearance as viewed at various near-infrared wavelengths, with distinct differences due primarily to variations in the altitudes and opacities of the cloud systems. The top left and right images, taken at 1.61 microns and 2.73 microns respectively, show relatively clear views of the deep atmosphere, with clouds down to a level about three times the atmospheric pressure at the Earth's surface.
By contrast, the middle image in the top row, taken at 2.17 microns, shows only the highest altitude clouds and hazes. This wavelength is severely affected by the absorption of light by hydrogen gas, the main constituent of Jupiter's atmosphere. Therefore, only the Great Red Spot, the highest equatorial clouds, a small feature at mid-northern latitudes, and thin, high photochemical polar hazes can be seen. In the lower left image, at 3.01 microns, deeper clouds can be seen dimly against gaseous ammonia and methane absorption. In the lower middle image, at 4.99 microns, the light observed is the planet's own indigenous heat from the deep, warm atmosphere. The false color image (lower right) succinctly shows various cloud and haze levels seen in the Jovian atmosphere. This image indicates the temperature and altitude at which the light being observed is produced. Thermally-rich red areas denote high temperatures from photons in the deep atmosphere leaking through minimal cloud cover; green denotes cool temperatures of the tropospheric clouds; blue denotes cold of the upper troposphere and lower stratosphere. The polar regions appear purplish, because small-particle hazes allow leakage and reflectivity, while yellowish regions at temperate latitudes may indicate tropospheric clouds with small particles which also allow leakage. A mix of high and low-altitude aerosols causes the aqua appearance of the Great Red Spot and equatorial region. The Jet Propulsion Laboratory manages the Galileo mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web Galileo mission home page at http://galileo.jpl.nasa.gov.
Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelepouga, Serguei A; Rue, David M; Saveliev, Alexei V
2011-03-15
A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end, and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. The light received by the light receiving end of the fiber optic bundle is focused by a wall disposed between the light receiving end and a light source; this wall forms a pinhole opening aligned with the light receiving end.
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to their respective classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
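The LBP partition-histogram features mentioned above can be illustrated with a minimal sketch: a plain 3×3 LBP operator followed by per-block histograms. The 2×2 block grid is an illustrative choice; the paper's exact operator radius and partition layout are not specified here:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 local binary pattern codes for interior pixels."""
    c = gray[1:-1, 1:-1]
    # 8 neighbours, one bit each; neighbour >= centre sets the bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def partition_histograms(codes: np.ndarray, grid=(2, 2)) -> np.ndarray:
    """Concatenate per-block normalized LBP histograms (partition-based features)."""
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)
```

The concatenated histogram vector is what would be handed to a classifier before the decision-level fusion step.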
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compositing is widely used in film production. Conventionally, compositing is performed with image processing algorithms: objects, details, backgrounds, or other elements are first extracted from the source images and then combined into a single image. This approach requires a powerful processor, since the processing is computationally complex, and the composite image is obtained only after a delay. In this paper, we introduce a new method of real-time image compositing; with this method, the composite is produced at the same time as the movie is shot. The system consists of two camera lenses, a spatial light modulator (SLM) array, and an image sensor. The SLM could be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin film transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), and so on. First, one camera lens, called the first imaging lens, images the object onto the panel of the SLM. Second, an image is output to the SLM panel, so that the image of the object and the image displayed by the SLM are spatially composited on the panel. Third, the other camera lens, called the second imaging lens, images the composited scene onto the image sensor. After these three steps, the composite image is obtained from the image sensor. Because the SLM can output images continuously, the compositing is also continuous, and the whole procedure is completed in real time. With this method, to place a real object into a virtual background, the virtual background scene is output on the SLM while the real object is imaged by the first imaging lens; the composite images are then obtained from the image sensor in real time.
In the same way, to place a virtual object into a real background, the virtual object is output on the SLM and the real background is imaged by the first imaging lens; the composite images are again obtained from the image sensor in real time. Most SLMs can only modulate light intensity, so with a single panel without a color filter only monochrome images can be composited. To obtain color composite images, a system like a three-panel SLM projector is needed. The paper presents the framework of the system's optical design. In all experiments, the SLM used was liquid crystal on silicon (LCoS). At the end of the paper, some original and composited pictures are given. Although the system has a few shortcomings, we can conclude that, because it requires no mathematical compositing process and introduces no delay, it is a truly real-time image compositing system.
Laser-based study of geometrical optics at school level
NASA Astrophysics Data System (ADS)
Garg, Amit; Dhingra, Vishal; Sharma, Reena; Mittal, Ankit; Tiwadi, Raman; Chakravarty, Pratik
2011-10-01
Students at the school level from grades 7 to 12 are taught various concepts of geometrical optics, but with few hands-on activities. Light propagation through different media, image formation using lenses and mirrors under different conditions, and the application of basic principles to the characterization of lenses, mirrors and other instruments fascinate students; however, for lack of suitable demonstration setups, students find these concepts difficult to understand and hence are unable to appreciate their importance in scientific apparatus, everyday life, instruments and devices. As a result, students tend to cram the various concepts of geometrical optics instead of understanding them. As part of the extension activity in the University Grants Commission major research project "Investigating science hands-on to promote innovation and research at undergraduate level" and the University of Delhi at Acharya Narendra Dev College SPIE student chapter, students working under this optics outreach programme have demonstrated various experiments on geometrical optics using a five-beam laser ray box and various optical components such as different types of mirrors, lenses, prisms, and optical fibers. The hands-on activities include demonstrations on the laws of reflection; image formation using plane, concave and convex mirrors; the mirror formula; total internal reflection; light propagation in an optical fiber; the laws of refraction; image formation using concave and convex lenses and combinations of these lenses; the lens formula; light propagation through prisms; dispersion in a prism; and defects of the eye (myopia and hypermetropia). Subjects have been evaluated through pre- and post-tests in order to measure the improvement in their level of understanding.
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), which is a device that converts spectra information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF having the green spectral sensitivity onto the micro-lens array of the conventional light field camera. The OPCF allows us to acquire the green spectra information only at the center viewpoint with the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectra information (red and blue) at multiple viewpoints (sub-aperture images) but with low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms other previous methods.
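The fusion of a full-resolution green plane (from the OPCF) with low-resolution red/blue sub-aperture data can be caricatured as below. This is only a nearest-neighbour upsampling stand-in, not the authors' super-resolution synthesis; the function name and integer scale factors are assumptions:

```python
import numpy as np

def fuse_dual_resolution(green_full: np.ndarray, red_low: np.ndarray,
                         blue_low: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsample the low-resolution red/blue planes and stack
    them with the full-resolution green plane into an RGB array."""
    fy = green_full.shape[0] // red_low.shape[0]
    fx = green_full.shape[1] // red_low.shape[1]
    up = lambda a: np.repeat(np.repeat(a, fy, axis=0), fx, axis=1)
    return np.dstack([up(red_low), green_full, up(blue_low)])
```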
MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.
Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T
1999-06-01
We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
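The differential-amplification scheme, two fiber pickups whose difference covaries with horizontal eye position, and its calibration can be sketched as follows. The normalized difference and the linear fit are assumptions for illustration; the actual system does this in analog electronics:

```python
def differential_position(a: float, b: float) -> float:
    """Normalized differential signal from the two reflectance pickups."""
    return (a - b) / (a + b)

def calibrate(samples, angles):
    """Least-squares line mapping differential signal -> gaze angle (degrees)."""
    n = len(samples)
    sx = sum(samples)
    sy = sum(angles)
    sxx = sum(x * x for x in samples)
    sxy = sum(x * y for x, y in zip(samples, angles))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

After calibration, `slope * differential_position(a, b) + intercept` gives an eye-position estimate in degrees.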
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffuse illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations, and very few methods try to capitalize on the way the image was captured by the camera. We propose a new method based on light and shade, since these are the fundamental inputs that carry the information in an image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation in a digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of its angle. The experimental results show the robustness of the methodology.
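Light-direction estimation of this kind is often illustrated with the classic boundary-based approach: under a simplified Lambertian model, intensities along an object's occluding contour are regressed against the 2-D surface normals. The sketch below shows that general technique under stated assumptions; it is not the authors' specific algorithm:

```python
import math

def estimate_light_angle(normals, intensities):
    """Least-squares 2-D light direction from boundary normals and intensities,
    assuming a simplified Lambertian model I = n . L."""
    # Solve [nx ny] [Lx Ly]^T = I via the 2x2 normal equations.
    a11 = sum(nx * nx for nx, _ in normals)
    a12 = sum(nx * ny for nx, ny in normals)
    a22 = sum(ny * ny for _, ny in normals)
    b1 = sum(nx * i for (nx, _), i in zip(normals, intensities))
    b2 = sum(ny * i for (_, ny), i in zip(normals, intensities))
    det = a11 * a22 - a12 * a12
    lx = (a22 * b1 - a12 * b2) / det
    ly = (a11 * b2 - a12 * b1) / det
    return math.degrees(math.atan2(ly, lx))
```

Inconsistent estimated angles across regions of the same image would then flag a possible splice.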
NASA Astrophysics Data System (ADS)
Peller, Joseph A.; Ceja, Nancy K.; Wawak, Amanda J.; Trammell, Susan R.
2018-02-01
Polarized light imaging and optical spectroscopy can be used to distinguish between healthy and diseased tissue. In this study, we discuss the design and testing of a single-pixel hyperspectral imaging system that uses differences in the polarization of light reflected from tissue to differentiate between healthy and thermally damaged tissue. Thermal lesions were created in porcine skin samples (n = 8) using an IR laser. The damaged regions were clearly visible in the polarized light hyperspectral images. Reflectance hyperspectral and white light imaging were also obtained for all tissue samples. The sizes of the thermally damaged regions as measured via polarized light hyperspectral imaging were compared to the sizes of these regions as measured in the reflectance hyperspectral images and white light images, and good agreement between all three imaging modalities was found. Hyperspectral polarized light imaging can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during cancer surgery or pre-surgical biopsy.
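Comparing lesion sizes across modalities reduces to counting pixels above a segmentation threshold and converting to physical area. A minimal sketch, in which the threshold and pixel scale are hypothetical rather than taken from the study:

```python
import numpy as np

def lesion_area_mm2(image: np.ndarray, threshold: float, mm_per_pixel: float) -> float:
    """Area of the damaged region: pixels above an intensity threshold,
    scaled by the square of the pixel pitch."""
    return float(np.count_nonzero(image > threshold)) * mm_per_pixel ** 2
```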
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
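Superimposing the diagnostic image on the visible-light image, as the patent describes, amounts to an alpha blend. A simple grayscale sketch; the blend weight is an arbitrary illustrative choice, not from the patent:

```python
import numpy as np

def superimpose(visible: np.ndarray, diagnostic: np.ndarray,
                alpha: float = 0.4) -> np.ndarray:
    """Blend a diagnostic map over a grayscale visible-light image (8-bit output)."""
    vis = visible.astype(np.float64)
    diag = diagnostic.astype(np.float64)
    out = (1 - alpha) * vis + alpha * diag
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice the diagnostic channel would usually be tinted and overlaid only where its signal exceeds a significance threshold.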
Martín, Helena; Sánchez del Río, Margarita; de Silanes, Carlos López; Álvarez-Linera, Juan; Hernández, Juan Antonio; Pareja, Juan A
2011-01-01
The brain of migraineurs is hyperexcitable, particularly the occipital cortex, which is probably hypersensitive to light. Photophobia or hypersensitivity to light may be accounted for by an increased excitability of the trigeminal and visual pathways and the occipital cortex. The aim was to study light sensitivity and photophobia by assessing the response to light stimuli with functional magnetic resonance imaging-blood oxygenation level dependent (fMRI-BOLD) of the occipital cortex in migraineurs and in controls, and to try to decipher the contribution of the occipital cortex to photophobia and whether the cortical reactivity of migraineurs may be part of a constitutional (defensive) mechanism or represents an acquired (sensitization) phenomenon. Nineteen patients with migraine (7 with aura and 12 without aura) and 19 controls were studied with fMRI-BOLD during stimulation at 4 increasing light intensities. Eight axial image sections of 0.5 cm that covered the occipital cortex were acquired for each intensity. We measured the extent and the intensity of activation for each light stimulus. Photophobia was estimated according to a 0 to 3 semiquantitative scale of light discomfort. Migraineurs had a significantly higher number of fMRI-activated voxels at low (320.4 for migraineurs [SD = 253.9] and 164.3 for controls [SD = 102.7], P = .027) and medium-low luminance levels (501.2 for migraineurs [SD = 279.5] and 331.1 for controls [SD = 194.3], P = .034) but not at medium-high (579.5 for migraineurs [SD = 201.4] and 510.2 for controls [SD = 239.5], P = .410) and high light stimuli (496.2 for migraineurs [SD = 216.2] and 394.7 for controls [SD = 240], P = .210). No differences were found with respect to the voxel activation intensity (amplitude of the BOLD wave) between migraineurs and controls (8.98 [SD = 2.58] vs 7.99 [SD = 2.57], P = .25; 10.82 [SD = 3.27] vs 9.81 [SD = 3.19], P = .31; 11.90 [SD = 3.18] vs 11.06 [SD = 2.56], P = .62; 11.45 [SD = 2.65] vs 10.25 [SD = 2.22], P = .16). 
Light discomfort was higher in the group of migraineurs at all the intensities tested, but there was no correlation between the number of activated voxels in the occipital cortex and photophobia. Repetitive light stimuli failed to demonstrate a lack of habituation in migraineurs. Migraineurs during interictal periods showed hyperexcitability of the visual cortex, with a wider photoresponsive area; the underlying mechanism is probably dual: constitutional-defensive and acquired-sensitizing. © 2011 American Headache Society.
Imaging polarimetry and retinal blood vessel quantification at the epiretinal membrane
Miura, Masahiro; Elsner, Ann E.; Cheney, Michael C.; Usui, Masahiko; Iwasaki, Takuya
2007-01-01
We evaluated a polarimetry method to enhance retinal blood vessels masked by the epiretinal membrane. Depolarized light images were computed by removing the polarization-retaining light reaching the instrument and were compared with parallel polarized light images, average reflectance images, and the corresponding images at 514 nm. Contrasts were computed for retinal vessel profiles for arteries and veins. Contrasts were higher in the 514 nm images in normal eyes but higher in the depolarized light image in the eyes with epiretinal membranes. Depolarized light images were useful for examining the retinal vasculature in the presence of retinal disease. PMID:17429490
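The depolarized-light computation and the vessel-contrast measure can be sketched as follows. Subtracting the polarization-retaining component from the total, and using Michelson contrast for the vessel profiles, are assumptions consistent with the description rather than the authors' exact pipeline:

```python
import numpy as np

def depolarized(total: np.ndarray, parallel: np.ndarray) -> np.ndarray:
    """Depolarized-light image: total reflectance minus polarization-retaining light."""
    return np.clip(total.astype(np.float64) - parallel.astype(np.float64), 0, None)

def michelson_contrast(vessel: float, background: float) -> float:
    """Contrast of a (dark) vessel profile against its local background."""
    return (background - vessel) / (background + vessel)
```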
Los Alamos Fires From Landsat 7
NASA Technical Reports Server (NTRS)
2002-01-01
On May 9, 2000, the Landsat 7 satellite acquired an image of the area around Los Alamos, New Mexico. The Landsat 7 satellite acquired this image from 427 miles in space through its sensor called the Enhanced Thematic Mapper Plus (ETM+). Evident within the imagery is a view of the ongoing Cerro Grande fire near the town of Los Alamos and the Los Alamos National Laboratory. Combining the high-resolution (30 meters per pixel in this scene) imaging capacity of ETM+ with its multi-spectral capabilities allows scientists to penetrate the smoke plume and see the structure of the fire on the surface. Notice the high level of detail in the infrared image (bottom), in which burn scars are clearly distinguished from the hotter smoldering and flaming parts of the fire. Within this image pair several features are clearly visible, including the Cerro Grande fire and smoke plume, the town of Los Alamos, the Los Alamos National Laboratory and associated property, and Cerro Grande peak. Combining ETM+ channels 7, 4, and 2 (one visible and two infrared channels) results in a false color image where vegetation appears as bright to dark green (bottom image). Forested areas are generally dark green while herbaceous vegetation is light green. Rangeland or more open areas appear pink to light purple. Areas with extensive pavement or urban development appear light blue or white to purple. Less densely-developed residential areas appear light green and golf courses are very bright green. The areas recently burned appear black. Dark red to bright red patches, or linear features within the burned area, are the hottest and possibly actively burning areas of the fire. The fire is spreading downslope and the front of the fire is readily detectable about 2 kilometers to the west and south of Los Alamos. Combining ETM+ channels 3, 2, and 1 provides a true-color image of the greater Los Alamos region (top image). Vegetation is generally dark to medium green. 
Forested areas are very dark green while herbaceous vegetation is medium green. Rangeland or more open areas appear as tan or light brown. Areas with extensive pavement or urban development appear white to light green. Less densely-developed residential areas appear medium green and golf courses are medium green. The fires and areas recently burned are obscured by smoke plumes which are white to light blue. Landsat 7 data are archived and available from EDC. Image by Rob Simmon, Earth Observatory, NASA Goddard Space Flight Center. Data courtesy Randy McKinley, EROS Data Center (EDC)
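The band combinations described (ETM+ channels 7, 4, 2 for false color; 3, 2, 1 for true color) are simple channel stacks. A hedged sketch with an illustrative per-band min-max stretch, not the processing actually used for the released imagery:

```python
import numpy as np

def false_color(b7: np.ndarray, b4: np.ndarray, b2: np.ndarray) -> np.ndarray:
    """Stack ETM+ bands 7, 4, 2 into an RGB false-color composite,
    each band linearly stretched to 0-255."""
    def stretch(band):
        band = band.astype(np.float64)
        lo, hi = band.min(), band.max()
        return np.zeros_like(band) if hi == lo else (band - lo) / (hi - lo) * 255.0
    return np.dstack([stretch(b) for b in (b7, b4, b2)]).astype(np.uint8)
```

A true-color composite would use the same stacking with bands 3, 2, and 1.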
Imaging arrangement and microscope
Pertsinidis, Alexandros; Chu, Steven
2015-12-15
An embodiment of the present invention is an imaging arrangement that includes imaging optics, a fiducial light source, and a control system. In operation, the imaging optics separate light into first and second light by wavelength and project the first and second light onto first and second areas within first and second detector regions, respectively. The imaging optics separate fiducial light from the fiducial light source into first and second fiducial light and project the first and second fiducial light onto third and fourth areas within the first and second detector regions, respectively. The control system adjusts alignment of the imaging optics so that the first and second fiducial light projected onto the first and second detector regions maintain relatively constant positions within the first and second detector regions, respectively. Another embodiment of the present invention is a microscope that includes the imaging arrangement.
Compact reflective imaging spectrometer utilizing immersed gratings
Chrisp, Michael P [Danville, CA
2006-05-09
A compact imaging spectrometer comprising an entrance slit for directing light, a first mirror that receives said light and reflects said light, an immersive diffraction grating that diffracts said light, a second mirror that focuses said light, and a detector array that receives said focused light. The compact imaging spectrometer can be utilized for remote sensing imaging spectrometers where size and weight are of primary importance.
Scanning computed confocal imager
George, John S.
2000-03-14
There is provided a confocal imager comprising a light source emitting a light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of it to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation, the invention omits the beam splitter and captures light passed through the target.
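The "virtual aperture" created in software can be sketched as summing the captured frame within a small radius of the current scan spot, discarding out-of-focus light outside the synthetic pinhole. The circular mask below is an illustrative choice, not the patent's specific formulation:

```python
import numpy as np

def virtual_aperture(frame: np.ndarray, center, radius: int) -> float:
    """Software confocal pinhole: sum of pixels within `radius` of the scan spot."""
    yy, xx = np.ogrid[:frame.shape[0], :frame.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return float(frame[mask].sum())
```

Repeating this for each scan position yields one confocal pixel per spot, assembled into the final image.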
Effect of experimental glaucoma on the non-image forming visual system.
de Zavalía, Nuria; Plano, Santiago A; Fernandez, Diego C; Lanzani, María Florencia; Salido, Ezequiel; Belforte, Nicolás; Sarmiento, María I Keller; Golombek, Diego A; Rosenstein, Ruth E
2011-06-01
Glaucoma is a leading cause of blindness worldwide, characterized by retinal ganglion cell degeneration and damage to the optic nerve. We investigated the non-image forming visual system in an experimental model of glaucoma in rats induced by weekly injections of chondroitin sulphate (CS) in the eye anterior chamber. Animals were unilaterally or bilaterally injected with CS or vehicle for 6 or 10 weeks. In the retinas from eyes injected with CS, a similar decrease in melanopsin and Thy-1 levels was observed. CS injections induced a similar decrease in the number of melanopsin-containing cells and superior collicular retinal ganglion cells. Experimental glaucoma induced a significant decrease in the afferent pupil light reflex. White light significantly decreased nocturnal pineal melatonin content in control and glaucomatous animals, whereas blue light decreased this parameter in vehicle- but not in CS-injected animals. A significant decrease in light-induced c-Fos expression in the suprachiasmatic nuclei was observed in glaucomatous animals. General rhythmicity and gross entrainment appear to be conserved, but glaucomatous animals exhibited a delayed phase angle with respect to lights off and a significant increase in the percentage of diurnal activity. These results indicate the glaucoma induced significant alterations in the non-image forming visual system. © 2011 The Authors. Journal of Neurochemistry © 2011 International Society for Neurochemistry.
Project SunSHINE: A Student Based Solar Research Program
NASA Astrophysics Data System (ADS)
Donahue, R.
2000-12-01
Eastchester Middle School (NY) is currently conducting an ongoing, interdisciplinary solar research program entitled Project SunSHINE, for Students Help Investigate Nature in Eastchester. Students are to determine how ultraviolet and visible light levels vary throughout the year at the school's geographic location, and to ascertain if any measured variations correlate to daily weather conditions or sunspot activity. The educational goal is to provide students the opportunity to conduct original and meaningful scientific research, while learning to work collaboratively with peers and teachers in accordance with national mathematics, science and technology standards. Project SunSHINE requires the student researchers to employ a number of technologies to collect and analyze data, including light sensors, astronomical imaging software, an onsite AirWatch Weather Station, Internet access to retrieve daily solar images from the National Solar Observatory's Kitt Peak Vacuum Telescope, and two wide field telescopes for live sunspot observations. The program has been integrated into the science, mathematics, health and computer technology classes. Solar and weather datasets are emailed weekly to physicist Dr. Gil Yanow of the Jet Propulsion Laboratory for inclusion in his global study of light levels. Dr. Yanow credited the Project SunSHINE student researchers last year for the discovery of an inverse relationship between relative humidity and ultraviolet light levels. The Journal News Golden Apple Awards named Project SunSHINE the 1999 New York Wired Applied Technology Award winner. This honor recognizes the year's outstanding educational technology program at both the elementary and secondary level, and included a grant of $20,000 to the research program. 
Teacher training and image processing software for Project SunSHINE has been supplied by The Use of Astronomy in Research Based Science Education (RBSE), a Teacher Enhancement Program funded by the National Science Foundation and conducted at the facilities of the National Optical Astronomy Observatory in Tucson, Arizona.
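The reported inverse relationship between relative humidity and ultraviolet light levels is the kind of result students could check with a Pearson correlation on the weekly datasets. A small pure-Python sketch; the sample pairing of humidity and UV readings is hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, e.g. relative humidity vs. UV readings."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near -1 would indicate the inverse relationship the students observed.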
Ultra-high resolution of radiocesium distribution detection based on Cherenkov light imaging
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Ogata, Yoshimune; Kawachi, Naoki; Suzui, Nobuo; Yin, Yong-Gen; Fujimaki, Shu
2015-03-01
After the nuclear disaster in Fukushima, radiocesium contamination became a serious scientific concern and research of its effects on plants increased. In such plant studies, high resolution images of radiocesium are required without contacting the subjects. Cherenkov light imaging of beta radionuclides has inherently high resolution and is promising for plant research. Since 137Cs and 134Cs emit beta particles, Cherenkov light imaging will be useful for the imaging of radiocesium distribution. Consequently, we developed and tested a Cherenkov light imaging system. We used a high sensitivity cooled charge coupled device (CCD) camera (Hamamatsu Photonics, ORCA2-ER) for imaging Cherenkov light from 137Cs. A bright lens (Xenon, F-number: 0.95, lens diameter: 25 mm) was mounted on the camera and placed in a black box. With a 100-μm 137Cs point source, we obtained 220-μm spatial resolution in the Cherenkov light image. With a 1-mm diameter, 320-kBq 137Cs point source, the source was distinguished within 2 s. We successfully obtained Cherenkov light images of a plant whose root was dipped in a 137Cs solution, radiocesium-containing samples as well as line and character phantom images with our imaging system. Cherenkov light imaging is promising for the high resolution imaging of radiocesium distribution without contacting the subject.
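Spatial-resolution figures such as the 220-μm value are typically taken as the full width at half maximum (FWHM) of a point-source profile. A sketch of that measurement with linear interpolation at the half-maximum crossings; a single-peak profile is assumed:

```python
def fwhm(profile, step):
    """FWHM of a single-peak intensity profile; `step` is the sample spacing
    (e.g. micrometres per pixel)."""
    peak = max(profile)
    half = peak / 2.0
    idx = [i for i, v in enumerate(profile) if v >= half]
    left, right = idx[0], idx[-1]

    def cross(i_out, i_in):
        # half-maximum crossing between a sample below half and one at/above it
        frac = (half - profile[i_out]) / (profile[i_in] - profile[i_out])
        return i_out + frac * (i_in - i_out)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return (hi - lo) * step
```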
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing the surrounding light world. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototyped camera has been constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the imaging sensors. The imaging micro-system, in conjunction with the electro-optical microstructure, can be used to perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using a liquid-crystal microlens array, polarization-independent light-field images with high image quality can be obtained in any selected polarization state. We experimentally demonstrate characteristics including a relatively wide operating range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the notable features of the TN-LCMLA, such as very low power consumption, the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure presents the potential capability of directly observing a 3D object in typical scattering media.
Weekenstroo, Harm H A; Cornelissen, Bart M W; Bernelot Moens, Hein J
2015-06-01
Nailfold capillaroscopy is a non-invasive and safe technique for the analysis of microangiopathologies. The imaging quality of widely used simple videomicroscopes is poor. The use of green illumination instead of the commonly used white light may improve contrast. The aim of the study was to compare the effect of green illumination with white illumination with regard to capillary density, the number of microangiopathologies, and sensitivity and specificity for systemic sclerosis. Five rheumatologists evaluated 80 images: 40 acquired with green light and 40 acquired with white light. A larger number of microangiopathologies was found in images acquired with green light than in images acquired with white light. This resulted in slightly higher sensitivity with green light in comparison with white light, without reducing the specificity. These findings suggest that green instead of white illumination may facilitate evaluation of capillaroscopic images obtained with a low-cost digital videomicroscope.
Phoenix's Laser Beam in Action on Mars
NASA Technical Reports Server (NTRS)
2008-01-01
The Surface Stereo Imager camera aboard NASA's Phoenix Mars Lander acquired a series of images of the laser beam in the Martian night sky. Bright spots in the beam are reflections from ice crystals in the low-level ice fog. The brighter area at the top of the beam is due to enhanced scattering of the laser light in a cloud. The Canadian-built lidar instrument emits pulses of laser light and records what is scattered back. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Ghost imaging via optical parametric amplification
NASA Astrophysics Data System (ADS)
Li, Hong-Guo; Zhang, De-Jian; Xu, De-Qin; Zhao, Qiu-Li; Wang, Sen; Wang, Hai-Bo; Xiong, Jun; Wang, Kaige
2015-10-01
We investigate, theoretically and experimentally, thermal-light ghost imaging in which the light transmitted through the object serves as seed light and is amplified by an optical parametric amplifier (OPA). In conventional lens-imaging systems with an OPA, the spectral bandwidth of the OPA dominates the image resolution. Theoretically, we prove that in ghost imaging via optical parametric amplification (GIOPA) the bandwidth of the OPA does not affect the image resolution. The experimental results show that for weak seed light the image quality in GIOPA is better than that of conventional ghost imaging. Our work may be valuable for remote sensing with the ghost-imaging technique, where the light that has passed through the object is weak after long-distance propagation.
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed by expert knowledge, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training a feature extractor that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate real from presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
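The fusion step in this kind of pipeline is essentially feature concatenation. The sketch below, written under stated assumptions, uses a toy single-scale LBP histogram in place of the paper's multi-level MLBP and a mocked deep-feature vector in place of real CNN output; the real method would feed the concatenated vector to a trained SVM.

```python
# Hypothetical illustration of "hybrid features": a handcrafted LBP
# histogram concatenated with a (mocked) deep feature vector.

# 8 neighbor offsets around a pixel, clockwise from top-left.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP code: one bit per neighbor >= the center pixel."""
    center = img[r][c]
    code = 0
    for i, (dr, dc) in enumerate(NEIGHBORS):
        if img[r + dr][c + dc] >= center:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

def hybrid_features(deep_vec, img):
    """Concatenate a deep feature vector with the handcrafted histogram."""
    return list(deep_vec) + lbp_histogram(img)
```

In the actual method the histogram would be computed at several radii (the "multi-level" in MLBP) and the deep vector taken from a CNN layer; both are simplified here.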
Perceptual transparency from image deformation.
Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya
2015-08-18
Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
Fluorescence image-guided photodynamic therapy of cancer cells using a scanning fiber endoscope
NASA Astrophysics Data System (ADS)
Woldetensae, Mikias H.; Kirshenbaum, Mark R.; Kramer, Greg M.; Zhang, Liang; Seibel, Eric J.
2013-03-01
A scanning fiber endoscope (SFE) and the cancer biomarker 5-aminolevulinic acid (5-ALA) were used to fluorescently detect and destroy superficial cancerous lesions, while experimenting with different dosimetry levels for concurrent or sequential imaging and laser therapy. The 1.6-mm-diameter SFE was used to fluorescently image a confluent monolayer of cultured A549 human lung cancer cells, previously administered a 5-mM solution of 5-ALA for 4 hours. Twenty hours after therapy, cell cultures were stained to distinguish between living and dead cells using a laser scanning confocal microscope. To determine relative dosimetry for photodynamic therapy (PDT), 405-nm laser illumination was varied from 1 to 5 minutes with power varying from 5 to 18 mW, chosen to compare equal amounts of energy delivered to the cell culture. The SFE produced 500-line images of fluorescence at 15 Hz using the red detection channel centered at 635 nm. The results show that PDT of A549 cancer cell monolayers, using 405-nm light for imaging and 5-ALA-induced PpIX therapy, was possible using the same SFE system. Increased duration and power of laser illumination produced an increased area of cell death upon live/dead staining. The ultrathin and flexible SFE was able to direct PDT using wide-field fluorescence imaging of a monolayer of cultured cancer cells after uptake of 5-ALA. The correlation between light intensity and duration of PDT was measured: longer exposure at lower light intensity yielded larger areas of cell death than shorter exposure at higher light intensity.
Neural correlates of the popular music phenomenon: evidence from functional MRI and PET imaging.
Chen, Qiaozhen; Zhang, Ying; Hou, Haifeng; Du, Fenglei; Wu, Shuang; Chen, Lin; Shen, Yehua; Chao, Fangfang; Chung, June-Key; Zhang, Hong; Tian, Mei
2017-06-01
Music can induce different emotions; however, its neural mechanism remains unknown. The aim of this study was to use functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) imaging to map neural changes induced by highly popular music in healthy volunteers. Blood-oxygen-level-dependent (BOLD) fMRI and monoamine receptor PET imaging with 11C-N-methylspiperone (11C-NMSP) were conducted in healthy subjects under the popular music Gangnam Style and the light music A Comme Amour. PET and fMRI images were analyzed using the Statistical Parametric Mapping (SPM) software. Significantly increased fMRI BOLD signals were found in the bilateral superior temporal cortices, left cerebellum, left putamen, and right thalamus. Monoamine receptor availability was increased significantly in the left superior temporal gyrus and left putamen, but decreased in the bilateral superior occipital cortices, under Gangnam Style compared with the light-music condition. A significant positive correlation was found between 11C-NMSP binding and fMRI BOLD signals in the left temporal cortex. Furthermore, increased 11C-NMSP binding in the left putamen was positively correlated with the mood arousal level score under the Gangnam Style condition. The popular music Gangnam Style can arouse pleasurable experience and a strong emotional response. Our results revealed characteristic patterns of brain activity associated with Gangnam Style, and may also provide more general insights into music-induced emotional processing.
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images; however, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components with the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that fusing information from different spectral components enhances image quality, therefore improving classification accuracy in citrus fruit identification under natural lighting conditions.
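The multiscale fusion idea (decompose both images, combine approximation and detail coefficients, reconstruct) can be sketched at its simplest. This is a hedged illustration, not the paper's method: a one-level Haar transform stands in for the Daubechies DWT, and 1-D scan lines stand in for full 2-D color/NIR bands.

```python
import math

def haar_1d(signal):
    """One-level Haar decomposition: (approximation, detail) coefficients."""
    n = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2) for i in range(n)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2) for i in range(n)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of haar_1d (perfect reconstruction)."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def fuse(color_band, nir_band):
    """Fuse two registered scan lines: average the approximations
    (overall brightness) and keep the stronger detail coefficient
    (edge/texture information) from either band."""
    ca, cd = haar_1d(color_band)
    na, nd = haar_1d(nir_band)
    fa = [(a + b) / 2 for a, b in zip(ca, na)]
    fd = [c if abs(c) >= abs(n) else n for c, n in zip(cd, nd)]
    return ihaar_1d(fa, fd)
```

The "max-abs detail" rule used here is a common generic choice; the paper instead fuses color contrast with NIR homogeneity, which this sketch does not attempt to reproduce.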
Multi-channel medical imaging system
Frangioni, John V
2013-12-31
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
Multi-channel medical imaging system
Frangioni, John V.
2016-05-03
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
In-orbit performance of the soft X-ray imaging system aboard Hitomi (ASTRO-H)
NASA Astrophysics Data System (ADS)
Nakajima, Hiroshi; Maeda, Yoshitomo; Uchida, Hiroyuki; Tanaka, Takaaki; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.; Dotani, Tadayasu; Nagino, Ryo; Inoue, Shota; Ozaki, Masanobu; Tomida, Hiroshi; Natsukari, Chikara; Ueda, Shutaro; Mori, Koji; Yamauchi, Makoto; Hatsukade, Isamu; Nishioka, Yusuke; Sakata, Miho; Beppu, Tatsuhiko; Honda, Daigo; Nobukawa, Masayoshi; Hiraga, Junko S.; Kohmura, Takayoshi; Murakami, Hiroshi; Nobukawa, Kumiko K.; Bamba, Aya; Doty, John P.; Iizuka, Ryo; Sato, Toshiki; Kurashima, Sho; Nakaniwa, Nozomi; Asai, Ryota; Ishida, Manadu; Mori, Hideyuki; Soong, Yang; Okajima, Takashi; Serlemitsos, Peter; Tawara, Yuzuru; Mitsuishi, Ikuyuki; Ishibashi, Kazunori; Tamura, Keisuke; Hayashi, Takayuki; Furuzawa, Akihiro; Sugita, Satoshi; Miyazawa, Takuya; Awaki, Hisamitsu; Miller, Eric D.; Yamaguchi, Hiroya
2018-03-01
We describe the in-orbit performance of the soft X-ray imaging system consisting of the Soft X-ray Telescope and the Soft X-ray Imager aboard Hitomi. Verification and calibration of imaging and spectroscopic performance are carried out, making the best use of the limited data of less than three weeks. Basic performance, including a large field of view of 38′ × 38′, is verified with the first-light image of the Perseus cluster of galaxies. Amongst the small number of observed targets, the on-minus-off pulse image for the out-of-time events of the Crab pulsar enables us to measure the half-power diameter of the telescope as approximately 1.3′. The average energy resolution measured with the onboard calibration source events at 5.89 keV is 179 ± 3 eV in full width at half maximum. Light-leak and crosstalk issues affected the effective exposure time and the effective area, respectively, because all the observations were performed before optimizing the observation schedule and the parameters for the dark-level calculation. Screening the data affected by these two issues, we measure the background level to be 5.6 × 10^-6 counts s^-1 arcmin^-2 cm^-2 in the energy band of 5-12 keV, which is seven times lower than that of the Suzaku XIS-BI.
NASA Astrophysics Data System (ADS)
Boccara, A. Claude; Mordon, Serge
2015-10-01
In re-listening to the lectures of Charles Townes shortly after the invention of the laser (e.g., at the Boston Science Museum), one can already find a realistic vision of the potential of this new tool in medical therapy, as evidenced by the use of the laser in ophthalmology to cure retinal detachment in the 1960s. Since then, applications in therapy have flourished, and we will therefore illustrate only some of the main fields of application of medical lasers. By contrast, the use of lasers in medical imaging is, with one exception in ophthalmology, still at the development stage. It is becoming a diagnostic tool that complements high-performance imaging modalities that are often very expensive (such as CT, magnetic resonance imaging (MRI), and nuclear imaging). Even if progress is sometimes slow, one can now image inside the human body with light, despite the strong scattering of light by tissue, much as a pathologist examines surgical specimens.
Nano-imaging enabled via self-assembly
McLeod, Euan; Ozcan, Aydogan
2014-01-01
Imaging object details with length scales below approximately 200 nm has been historically difficult for conventional microscope objective lenses because of their inability to resolve features smaller than one-half the optical wavelength. Here we review some of the recent approaches to surpass this limit by harnessing self-assembly as a fabrication mechanism. Self-assembly can be used to form individual nano- and micro-lenses, as well as to form extended arrays of such lenses. These lenses have been shown to enable imaging with resolutions as small as 50 nm half-pitch using visible light, which is well below the Abbe diffraction limit. Furthermore, self-assembled nano-lenses can be used to boost contrast and signal levels from small nano-particles, enabling them to be detected relative to background noise. Finally, alternative nano-imaging applications of self-assembly are discussed, including three-dimensional imaging, enhanced coupling from light-emitting diodes, and the fabrication of contrast agents such as quantum dots and nanoparticles. PMID:25506387
Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F
2001-06-01
On the market, several systems exist for collecting spent-ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to those on another cartridge case. The research in this paper is focused on the different methods of feature selection and pattern recognition that can be used to optimize the results of image matching. The images are acquired with side light for the breech-face marks and with ring light for the firing-pin impression. A standard way of digitizing these images is used: the user has to position the cartridge case in the same position according to a protocol. The positioning is important for the side-light images, since the image obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, it appears that the user positions the cartridge case with an accuracy of about ±10 degrees. We tested our algorithms using 49 cartridge cases from 19 different firearms, for which the examiner had determined that they were shot with the same firearm. For testing, these images were mixed with a database of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we implemented a "brute force" registration: the images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position. This is caused by shadows and highlights being compared in intensity.
Since the angle of incidence of the light gives a different intensity profile, this method is not optimal. For this reason, preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force: matching the contents of the images is less sensitive to variation in the lighting. The problem with the brute-force method, however, is computation time: comparing the 49 cartridge cases among themselves took over a month on a 333-MHz Pentium II computer. For this reason a faster approach was implemented: correlation in log-polar coordinates. This gave results similar to the brute-force calculation, but was computed in 24 h for the complete database of 4900 images. A fast pre-selection method based on signatures was also carried out, based on the Kanade-Lucas-Tomasi (KLT) equation, in which the positions of the computed points are compared. In this way, 11 of the 49 images were in the top position in combination with the third scale of the à trous transform. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized to compare in seconds if the locations of the points are stored in files. For further improvement, it is useful to have a refinement in which the user selects the areas of the cartridge case that are relevant for their marks. This is necessary if the cartridge case is damaged and bears other marks that are not from the firearm.
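The "brute force" matching described above (search over transforms, keeping the one that minimizes the standard deviation of the gray-level difference) can be sketched as follows. This is an illustrative simplification: the search is translation-only (rotation omitted for brevity), images are plain 2-D lists, and all names are hypothetical.

```python
import math

def std_of_difference(a, b, dx, dy):
    """Standard deviation of the gray-level difference over the overlap
    when image b is shifted by (dx, dy) relative to image a."""
    diffs = []
    for r in range(len(a)):
        for c in range(len(a[0])):
            rr, cc = r + dy, c + dx
            if 0 <= rr < len(b) and 0 <= cc < len(b[0]):
                diffs.append(a[r][c] - b[rr][cc])
    if not diffs:
        return float("inf")  # no overlap at this shift
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

def brute_force_register(a, b, max_shift=2):
    """Try every translation in [-max_shift, max_shift]^2 and keep the
    one minimizing the std of the difference image."""
    best = None
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            score = std_of_difference(a, b, dx, dy)
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best  # (score, dx, dy)
```

Note that using the standard deviation rather than the mean of the difference makes the score invariant to a constant brightness offset between the two images; the log-polar correlation mentioned in the text replaces this exhaustive search with a single correlation in coordinates where rotation becomes a shift.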
Compact Refractive Imaging Spectrometer Designs Utilizing Immersed Gratings
Lerner, Scott A.; Bennett, Charles L.; Bixler, Jay V.; Kuzmenko, Paul J.; Lewis, Isabella T.
2005-07-26
A compact imaging spectrometer comprising an entrance slit for directing light, a first means for receiving the light and focusing the light, an immersed diffraction grating that receives the light from the first means and diffracts the light, a second means for receiving the light from the immersed diffraction grating and focusing the light, and an image plane that receives the light from the second means.
Restoration of uneven illumination in light sheet microscopy images.
Uddin, Mohammad Shorif; Lee, Hwee Kuan; Preibisch, Stephan; Tomancak, Pavel
2011-08-01
Light microscopy images suffer from poor contrast due to light absorption and scattering by the media. The resulting decay in contrast varies exponentially across the image along the incident light path. Classical space-invariant deconvolution approaches, while very effective at deblurring, are not designed for the restoration of uneven illumination in microscopy images. In this article, we present a modified radiative transfer theory approach to solve the contrast degradation problem of light sheet microscopy (LSM) images. We confirmed the effectiveness of our approach through simulation as well as on real LSM images.
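The exponential contrast decay along the incident light path suggests the simplest possible baseline correction: fit a Beer-Lambert-style decay to the mean intensity profile along the light axis and divide it out. The sketch below is that naive baseline, under the stated single-exponential assumption, not the modified radiative transfer method of the paper.

```python
import math

def estimate_decay(profile):
    """Least-squares slope of log(intensity) vs. depth, assuming
    I(x) ~ I0 * exp(-mu * x); returns the attenuation coefficient mu."""
    n = len(profile)
    xs = list(range(n))
    ys = [math.log(p) for p in profile]
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

def correct_illumination(image):
    """Flatten exponential attenuation along the incident-light axis
    (rows here = increasing depth) by dividing out the fitted decay."""
    profile = [sum(row) / len(row) for row in image]  # mean per depth
    mu = estimate_decay(profile)
    corrected = [[v * math.exp(mu * r) for v in row]
                 for r, row in enumerate(image)]
    return corrected, mu
```

A single global exponential is of course a crude model; scattering in real tissue is spatially varying, which is precisely why the paper resorts to a radiative-transfer formulation.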
Exposure of tropical ecosystems to artificial light at night: Brazil as a case study
Bennie, Jon; Mantovani, Waldir; Gaston, Kevin J.
2017-01-01
Artificial nighttime lighting from streetlights and other sources has a broad range of biological effects. Understanding the spatial and temporal levels and patterns of this lighting is a key step in determining the severity of adverse effects on different ecosystems, vegetation, and habitat types. Few such analyses have been conducted, particularly for regions with high biodiversity, including the tropics. We used an intercalibrated version of the Defense Meteorological Satellite Program’s Operational Linescan System (DMSP/OLS) images of stable nighttime lights to determine what proportion of original and current Brazilian vegetation types are experiencing measurable levels of artificial light and how this has changed in recent years. The percentage area affected by both detectable light and increases in brightness ranged between 0 and 35% for native vegetation types, and between 0 and 25% for current vegetation (i.e. including agriculture). The most heavily affected areas encompassed terrestrial coastal vegetation types (restingas and mangroves), Semideciduous Seasonal Forest, and Mixed Ombrophilous Forest. The existing small remnants of Lowland Deciduous and Semideciduous Seasonal Forests and of Campinarana had the lowest exposure levels to artificial light. Light pollution has not often been investigated in developing countries but our data show that it is an environmental concern. PMID:28178352
NASA Astrophysics Data System (ADS)
Hosono, Satsuki; Sato, Shun; Ishida, Akane; Suzuki, Yo; Inohara, Daichi; Nogo, Kosuke; Abeygunawardhana, Pradeep K.; Suzuki, Satoru; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro
2015-07-01
For blood glucose level measurement in dialysis machines, we proposed AAA-battery-size ATR (attenuated total reflection) Fourier spectroscopy in the middle-infrared region. The proposed one-shot Fourier spectroscopic imaging is a near-common-path, spatial phase-shift interferometer with high time resolution. Because a large number of spectra, 60 (the camera frame rate, e.g. 60 [Hz]) multiplied by the pixel count, can be obtained in 1 [sec.], statistical averaging yields highly accurate spectral measurement. We evaluated the quantitative accuracy of our proposed method for measuring glucose concentration in the near-infrared region with liquid cells, and confirmed that absorbance at 1600 [nm] had a high correlation with glucose concentration (correlation coefficient: 0.92). To measure whole blood, however, complex optical phenomena caused by red blood cells, such as scattering and multiple reflection, deteriorate the spectral data. We therefore also proposed ultrasound-assisted spectroscopic imaging, which traps particles at the nodes of a standing wave. If the ATR prism is oscillated mechanically, an anti-node region is generated in the evanescent-light field at the prism surface. By eliminating the complex optical phenomena of red blood cells, glucose concentration in whole blood can be quantified with high accuracy. In this report, we successfully trapped red blood cells in normal saline solution with an ultrasonic standing wave (frequency: 2 [MHz]).
Treating presbyopia without spectacles
NASA Astrophysics Data System (ADS)
Xu, Renfeng
Both multifocal optics and small pupils can increase the depth of focus (DoF) of presbyopes. This thesis evaluates some of the unique challenges faced by each of these two strategies. First, there is no single spherical refracting lens that can focus all parts of the pupil of an aberrated eye. What is the objective and subjective spherical refractive error (Rx) for such an eye, and does it vary with the amount of primary SA? Using both computational modeling and psychophysical methods, we found that high levels of positive Seidel SA caused both objective and subjective refractions to become myopic. Significantly, this refractive shift varied with stimulus spatial frequency and subjective criterion. Second, although secondary SA can dramatically expand DoF, we show that this is mostly due to the lower-order components within this polynomial, which can also change spherical Rx. Also, the r^6 term that defines secondary SA actually narrows rather than expands DoF in the presence of the r^4 term within Z_6^0. Finally, as retinal illuminance drops, neural thresholds are elevated due to increased photon noise. We asked whether the gains in near and distance vision of presbyopes anticipated at high light levels would be cancelled or even reversed at low light levels because of the additional reduction in retinal illuminance contributed by small pupils. We found that when light levels are > 2 cd/m2, a small pupil with a diameter of 2-3 mm improves near image quality, near visual acuity, and near reading speed without significant loss of distance image quality and distance vision. This result gains added significance because we also showed that low-light-level text in the urban environment always has luminance levels > 2 cd/m2. In conclusion, both small pupils and multifocal optics face significant challenges as near-vision aids for presbyopes.
However, some of the confounding effects of elevated SA levels are avoided by using small pupils to expand DoF, which can provide improved near and distance vision at most light levels encountered while reading.
2004-03-19
Bands and spots in Saturn's atmosphere, including a dark band south of the equator with a scalloped border, are visible in this image from the Cassini-Huygens spacecraft. The narrow angle camera took the image in blue light on Feb. 29, 2004. The distance to Saturn was 59.9 million kilometers (37.2 million miles). The image scale is 359 kilometers (223 miles) per pixel. Three of Saturn's moons are seen in the image: Enceladus (499 kilometers, or 310 miles across) at left; Mimas (398 kilometers, or 247 miles across) left of Saturn's south pole; and Rhea (1,528 kilometers, or 949 miles across) at lower right. The imaging team enhanced the brightness of the moons to aid visibility. The BL1 broadband spectral filter (centered at 451 nanometers) allows Cassini to "see" light in a part of the spectrum visible as the color blue to human eyes. Scientists can combine images made with this filter with those taken with red and green filters to create full-color composites. Scientists can also assess cloud heights by combining images from the blue filter with images taken in other spectral regions. For example, the bright clouds that form the equatorial zone are the highest in altitude and have pressures at their tops of about one quarter of Earth's atmospheric pressure at sea level. The cloud tops at middle latitudes are lower in altitude and have higher pressures of about half that found at sea level. Analysis of Saturn images like this one will be extremely useful to researchers assessing cloud altitudes during the Cassini-Huygens mission. http://photojournal.jpl.nasa.gov/catalog/PIA05383
Dual light field and polarization imaging using CMOS diffractive image sensors.
Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha
2015-05-15
In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.
Imaging Polarimetry in Central Serous Chorioretinopathy
MIURA, MASAHIRO; ELSNER, ANN E.; WEBER, ANKE; CHENEY, MICHAEL C.; OSAKO, MASAHIRO; USUI, MASAHIKO; IWASAKI, TAKUYA
2006-01-01
PURPOSE To evaluate a noninvasive technique to detect the leakage point of central serous chorioretinopathy (CSR), using a polarimetry method. DESIGN Prospective cohort study. METHODS SETTING Institutional practice. PATIENTS We examined 30 eyes of 30 patients with CSR. MAIN OUTCOME MEASURES Polarimetry images were recorded using the GDx-N (Laser Diagnostic Technologies). We computed four images that differed in their polarization content: a depolarized light image, an average reflectance image, a parallel polarized light image, and a birefringence image. Each polarimetry image was compared with abnormalities seen on fluorescein angiography. RESULTS In all eyes, leakage area could be clearly visualized as a bright area in the depolarized light images. Michelson contrasts for the leakage areas were 0.58 ± 0.28 in the depolarized light images, 0.17 ± 0.11 in the average reflectance images, 0.09 ± 0.09 in the parallel polarized light images, and 0.11 ± 0.21 in the birefringence images from the same raw data. Michelson contrasts in depolarized light images were significantly higher than for the other three images (P < .0001, for all tests, paired t test). The fluid accumulated in the retina was well-visualized in the average and parallel polarized light images. CONCLUSIONS Polarization-sensitive imaging could readily localize the leakage point and area of fluid in CSR. This may assist with the rapid, noninvasive assessment of CSR. PMID:16376644
Henstrand, John M.; McCue, Kent F.; Brink, Kent; Handa, Avtar K.; Herrmann, Klaus M.; Conn, Eric E.
1992-01-01
Light and fungal elicitor induce mRNA encoding 3-deoxy-d-arabino-heptulosonate 7-phosphate (DAHP) synthase in suspension cultured cells of parsley (Petroselinum crispum L.). The kinetics and dose response of mRNA accumulation were similar for DAHP synthase and phenylalanine ammonia-lyase (PAL). Six micrograms of elicitor from Phytophthora megasperma f. glycinia gave a detectable induction within 1 hour. Induction of DAHP synthase and PAL mRNAs by light was transient, reaching maximal levels at 4 hours and returning to pretreatment levels after 24 hours. Our data suggest that either light or fungal elicitor transcriptionally activates DAHP synthase. A coordinate regulation for key enzymes in the synthesis of primary and secondary metabolites is indicated. PMID:16668708
Solid-state Image Sensor with Focal-plane Digital Photon-counting Pixel Array
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Pain, Bedabrata
1997-01-01
A solid-state focal-plane imaging system comprises an NxN array of high-gain, low-noise unit cells, each connected to a different one of the photovoltaic detector diodes (one per unit cell) interspersed in the array for ultralow-level image detection, and a plurality of digital counters coupled to the outputs of the unit cells by a multiplexer (either a separate counter for each unit cell, or a row of N counters time-shared with the N rows of unit cells). Each unit cell includes two self-biasing cascode amplifiers in cascade for a high charge-to-voltage conversion gain (greater than 1 mV/e-) and an electronic switch that resets the input capacitance to a reference potential, so that detection of an incident photon can be discriminated by the photoelectron (e-) generated in the detector diode at the input of the first cascode amplifier, and incident photons can be counted individually in a digital counter connected to the output of the second cascode amplifier. Resetting the input capacitance and initiating self-biasing of the amplifiers occur on every clock cycle of an integrating period, enabling ultralow-light-level image detection by the array of photovoltaic detector diodes under conditions where the photon flux statistically provides only a single photon at a time incident on any one detector diode during any clock cycle.
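The per-cell counting logic can be sketched as a toy model: each clock cycle the input node is reset, a photoelectron produces a voltage step set by the conversion gain, and a comparator-plus-counter registers it. The gain and threshold values below are illustrative assumptions, not figures from the patent.

```python
class PhotonCountingPixel:
    """Toy model of one unit cell: every clock cycle the input node is reset,
    any photoelectron produces a voltage step of `gain_mv` (the patent calls
    for >1 mV/e-), and a comparator increments a digital counter when the
    step exceeds the threshold."""

    def __init__(self, gain_mv=1.5, threshold_mv=0.75):
        self.gain_mv = gain_mv
        self.threshold_mv = threshold_mv
        self.count = 0

    def clock(self, photoelectrons, noise_mv=0.0):
        # reset happens implicitly at the start of each cycle
        v = photoelectrons * self.gain_mv + noise_mv
        if v > self.threshold_mv:
            # at ultralow flux at most one photon is expected per cycle
            self.count += 1

pix = PhotonCountingPixel()
for e in [0, 1, 0, 0, 1, 1, 0]:   # photoelectrons arriving per clock cycle
    pix.clock(e)
```

The point of the high conversion gain is visible here: a single photoelectron's step must stand clearly above the threshold so a bare comparator can digitize it.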
Noda, Naoki; Kamimura, Shinji
2008-02-01
With conventional light microscopy, precision in the measurement of the displacement of a specimen depends on the signal-to-noise ratio when we measure the light intensity of magnified images. This implies that, to improve precision, getting brighter images and reducing background light noise are both required. For this purpose, we developed new optics for laser dark-field illumination. For the microscopy, we used a laser beam and a pair of axicons (conical lenses) to obtain an optimal condition for dark-field observations. The optics was applied to measuring two-dimensional microbead displacements with subnanometer precision. The overall bandwidth of our detection system was 10 kHz. Over most of this bandwidth, the observed noise level was as low as 0.1 nm/√Hz.
NASA Astrophysics Data System (ADS)
Jing, X.; Shao, X.; Cao, C.; Fu, X.
2013-12-01
Night-time light imagery offers a unique view of the Earth's surface. In the past, the nighttime light data collected by the DMSP-OLS sensors have been used as an efficient means of correlating with global socio-economic activities. With the launch of the Suomi National Polar-orbiting Partnership (S-NPP) satellite in October 2011, the Day Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard S-NPP represents a major advancement in night-time imaging capabilities, surpassing its predecessor DMSP-OLS in radiometric accuracy, spatial resolution, and geometric quality. In this paper, we compared the performance of DNB and DMSP imagery in correlating with regional socio-economic activity and analyzed the leading causes of the differences. The correlation coefficients between socio-economic variables, such as population and regional GDP, and the characteristic variables derived from the night-time light images of DNB and DMSP at the provincial level in China were computed as performance metrics for comparison. In general, the correlation between DNB data and socio-economic data is better than that of DMSP data. To explain the difference in correlation, we further analyzed the effects of several factors, such as radiometric saturation and quantization of DMSP data, low spatial resolution, different data acquisition times between DNB and DMSP images, and differences in the transformation used to convert digital number (DN) values to radiance.
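The performance metric described, a correlation coefficient between a light-derived variable and a socio-economic variable, reduces to a Pearson correlation. A minimal sketch, with made-up provincial numbers purely for illustration:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical provincial night-light totals vs regional GDP (illustrative only)
light_sum = [1.2, 3.5, 2.1, 5.0, 4.2]   # e.g. summed DNB radiance per province
gdp       = [110, 300, 190, 470, 380]   # e.g. regional GDP, arbitrary units
r = pearson_r(light_sum, gdp)
```

In the paper this coefficient is computed once per sensor (DNB and DMSP) and per socio-economic variable, and the two sets of coefficients are then compared.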
Bushong, Eric A; Johnson, Donald D; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H
2015-02-01
The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging.
Methods for CT automatic exposure control protocol translation between scanner platforms.
McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M
2014-03-01
An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (e.g., tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated: a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the three scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies.
Protocol translation on the basis of quantitative metrics (volumetric CT dose index or measured image noise) is feasible. Protocol translation has a dependency on patient size, especially between the GE and Siemens systems. Translation schemes that preserve dose levels may not produce identical image quality. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
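A power-law translation function of the kind the paper describes, sec = a * ref^b, can be fitted by linear regression in log-log space. The sketch below uses synthetic values; the real fit coefficients for the GE-to-Siemens translation are in the paper and are not reproduced here.

```python
import numpy as np

def fit_power_law(ref_metric, sec_metric):
    """Fit sec = a * ref**b by linear regression in log-log space.
    Returns (a, b).  This mirrors the paper's finding that a power law
    links the reference scanner's AEC scale (e.g. GE noise index) to the
    secondary scanner's scale (e.g. Siemens quality reference mAs)."""
    b, log_a = np.polyfit(np.log(ref_metric), np.log(sec_metric), 1)
    return float(np.exp(log_a)), float(b)

ref = np.array([5.0, 10.0, 20.0, 40.0])  # hypothetical reference-scanner AEC values
sec = 3.0 * ref ** 0.7                   # synthetic secondary-scanner values
a, b = fit_power_law(ref, sec)
```

In practice one such fit would be made per phantom diameter, giving the family of size-dependent translation functions the paper describes.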
Plane wave analysis of coherent holographic image reconstruction by phase transfer (CHIRPT).
Field, Jeffrey J; Winters, David G; Bartels, Randy A
2015-11-01
Fluorescent imaging plays a critical role in a myriad of scientific endeavors, particularly in the biological sciences. Three-dimensional imaging of fluorescent intensity often requires serial data acquisition, that is, voxel-by-voxel collection of fluorescent light emitted throughout the specimen with a nonimaging single-element detector. While nonimaging fluorescence detection offers some measure of scattering robustness, the rate at which dynamic specimens can be imaged is severely limited. Other fluorescent imaging techniques utilize imaging detection to enhance collection rates. A notable example is light-sheet fluorescence microscopy, also known as selective-plane illumination microscopy, which illuminates a large region within the specimen and collects emitted fluorescent light at an angle either perpendicular or oblique to the illumination light sheet. Unfortunately, scattering of the emitted fluorescent light can cause blurring of the collected images in highly turbid biological media. We recently introduced an imaging technique called coherent holographic image reconstruction by phase transfer (CHIRPT) that combines light-sheet-like illumination with nonimaging fluorescent light detection. By combining the speed of light-sheet illumination with the scattering robustness of nonimaging detection, CHIRPT is poised to have a dramatic impact on biological imaging, particularly for in vivo preparations. Here we present the mathematical formalism for CHIRPT imaging under spatially coherent illumination and present experimental data that verifies the theoretical model.
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used to stimulate accommodation and fixate the imaging area, and the illumination sources and imaging camera were moved in linkage to focus on and image different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus, so the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopters of myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.
Selections from 2017: Image Processing with AstroImageJ
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it.
AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): Image calibration: generate master flat, dark, and bias frames. Image arithmetic: combine images via subtraction, addition, division, multiplication, etc. Stack editing: easily perform operations on a series of images. Image stabilization and image alignment features. Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates. WCS coordinates: determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net. Macro and plugin support: write your own macros. Multi-aperture photometry with interactive light curve fitting: plot light curves of a star in real time. Citation: Karen A. Collins et al 2017 AJ 153 77. doi:10.3847/1538-3881/153/2/77
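The image-calibration feature (master bias, dark, and flat frames) follows the standard CCD reduction recipe. Below is a generic sketch of that recipe, not AstroImageJ's actual code; frame counts and values in the example are invented.

```python
import numpy as np

def calibrate(science, biases, darks, flats):
    """Standard CCD reduction: master bias = median of bias frames;
    master dark = median of bias-subtracted dark frames;
    master flat = normalised median of bias/dark-corrected flat frames.
    The science frame is then bias/dark-subtracted and flat-divided."""
    mbias = np.median(biases, axis=0)
    mdark = np.median([d - mbias for d in darks], axis=0)
    mflat = np.median([f - mbias - mdark for f in flats], axis=0)
    mflat /= mflat.mean()                       # normalise flat to unit mean
    return (science - mbias - mdark) / mflat
```

Median combination is used so that cosmic-ray hits in individual calibration frames do not propagate into the master frames.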
Natural image statistics mediate brightness 'filling in'.
Dakin, Steven C; Bex, Peter J
2003-11-22
Although the human visual system can accurately estimate the reflectance (or lightness) of surfaces under enormous variations in illumination, two equiluminant grey regions can be induced to appear quite different simply by placing a light-dark luminance transition between them. This illusion, the Craik-Cornsweet-O'Brien (CCOB) effect, has been taken as evidence for a low-level 'filling-in' mechanism subserving lightness perception. Here, we present evidence that the mechanism responsible for the CCOB effect operates not via propagation of a neural signal across space but by amplification of the low spatial frequency (SF) structure of the image. We develop a simple computational model that relies on the statistics of natural scenes actively to reconstruct the image that is most likely to have caused an observed series of responses across SF channels. This principle is tested psychophysically by deriving classification images (CIs) for subjects' discrimination of the contrast polarity of CCOB stimuli masked with noise. CIs resemble 'filled-in' stimuli; i.e. observers rely on portions of the stimuli that contain no information per se but that correspond closely to the reported perceptual completion. As predicted by the model, the filling-in process is contingent on the presence of appropriate low SF structure.
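The proposed mechanism, amplifying low-SF structure so the amplitude spectrum matches the ~1/f statistics of natural scenes, can be sketched in one dimension: an edge-only CCOB-like profile becomes more step-like once its low frequencies are boosted. The 1/f gain and cusp profile below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def fill_in(signal, exponent=1.0):
    """Amplify low spatial frequencies so the amplitude spectrum follows
    ~1/f natural-scene statistics (illustrative gain only)."""
    spec = np.fft.rfft(signal)
    f = np.arange(spec.size, dtype=float)
    f[0] = 1.0                       # leave the mean (DC) term untouched
    spec *= 1.0 / f ** exponent      # boost low SF relative to high SF
    return np.fft.irfft(spec, n=signal.size)

# CCOB-like profile: two equal plateaus joined by an opposing pair of cusps
n, c, tau = 256, 128, 8.0
i = np.arange(n)
ccob = np.where(i < c, -np.exp(-(c - i) / tau), np.exp(-(i - c) / tau))
filled = fill_in(ccob)
```

After filtering, the two physically identical plateaus take on different levels, mimicking the perceptual "filling in" without any explicit propagation of a signal across space.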
NASA Astrophysics Data System (ADS)
Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela
2017-05-01
Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). Like the original Retinex implementation, TR and ETR scan the neighborhood of each image pixel by paths and rescale its chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR scarcely practicable, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.
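The path-based rescaling common to Retinex variants can be sketched with plain random walks (TR and ETR instead use edge-aware, swarm-guided paths, which is their key contribution): each pixel's lightness is its intensity relative to the maximum intensity found along paths starting from it, averaged over paths.

```python
import random

def path_retinex(img, n_paths=20, path_len=30, seed=0):
    """Minimal random-path Retinex sketch (NOT TR/ETR's swarm-based scanning):
    each pixel's output is its intensity divided by the maximum intensity
    encountered along random walks from that pixel, averaged over walks."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for _ in range(n_paths):
                my, mx, best = y, x, img[y][x]
                for _ in range(path_len):
                    my = min(h - 1, max(0, my + rng.choice((-1, 0, 1))))
                    mx = min(w - 1, max(0, mx + rng.choice((-1, 0, 1))))
                    best = max(best, img[my][mx])  # running path maximum
                acc += img[y][x] / best            # rescale to the path maximum
            out[y][x] = acc / n_paths
    return out
```

Replacing these random walks with paths that follow image edges and minimize an energy functional is precisely what distinguishes TR and ETR from this baseline.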
NASA Astrophysics Data System (ADS)
Nogo, Kosuke; Mori, Keita; Qi, Wei; Hosono, Satsuki; Kawashima, Natsumi; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro
2016-03-01
We proposed ultrasonic-assisted spectroscopic imaging for blood-glucose-level monitoring during dialytic therapy. Optical scattering and absorption caused by blood cells deteriorate the detection accuracy for glucose dissolved in plasma. Ultrasonic standing waves can agglomerate blood cells at the nodes; around the anti-node regions, by contrast, the amount of transmitted light increases because relatively clear plasma appears as the number of blood cells declines. The proposed method can thus disperse the transmitted light of plasma without time-consuming pretreatment such as centrifugation. To realize a thumb-size glucose sensor that can easily be attached to dialysis tubes, both the ultrasonic standing-wave generator and the spectroscopic imager must be small. The ultrasonic oscillators are ∅30 [mm], and their drive circuit, currently 41×55×45 [mm], is expected to become smaller. The trial apparatus of the proposed one-shot Fourier spectroscopic imager, whose size is 30×30×48 [mm], can also be reduced to little-finger size in principle. In the experiment, we separated a suspension of water and microspheres (∅10 [μm]) into particle and liquid regions with an ultrasonic standing wave (frequency: 2 [MHz]). Furthermore, the spectrum of light transmitted through the suspension could be obtained in the visible region with a white LED.
NASA Astrophysics Data System (ADS)
Valiya Peedikakkal, Liyana; Cadby, Ashley
2017-02-01
Localization-based super-resolution imaging of a biological sample is generally achieved using high-power laser illumination with long exposure times, which unfortunately increases the photo-toxicity to the sample, making super-resolution microscopy in general incompatible with live-cell imaging. Furthermore, photobleaching limits the ability to acquire time-lapse images of live biological cells using fluorescence microscopy. Digital Light Processing (DLP) technology can deliver light at grey-scale levels by flickering digital micromirrors at around 290 Hz, enabling highly controlled power delivery to samples. In this work, a Digital Micromirror Device (DMD) is implemented in an inverse Schiefspiegler telescope setup to control the power and pattern of illumination for super-resolution microscopy. We achieve spatial and temporal patterning of the illumination by controlling the DMD pixel by pixel; the DMD allows us to control the power and spatial extent of the laser illumination. We have used this to show that we can reduce the power delivered to the sample to allow longer imaging in one area while achieving sub-diffraction STORM imaging in another using higher power densities.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
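The core of the photogrammetric reconstruction, projecting a 2-D light sheet image pixel into 3-D space, amounts to intersecting the camera ray for that pixel with the known light-sheet plane. A minimal sketch of that intersection, with the camera model and calibration omitted:

```python
import numpy as np

def pixel_to_sheet_point(cam_pos, ray_dir, plane_point, plane_normal):
    """Back-project a camera ray onto the light-sheet plane: solve for t in
    cam_pos + t * ray_dir subject to dot(plane_normal, X - plane_point) = 0."""
    cam_pos, ray_dir = np.asarray(cam_pos, float), np.asarray(ray_dir, float)
    plane_point = np.asarray(plane_point, float)
    plane_normal = np.asarray(plane_normal, float)
    denom = plane_normal @ ray_dir
    if abs(denom) < 1e-12:
        return None  # ray parallel to the light sheet: no intersection
    t = plane_normal @ (plane_point - cam_pos) / denom
    return cam_pos + t * ray_dir

# hypothetical geometry: camera 10 units above a light sheet in the z=0 plane
p = pixel_to_sheet_point([0, 0, 10], [1, 0, -1], [0, 0, 0], [0, 0, 1])
```

Knowing the camera and light sheet positions and orientations, as the process requires, is what turns each video frame into a set of such rays and hence into 3-D points.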
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
Masella, Benjamin D; Williams, David R; Fischer, William S; Rossi, Ethan A; Hunter, Jennifer J
2014-05-20
Many retinal imaging instruments use infrared wavelengths to reduce the risk of light damage. However, we have discovered that exposure to infrared illumination causes a long-lasting reduction in infrared autofluorescence (IRAF). We have characterized the dependence of this effect on radiant exposure and investigated its origin. A scanning laser ophthalmoscope was used to obtain IRAF images from two macaques before and after exposure to 790-nm light (15-450 J/cm²). Exposures were performed with either raster-scanning or uniform illumination. Infrared autofluorescence images also were obtained in two humans exposed to 790-nm light in a separate study. Humans were assessed with direct ophthalmoscopy, Goldmann visual fields, multifocal ERG, and photopic microperimetry to determine whether these measures revealed any effects in the exposed locations. A significant decrease in IRAF after exposure to infrared light was seen in both monkeys and humans. In monkeys, the magnitude of this reduction increased with retinal radiant exposure. Partial recovery was seen at 1 month, with full recovery within 21 months. Consistent with a photochemical origin, IRAF decreases caused by either raster-scanning or uniform illumination were not significantly different. We were unable to detect any effect of the light exposure with any measure other than IRAF imaging. We cannot exclude the possibility that changes could be detected with more sensitive tests or longer follow-up. This long-lasting effect of infrared illumination in both humans and monkeys occurs at exposure levels four to five times below current safety limits. The photochemical basis for this phenomenon remains unknown. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
NASA Astrophysics Data System (ADS)
Myllylä, Teemu S.; Sorvoja, Hannu S. S.; Nikkinen, Juha; Tervonen, Osmo; Kiviniemi, Vesa; Myllylä, Risto A.
2011-07-01
Our goal is to provide a cost-effective method for examining human tissue, particularly the brain, by the simultaneous use of functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). Due to its compatibility requirements, MRI poses a demanding challenge for NIRS measurements. This paper focuses particularly on presenting the instrumentation and a method for the non-invasive measurement of NIR light absorbed in human tissue during MR imaging. One practical method to avoid disturbances in MR imaging involves using long fibre bundles to enable conducting the measurements at some distance from the MRI scanner. This setup in fact serves a dual purpose, since the NIRS device will also be less disturbed by the MRI scanner. However, measurements based on long fibre bundles suffer from light attenuation. Furthermore, because one of our primary goals was to make the measuring method as cost-effective as possible, we used high-power light emitting diodes instead of more expensive lasers. The use of LEDs, however, limits the maximum output power which can be extracted to illuminate the tissue. To meet these requirements, we improved methods of emitting light sufficiently deep into tissue. We also show how to measure NIR light of a very small power level that scatters from the tissue in the MRI environment, which is characterized by strong electromagnetic interference. In this paper, we present the implemented instrumentation and measuring method and report on test measurements conducted during MRI scanning. These measurements were performed in MRI operating rooms housing 1.5 Tesla-strength closed MRI scanners (manufactured by GE) in the Dept. of Diagnostic Radiology at the Oulu University Hospital.
Motion of glossy objects does not promote separation of lighting and surface colour
2017-01-01
The surface properties of an object, such as texture, glossiness or colour, provide important cues to its identity. However, the actual visual stimulus received by the eye is determined by both the properties of the object and the illumination. We tested whether operational colour constancy for glossy objects (the ability to distinguish changes in spectral reflectance of the object, from changes in the spectrum of the illumination) was affected by rotational motion of either the object or the light source. The different chromatic and geometric properties of the specular and diffuse reflections provide the basis for this discrimination, and we systematically varied specularity to control the available information. Observers viewed animations of isolated objects undergoing either lighting or surface-based spectral transformations accompanied by motion. By varying the axis of rotation, and surface patterning or geometry, we manipulated: (i) motion-related information about the scene, (ii) relative motion between the surface patterning and the specular reflection of the lighting, and (iii) image disruption caused by this motion. Despite large individual differences in performance with static stimuli, motion manipulations neither improved nor degraded performance. As motion significantly disrupts frame-by-frame low-level image statistics, we infer that operational constancy depends on a high-level scene interpretation, which is maintained in all conditions. PMID:29291113
Light-Driven Nano-oscillators for Label-Free Single-Molecule Monitoring of MicroRNA.
Chen, Zixuan; Peng, Yujiao; Cao, Yue; Wang, Hui; Zhang, Jian-Rong; Chen, Hong-Yuan; Zhu, Jun-Jie
2018-06-13
Here, we present a mapping tool based on individual light-driven nano-oscillators for label-free single-molecule monitoring of microRNA. This design uses microRNA as a single-molecule damper for nano-oscillators by forming a rigid dual-strand structure in the gap between nano-oscillators and the immobilized surface. The ultrasensitive detection is attributed to comparable dimensions of the gap and microRNA. A developed surface plasmon-coupled scattering imaging technology enables us to directly measure the real-time gap distance vibration of multiple nano-oscillators with high accuracy and fast dynamics. High-level and low-level states of the oscillation amplitude indicate the melted and hybridized states of the microRNA. The lifetimes of the two states reveal that the hybridization rate of microRNA is determined by three-dimensional diffusion. This imaging technique has potential applications in single-molecule detection and nanomechanics studies.
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
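One of the proposed test procedures, gray level uniformity, can be illustrated with a simple patch-based metric over a nominally flat capture. This is a hypothetical sketch, not the FBI specification's actual definition; the patch size and coefficient-of-variation metric are assumptions:

```python
import numpy as np

def gray_level_uniformity(image, patch=8):
    """Coefficient of variation of patch means over a flat target.

    The image is tiled into patch x patch blocks; the spread of the
    block means relative to their average quantifies non-uniformity.
    Lower is better; a compliance test would compare this value
    against a specification threshold.
    """
    h, w = image.shape
    h -= h % patch
    w -= w % patch
    patches = image[:h, :w].reshape(h // patch, patch, w // patch, patch)
    means = patches.mean(axis=(1, 3))
    return means.std() / means.mean()

# A perfectly uniform capture of a flat gray target
flat = np.full((64, 64), 200.0)
print(gray_level_uniformity(flat))  # 0.0 for a perfectly uniform response
```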
Human vision is attuned to the diffuseness of natural light
Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.
2014-01-01
All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864
Theory and analysis of a large field polarization imaging system with obliquely incident light.
Lu, Xiaotian; Jin, Weiqi; Li, Li; Wang, Xia; Qiu, Su; Liu, Jing
2018-02-05
Polarization imaging technology provides information about not only the irradiance of a target but also the degree of polarization and angle of polarization, indicating extensive application potential. However, polarization imaging theory is based on paraxial optics. When a beam of obliquely incident light passes an analyser, the direction of light propagation is not perpendicular to the surface of the analyser, and the applicability of the traditional paraxial optical polarization imaging theory is challenged. This paper investigates a theoretical model of a polarization imaging system with obliquely incident light and establishes a polarization imaging transmission model with a large field of obliquely incident light. In an imaging experiment with an integrating sphere light source and rotatable polarizer, the polarization imaging transmission model is verified and analysed for two cases of natural light and linearly polarized light incidence. Although the results indicate that the theoretical model is consistent with the experimental results, the theoretical model distinctly differs from the traditional paraxial approximation model. The results confirm the accuracy and necessity of the theoretical model and its significance in guiding theoretical and systematic research on large-field polarization imaging.
System-level analysis and design for RGB-NIR CMOS camera
NASA Astrophysics Data System (ADS)
Geelen, Bert; Spooren, Nick; Tack, Klaas; Lambrechts, Andy; Jayapala, Murali
2017-02-01
This paper presents system-level analysis of a sensor capable of simultaneously acquiring both standard absorption based RGB color channels (400-700nm, 75nm FWHM), as well as an additional NIR channel (central wavelength: 808 nm, FWHM: 30nm collimated light). Parallel acquisition of RGB and NIR info on the same CMOS image sensor is enabled by monolithic pixel-level integration of both a NIR pass thin film filter and NIR blocking filters for the RGB channels. This overcomes the need for a standard camera-level NIR blocking filter to remove the NIR leakage present in standard RGB absorption filters from 700-1000nm. Such a camera-level NIR blocking filter would inhibit the acquisition of the NIR channel on the same sensor. Thin film filters do not operate in isolation. Rather, their performance is influenced by the system context in which they operate. The spectral distribution of light arriving at the photodiode is shaped by, among other factors, the illumination spectral profile, optical component transmission characteristics and sensor quantum efficiency. For example, knowledge of a low quantum efficiency (QE) of the CMOS image sensor above 800nm may reduce the filter's blocking requirements and simplify the filter structure. Similarly, knowledge of the incoming light angularity as set by the objective lens' F/# and exit pupil location may be taken into account during the thin film's optimization. This paper demonstrates how knowledge of the application context can facilitate filter design and relax design trade-offs and presents experimental results.
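The system-level signal chain described here (illumination spectrum times filter transmission times sensor QE, integrated over wavelength) can be sketched numerically. The spectra below are toy assumptions, not measured data from the paper:

```python
import numpy as np

def channel_signal(wavelengths, illumination, filter_T, qe):
    """Detected signal: integrate illumination x filter x QE over wavelength.

    Trapezoidal integration, written out explicitly for portability.
    A low sensor QE in a band relaxes how strongly the thin-film
    filter must block that band, which is the system-level trade-off
    the abstract describes.
    """
    y = illumination * filter_T * qe
    return float(np.sum((y[1:] + y[:-1]) * np.diff(wavelengths)) / 2.0)

wl = np.linspace(400, 1000, 601)   # nm, 1 nm steps
illum = np.ones_like(wl)           # flat illuminant (assumption)
# Toy red filter with 1% NIR leakage above 700 nm (assumption)
red_T = np.where((wl > 580) & (wl < 700), 0.9,
                 np.where(wl >= 700, 0.01, 0.0))
# Sensor QE falling off toward the NIR (assumption)
qe = np.clip(1.0 - (wl - 400) / 800, 0.05, 1.0)

nir = wl >= 700
leak = channel_signal(wl[nir], illum[nir], red_T[nir], qe[nir])
total = channel_signal(wl, illum, red_T, qe)
print(f"NIR leakage fraction of red channel: {leak / total:.3f}")
```

With these toy spectra, the falling NIR quantum efficiency keeps the leakage fraction small even with only 1% filter blocking, illustrating why QE knowledge can relax filter requirements.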
NASA Astrophysics Data System (ADS)
Jo, Youngju; Jung, Jaehwang; Lee, Jee Woong; Shin, Della; Park, Hyunjoo; Nam, Ki Tae; Park, Ji-Ho; Park, Yongkeun
2014-05-01
Two-dimensional angle-resolved light scattering maps of individual rod-shaped bacteria are measured at the single-cell level. Using quantitative phase imaging and Fourier transform light scattering techniques, the light scattering patterns of individual bacteria in four rod-shaped species (Bacillus subtilis, Lactobacillus casei, Synechococcus elongatus, and Escherichia coli) are measured with unprecedented sensitivity in a broad angular range from -70° to 70°. The measured light scattering patterns are analyzed along the two principal axes of rod-shaped bacteria in order to systematically investigate the species-specific characteristics of anisotropic light scattering. In addition, the cellular dry mass of individual bacteria is calculated and used to demonstrate that the cell-to-cell variations in light scattering within bacterial species is related to the cellular dry mass and growth.
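Fourier transform light scattering obtains the angle-resolved far-field pattern from the complex field measured by quantitative phase imaging. A minimal sketch of that core step; the angular calibration and the bacterial data are omitted:

```python
import numpy as np

def ftls_pattern(amplitude, phase):
    """Far-field scattering intensity from a measured complex field.

    Fourier transform light scattering: the angle-resolved scattered
    intensity is the squared modulus of the 2-D Fourier transform of
    the complex field, with spatial frequency mapping to scattering
    angle via the wavelength and pixel size (mapping not shown here).
    """
    field = amplitude * np.exp(1j * phase)
    far_field = np.fft.fftshift(np.fft.fft2(field))
    return np.abs(far_field) ** 2

# A uniform field scatters only into the zero-angle (DC) component
intensity = ftls_pattern(np.ones((64, 64)), np.zeros((64, 64)))
peak = np.unravel_index(np.argmax(intensity), intensity.shape)
print(peak)  # (32, 32): all power at zero spatial frequency after fftshift
```

A real sample's phase image would redistribute power away from the center, and profiles along the two principal axes of a rod-shaped cell give the anisotropic patterns analyzed in the abstract.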
Utilizing Light-field Imaging Technology in Neurosurgery.
Chen, Brian R; Buchanan, Ian A; Kellis, Spencer; Kramer, Daniel; Ohiorhenuan, Ifije; Blumenfeld, Zack; Grisafe Ii, Dominic J; Barbaro, Michael F; Gogia, Angad S; Lu, James Y; Chen, Beverly B; Lee, Brian
2018-04-10
Traditional still cameras can only focus on a single plane for each image while rendering everything outside of that plane out of focus. However, new light-field imaging technology makes it possible to adjust the focus plane after an image has already been captured. This technology allows the viewer to interactively explore an image with objects and anatomy at varying depths and clearly focus on any feature of interest by selecting that location during post-capture viewing. These images with adjustable focus can serve as valuable educational tools for neurosurgical residents. We explore the utility of light-field cameras and review their strengths and limitations compared to other conventional types of imaging. The strength of light-field images is the adjustable focus, as opposed to the fixed-focus of traditional photography and video. A light-field image also is interactive by nature, as it requires the viewer to select the plane of focus and helps with visualizing the three-dimensional anatomy of an image. Limitations include the relatively low resolution of light-field images compared to traditional photography and video. Although light-field imaging is still in its infancy, there are several potential uses for the technology to complement traditional still photography and videography in neurosurgical education.
Apparatus and method for a light direction sensor
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2011-01-01
The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor, a spacer attached to the image sensor, and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that, as light passes through it, it casts a diffraction pattern onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments. The beam of light is then diffracted onto an image sensor, and the direction of the light source is determined.
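As a rough illustration of how such a sensor can recover direction: the centroid shift of the pattern on the image sensor, divided by the mask-to-sensor spacing, gives the tangent of the incidence angle. This is a simplified sketch, not the patented method; all names and parameters are illustrative:

```python
import numpy as np

def light_direction(image, pixel_pitch, spacer_thickness, ref_centroid):
    """Estimate incidence angles from the pattern centroid shift.

    The pattern cast through the mask translates across the sensor as
    the source tilts; the shift divided by the mask-to-sensor spacing
    gives the tangent of the incidence angle along each axis.
    Units: pixel_pitch and spacer_thickness in the same length unit.
    """
    ys, xs = np.indices(image.shape)
    total = image.sum()
    cx = (xs * image).sum() / total
    cy = (ys * image).sum() / total
    dx = (cx - ref_centroid[0]) * pixel_pitch
    dy = (cy - ref_centroid[1]) * pixel_pitch
    return (np.degrees(np.arctan2(dx, spacer_thickness)),
            np.degrees(np.arctan2(dy, spacer_thickness)))

# Synthetic pattern: a bright spot shifted 100 px from the reference
img = np.zeros((200, 200))
img[100, 150] = 1.0
ax, ay = light_direction(img, pixel_pitch=0.005,
                         spacer_thickness=0.5, ref_centroid=(50, 100))
print(round(ax, 1), round(ay, 1))  # 45.0 0.0 degrees
```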
Eum, Juneyong; Kwak, Jina; Kim, Hee Joung; Ki, Seoyoung; Lee, Kooyeon; Raslan, Ahmed A.; Park, Ok Kyu; Chowdhury, Md Ashraf Uddin; Her, Song; Kee, Yun; Kwon, Seung-Hae; Hwang, Byung Joon
2016-01-01
Environmental contamination by trinitrotoluene is of global concern due to its widespread use in military ordnance and commercial explosives. Despite known long-term persistence in groundwater and soil, the toxicological profile of trinitrotoluene and other explosive wastes have not been systematically measured using in vivo biological assays. Zebrafish embryos are ideal model vertebrates for high-throughput toxicity screening and live in vivo imaging due to their small size and transparency during embryogenesis. Here, we used Single Plane Illumination Microscopy (SPIM)/light sheet microscopy to assess the developmental toxicity of explosive-contaminated water in zebrafish embryos and report 2,4,6-trinitrotoluene-associated developmental abnormalities, including defects in heart formation and circulation, in 3D. Levels of apoptotic cell death were higher in the actively developing tissues of trinitrotoluene-treated embryos than controls. Live 3D imaging of heart tube development at cellular resolution by light-sheet microscopy revealed trinitrotoluene-associated cardiac toxicity, including hypoplastic heart chamber formation and cardiac looping defects, while the real time PCR (polymerase chain reaction) quantitatively measured the molecular changes in the heart and blood development supporting the developmental defects at the molecular level. Identification of cellular toxicity in zebrafish using the state-of-the-art 3D imaging system could form the basis of a sensitive biosensor for environmental contaminants and be further valued by combining it with molecular analysis. PMID:27869673
Mathematical model of a DIC position sensing system within an optical trap
NASA Astrophysics Data System (ADS)
Wulff, Kurt D.; Cole, Daniel G.; Clark, Robert L.
2005-08-01
The quantitative study of displacements and forces of motor proteins, and of processes that occur at the microscopic level and below, requires a high level of sensitivity. For optical traps, two techniques for position sensing have been accepted and used quite extensively: quadrant photodiodes and an interferometric position sensing technique based on DIC imaging. While quadrant photodiodes have been studied in depth and mathematically characterized, a mathematical characterization of the interferometric position sensor has not, to the authors' knowledge, been presented. The interferometric position sensing method builds on the DIC imaging capabilities of a microscope. Circularly polarized light is sent into the microscope and the Wollaston prism used for DIC imaging splits the beam into its orthogonal components, displacing them by a set distance determined by the user. The distance between the axes of the beams is set so the beams overlap at the specimen plane and effectively share the trapped microsphere. A second prism then recombines the light beams and the exiting laser light's polarization is measured and related to position. In this paper we outline the mathematical characterization of a microsphere suspended in an optical trap using a DIC position sensing method. The sensitivity of this mathematical model is then compared to the QPD model. The mathematical model of a microsphere in an optical trap can serve as a calibration curve for an experimental setup.
Image processing and data reduction of Apollo low light level photographs
NASA Technical Reports Server (NTRS)
Alvord, G. C.
1975-01-01
The removal of the lens induced vignetting from a selected sample of the Apollo low light level photographs is discussed. The methods used were developed earlier. A study of the effect of noise on vignetting removal and the comparability of the Apollo 35mm Nikon lens vignetting was also undertaken. The vignetting removal was successful to about 10% photometry, and noise has a severe effect on the useful photometric output data. Separate vignetting functions must be used for different flights since the vignetting function varies from camera to camera in size and shape.
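Vignetting removal of the kind described is, at its core, a flat-field division by the lens falloff function. Below is a generic sketch, not the Apollo-specific procedure; the cos⁴ falloff model is an assumption used only to build a test image:

```python
import numpy as np

def remove_vignetting(image, vignetting, eps=1e-6):
    """Divide out a normalized vignetting function (flat-field correction).

    `vignetting` is the lens falloff map, 1.0 at the optical axis and
    decreasing toward the corners; dividing restores true intensities.
    Noise in the dark corners is amplified by the same factor, which
    is consistent with the abstract's finding that noise severely
    limits photometric accuracy.
    """
    return image / np.maximum(vignetting, eps)

# Radial cos^4 falloff, a common model for simple lenses (assumption)
h, w = 4, 4
ys, xs = np.indices((h, w))
r = np.hypot(xs - w / 2, ys - h / 2)
vig = np.cos(np.arctan(r / 4)) ** 4
corrected = remove_vignetting(np.ones((h, w)) * vig, vig)
print(np.allclose(corrected, 1.0))  # True: a flat scene is restored
```

Because the falloff function differs from camera to camera, as the abstract notes, a separate `vignetting` map must be measured for each flight's lens.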
Advances in photon counting for bioluminescence
NASA Astrophysics Data System (ADS)
Ingle, Martin B.; Powell, Ralph
1998-11-01
Photon counting systems were originally developed by the astronomical community for astronomy. However, a major application area is the study of luminescent probes in living plants, fish, and cell cultures. For these applications, it has been necessary to develop camera systems capable of operating at very low light levels -- a few photons occasionally -- and also at reasonably high light levels, so that the systems can be focused and can collect quality images of the object under study. The paper presents new data on MTF at extremely low photon flux and conventional ICCD illumination, counting efficiency, and dark noise as a function of temperature.
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging system is now widely used to automatically cut and polish the uncut lens based on the spectacle frame shape data which is obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquire the frame shape data works in the contact scanning mode with a probe tracing around the groove contour of the spectacle frame which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light to measure the three dimensional (3D) data of the spectacle frame is proposed. First we focus on the processing approach solving the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposures and high dynamic range imaging are introduced to obtain a high-quality image of structured light stripes. Then, the Gamma transform and median filtering are applied to enhance image contrast. In order to get rid of background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific morphological structure-elements is adopted to remove noise between stripes and boundary of the spectacle frame. By further fringe center extraction and depth information acquisition through a look-up-table method, the 3D shape of the spectacle frame is recovered.
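The fringe center extraction step mentioned at the end can be sketched with a standard gray-weighted centroid per image column. This is one common sub-pixel method, not necessarily the exact one used by the authors:

```python
import numpy as np

def fringe_centers(image, threshold=0.1):
    """Gray-weighted centroid of the stripe in each image column.

    For each column, pixels above `threshold` are assumed to belong
    to the projected stripe; their intensity-weighted mean row gives
    a sub-pixel estimate of the fringe center line.
    """
    rows = np.arange(image.shape[0])[:, None]
    weights = np.where(image > threshold, image, 0.0)
    col_sum = weights.sum(axis=0)
    centers = (rows * weights).sum(axis=0) / np.where(col_sum > 0, col_sum, 1.0)
    return np.where(col_sum > 0, centers, np.nan)

# Synthetic stripe: Gaussian profile centered on row 20.5 in every column
rows = np.arange(50)[:, None]
img = np.exp(-((rows - 20.5) ** 2) / 8.0) * np.ones((50, 30))
c = fringe_centers(img)
print(round(float(c[0]), 2))  # 20.5: sub-pixel center recovered by symmetry
```

The per-column centers, combined with the calibrated look-up table the abstract mentions, would then yield depth along the frame groove.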
NASA Astrophysics Data System (ADS)
Li, David S.; Yoon, Soon Joon; Matula, Thomas J.; O'Donnell, Matthew; Pozzo, Lilo D.
2017-03-01
A new light and sound sensitive nanoemulsion contrast agent is presented. The agents feature a low boiling point liquid perfluorocarbon core and a broad light spectrum absorbing polypyrrole (PPy) polymer shell. The PPy coated nanoemulsions can reversibly convert from liquid to gas phase upon cavitation of the liquid perfluorocarbon core. Cavitation can be initiated using a sufficiently high intensity acoustic pulse or from heat generation due to light absorption from a laser pulse. The emulsions can be made between 150 and 350 nm in diameter, and PPy has a broad optical absorption covering the visible spectrum and extending into the near-infrared (peak absorption 1053 nm). The size, structure, and optical absorption properties of the PPy coated nanoemulsions were characterized and compared to PPy nanoparticles (no liquid core) using dynamic light scattering, ultraviolet-visible spectrophotometry, transmission electron microscopy, and small angle X-ray scattering. The cavitation threshold and signal intensity were measured as a function of both acoustic pressure and laser fluence. Overlapping simultaneous transmission of an acoustic and laser pulse can significantly reduce the activation energy of the contrast agents to levels lower than optical or acoustic activation alone. We also demonstrate that simultaneous light and sound cavitation of the agents can be used in a new sono-photoacoustic imaging method, which enables greater sensitivity than traditional photoacoustic imaging.
Frangioni, John V [Wayland, MA
2012-07-24
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.
System and Method for Scan Range Gating
NASA Technical Reports Server (NTRS)
Lindemann, Scott (Inventor); Zuk, David M. (Inventor)
2017-01-01
A system for scanning light to define a range gated signal includes a pulsed coherent light source that directs light into the atmosphere, a light gathering instrument that receives the light modified by atmospheric backscatter and transfers the light onto an image plane, a scanner that scans collimated light from the image plane to form a range gated signal from the light modified by atmospheric backscatter, a control circuit that coordinates timing of a scan rate of the scanner and a pulse rate of the pulsed coherent light source so that the range gated signal is formed according to a desired range gate, an optical device onto which an image of the range gated signal is scanned, and an interferometer to which the image of the range gated signal is directed by the optical device. The interferometer is configured to modify the image according to a desired analysis.
High Resolution Imaging from the Stratosphere: Atmospheric Seeing and Tether Dynamics
NASA Technical Reports Server (NTRS)
Ford, Holland
2003-01-01
A balloon-borne telescope that is capable of imaging planets orbiting nearby stars requires that the flatness and tilt of the wavefront of the light entering that telescope meet certain stringent conditions. The atmosphere through which the light propagates distorts the wavefront due to turbulence in the atmosphere and due to disturbances caused by the balloon itself. The magnitude of these effects may be estimated, but no direct measurements have been made at the level of precision necessary for designing a telescope as demanding as we envision. Therefore, under this grant we carried out a study of techniques that could be used to make an in situ measurement of the distortion of the optical wavefront.
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun
2016-09-01
Although iodine 131 (I-131) is used for radionuclide therapy, high resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta emitting radionuclides, and I-131 (606 keV maximum beta energy) is a candidate to obtain high resolution images. We developed a high energy gamma camera system for I-131 radionuclide and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system to compare the simultaneously measured images of these two modalities. The high energy gamma imaging detector used 0.85-mm×0.85-mm×10-mm thick GAGG scintillator pixels arranged in a 44×44 matrix with a 0.1-mm thick reflector and optically coupled to a Hamamatsu 2 in. square position sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high sensitivity cooled CCD camera. The Cerenkov-light imaging system was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov-light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were respectively 3 mm FWHM and 10 cps/MBq for the high sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of these two modalities.
Our developed gamma-photon/Cerenkov-light hybrid imaging system will be useful to evaluate the advantages and disadvantages of these two modalities.
Stray light characteristics of the diffractive telescope system
NASA Astrophysics Data System (ADS)
Liu, Dun; Wang, Lihua; Yang, Wei; Wu, Shibin; Fan, Bin; Wu, Fan
2018-02-01
Diffractive telescope technology is an innovative solution for constructing large, lightweight space telescopes. However, the nondesign orders of diffractive optical elements (DOEs) may degrade imaging performance as stray light. To study the stray light characteristics of a diffractive telescope, a prototype was developed and its stray light analysis model was established. The stray light characteristics, including ghost images, point source transmittance, and veiling glare index (VGI), were analyzed. During the star imaging test of the prototype, ghost images appeared around the star image as the exposure time of the charge-coupled device increased, consistent with the simulation results. The measured VGI was 67.11%, slightly higher than the calculated value of 57.88%. The study shows that same-order diffraction of the diffractive primary lens and the correcting DOE is the main factor causing ghost images. Stray light sources outside the field of view can illuminate the image plane through nondesign-order diffraction of the primary lens and contribute more than 90% of the stray light flux on the image plane. It is expected that this work will provide guidance for optimizing the imaging performance of diffractive telescopes.
Interaction of Light and Ethylene on Stem Gravitropism
NASA Technical Reports Server (NTRS)
Harrison, Marcia A.
1996-01-01
The major objective of this study was to evaluate light-regulated ethylene production during gravitropic bending in etiolated pea stems. Previous investigations indicated that ethylene production increases after gravistimulation and is associated with the later (counter-reactive) phase of bending. Additionally, changes in the counter-reaction and locus of curvature during gravitropism are greatly influenced by red light and ethylene production. Ethylene production may be regulated by the levels of available precursor (1-aminocyclopropane-1-carboxylic acid, ACC) via its synthesis, conjugation to malonyl-ACC or glutamyl-ACC, or oxidation to ethylene. The regulation of ethylene production was examined by quantifying ACC and conjugated-ACC levels in gravistimulated pea stems. Also measured were the changes in protein and enzyme activity associated with gravitropic curvature, using electrophoretic and spectrophotometric techniques. An image analysis system was used to visualize and quantify enzymatic activity and transcriptional products in gravistimulated and red-light-treated etiolated pea stem tissues.
Shaded Relief of Rio Sao Francisco, Brazil
2000-02-14
This topographic image acquired by SRTM shows an area south of the Sao Francisco River in Brazil. The scrub forest terrain shows relief of about 400 meters (1300 feet). Areas such as these are difficult to map by traditional methods because of frequent cloud cover and local inaccessibility. This region has little topographic relief, but even subtle changes in topography have far-reaching effects on regional ecosystems. The image covers an area of 57 km x 79 km and represents one quarter of the 225 km SRTM swath. Colors range from dark blue at water level to white and brown at hill tops. The terrain features that are clearly visible in this image include tributaries of the Sao Francisco, the dark-blue branch-like features visible from top right to bottom left, and on the left edge of the image, and hills rising up from the valley floor. The Sao Francisco River is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, forestation and human influences on ecosystems. This shaded relief image was generated using topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. On flatter surfaces, the pattern of light and shadows can reveal subtle features in the terrain. Shaded relief maps are commonly used in applications such as geologic mapping and land use planning. http://photojournal.jpl.nasa.gov/catalog/PIA02700
NASA Astrophysics Data System (ADS)
Marchand, Paul J.; Bouwens, Arno; Shamaei, Vincent; Nguyen, David; Extermann, Jerome; Bolmont, Tristan; Lasser, Theo
2016-03-01
Magnetic Resonance Imaging (MRI) has revolutionised our understanding of brain function through its ability to image human cerebral structures non-invasively over the entire brain. By exploiting the different magnetic properties of oxygenated and deoxygenated blood, functional MRI (fMRI) can indirectly map areas undergoing neural activation. Alongside the development of fMRI, powerful statistical tools have been developed in an effort to shed light on the neural pathways involved in processing sensory and cognitive information. In spite of the major improvements made in fMRI technology, its spatial resolution of hundreds of microns prevents MRI from resolving and monitoring processes occurring at the cellular level. In this regard, Optical Coherence Microscopy (OCM) is an ideal instrument, as it can image at high spatio-temporal resolution. Moreover, by measuring the mean and the width of the Doppler spectra of light scattered by moving particles, OCM allows extraction of the axial and lateral velocity components of red blood cells. The ability to assess total blood velocity quantitatively, as opposed to classical axial-velocity Doppler OCM, is of paramount importance in brain imaging, as a large proportion of the cortical vasculature is oriented perpendicularly to the optical axis. We combine here quantitative blood flow imaging with extended-focus Optical Coherence Microscopy and Statistical Parametric Mapping tools to generate maps of stimulus-evoked cortical hemodynamics at the capillary level.
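The Doppler relations behind this velocimetry, axial velocity from the mean Doppler frequency and a lateral estimate from the spectral width, can be sketched as below. This is a simplified illustration, not the authors' implementation: the exact prefactors depend on the instrument, and the parameter names (`na` for the detection numerical aperture, the default refractive index of 1.33 for tissue) are assumptions.

```python
def velocity_components(f_mean, f_width, wavelength, n=1.33, na=0.1):
    """Simplified Doppler OCM relations (illustrative prefactors):
    axial velocity from the mean Doppler frequency shift, and a
    lateral-velocity estimate from the Doppler spectral width."""
    v_axial = f_mean * wavelength / (2.0 * n)      # classical Doppler shift
    v_lateral = f_width * wavelength / (2.0 * na)  # bandwidth-broadening estimate
    return v_axial, v_lateral
```

Only the combination of both quantities distinguishes a slow, axially flowing cell from a fast, transversely flowing one, which is why the abstract stresses total (not just axial) velocity.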
Detecting Exoplanets with the New Worlds Observer: The Problem of Exozodiacal Dust
NASA Technical Reports Server (NTRS)
Roberge, A.; Noecker, M. C.; Glassman, T. M.; Oakley, P.; Turnbull, M. C.
2009-01-01
Dust coming from asteroids and comets will strongly affect direct imaging and characterization of terrestrial planets in the Habitable Zones of nearby stars. Such dust in the Solar System is called the zodiacal dust (or 'zodi' for short). Higher levels of similar dust are seen around many nearby stars, confined in disks called debris disks. Future high-contrast images of an Earth-like exoplanet will very likely be background-limited by light scattered off both the local Solar System zodi and the circumstellar dust in the extrasolar system (the exozodiacal dust). Clumps in the exozodiacal dust, which are expected in planet-hosting systems, may also be a source of confusion. Here we discuss the problems associated with imaging an Earth-like planet in the presence of unknown levels of exozodiacal dust. Basic formulae for the exoplanet imaging exposure time as a function of star, exoplanet, zodi, exozodi, and telescope parameters will be presented. To examine the behavior of these formulae, we apply them to the New Worlds Observer (NWO) mission. NWO is a proposed 4-meter UV/optical/near-IR telescope with a free-flying starshade to suppress the light from a nearby star and achieve the high contrast needed for detection and characterization of a terrestrial planet in the star's Habitable Zone. We find that NWO can accomplish its science goals even if exozodiacal dust levels are typically much higher than the Solar System zodi level. Finally, we highlight a few additional problems relating to exozodiacal dust that have yet to be solved.
Reduction of background clutter in structured lighting systems
Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.
2010-06-22
Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e., clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
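The clutter-estimation step described in this patent abstract, interpolating the background at the laser wavelength from spectral bands that exclude it and then subtracting, can be sketched as follows. This is a minimal sketch assuming linear interpolation between two flanking bands; the abstract does not specify the interpolation order, and the wavelengths and function names are illustrative.

```python
import numpy as np

def estimate_clutter(img_lo, img_hi, wl_lo, wl_hi, wl_laser):
    """Estimate background clutter at the laser wavelength by linear
    interpolation between two flanking spectral-band images."""
    t = (wl_laser - wl_lo) / (wl_hi - wl_lo)
    return (1.0 - t) * img_lo + t * img_hi

def segment_laser_light(img_laser_band, img_lo, img_hi, wl_lo, wl_hi, wl_laser):
    """Subtract the interpolated clutter from the band containing the
    laser line; negative residuals (noise) are clipped to zero."""
    clutter = estimate_clutter(img_lo, img_hi, wl_lo, wl_hi, wl_laser)
    return np.clip(img_laser_band - clutter, 0.0, None)
```

Because broadband ambient light varies smoothly with wavelength while the laser line is narrow, the interpolated estimate captures the clutter but not the structured-light signal, which survives the subtraction.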
Reduction of image-based ADI-to-AEI overlay inconsistency with improved algorithm
NASA Astrophysics Data System (ADS)
Chen, Yen-Liang; Lin, Shu-Hong; Chen, Kai-Hsiung; Ke, Chih-Ming; Gau, Tsai-Sheng
2013-04-01
In image-based overlay (IBO) measurement, the quality of various measurement spectra can be judged by quality indicators and by the ADI-to-AEI similarity to determine the optimum light spectrum. However, we found some IBO results showing an erroneous indication of wafer expansion in the difference between the ADI and AEI maps, even after the measurement spectra were optimized. To reduce this inconsistency, an improved image calculation algorithm is proposed in this paper. Different gray levels of the inner- and outer-box contours are extracted to calculate their ADI overlay errors. The symmetry of the intensity distribution at the thresholds dictated by a range of gray levels is used to determine the particular gray level that minimizes the ADI-to-AEI overlay inconsistency. After this improvement, the ADI is more similar to the AEI, with less expansion difference. The same wafer was also checked by a diffraction-based overlay (DBO) tool to verify that there is no physical wafer expansion. When there is actual wafer expansion induced by large internal stress, both the IBO and the DBO measurements indicate similar expansion results. A scanning white-light interference microscope was used to check the variation of wafer warpage between the ADI and AEI stages; it shows a trend similar to the overlay difference map, confirming the internal-stress origin.
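The gray-level selection idea, picking the level whose thresholded intensity distribution is most symmetric, might be sketched on a 1-D mark profile as follows. The production algorithm is not given in the abstract, so this is an illustrative reconstruction under the assumption that asymmetry is measured as the distance between the intensity-weighted centroid and the geometric center of the thresholded region.

```python
import numpy as np

def asymmetry(profile, level):
    """Asymmetry of a 1-D intensity profile at a gray-level threshold:
    distance between the intensity-weighted centroid and the geometric
    center of the above-threshold region (0 for a symmetric mark)."""
    idx = np.where(profile >= level)[0]
    if idx.size == 0:
        return np.inf
    center = (idx[0] + idx[-1]) / 2.0
    xs = np.arange(profile.size)
    w = np.clip(profile - level, 0.0, None)  # above-threshold weights
    centroid = (xs * w).sum() / w.sum()
    return abs(centroid - center)

def best_gray_level(profile, levels):
    """Pick the candidate gray level giving the most symmetric
    thresholded distribution (a sketch of the paper's criterion)."""
    return min(levels, key=lambda lv: asymmetry(profile, lv))
```

In practice this would be applied per contour (inner and outer box) and per direction, and the selected level is the one whose ADI overlay best matches the AEI.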
Three-dimensional wide-field pump-probe structured illumination microscopy
Kim, Yang-Hyo; So, Peter T.C.
2017-01-01
We propose a new structured illumination scheme for achieving depth-resolved wide-field pump-probe microscopy with sub-diffraction-limit resolution. By acquiring coherent pump-probe images using a set of 3D structured light illumination patterns, a 3D super-resolution pump-probe image can be reconstructed. We derive the theoretical framework to describe the coherent image formation and reconstruction scheme for this structured illumination pump-probe imaging system and carry out numerical simulations to investigate its imaging performance. The results demonstrate a lateral resolution improvement by a factor of three and provide 0.5 µm-level axial optical sectioning. PMID:28380860
Investigation of the low-level modulated light action
NASA Astrophysics Data System (ADS)
Antonov, Sergei N.; Sotnikov, V. N.; Koreneva, L. G.
1994-07-01
There is as yet no clear and complete understanding of the mechanisms and pathways by which low-level laser bioactivation works. Modulated laser light action has been investigated in two new ways: dynamic infrared thermography and computed imaging of the living brain. These methods permit real-time observation of laser action on peripheral blood flow, reflex reactions to functional probes, and thermoregulation mechanisms, as well as changes in human brain electrical activity. We have designed a universal apparatus that produces all regimes of the output laser light. It has a built-in He-Ne laser with an acousto-optic modulator and an infrared GaAs laser. The device provides spatial combination of both light beams and permitted us to irradiate an object either separately or simultaneously. This research shows that the most effective frequencies range from several to dozens of hertz. The duty factor and frequency scanning are also important. On the basis of these results, new treatment methods using modulated light are applied in Russian clinics in practical neurology, gynecology, and other fields.
Non-uniform refractive index field measurement based on light field imaging technique
NASA Astrophysics Data System (ADS)
Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong
2018-02-01
In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect four-dimensional light field data; the data are then decoded according to the light field imaging principle to obtain image sequences of the refractive index field at different acquisition angles. Subsequently, the PIV (Particle Image Velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field is calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors at multiple angles to collect data synchronously, the method proposed in this paper needs only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measures the distribution of the refractive index field above the flame of an alcohol lamp.
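The final inversion step, from measured ray offsets back to a refractive index distribution, can be sketched in 1-D under a small-angle, thin-gradient assumption. The real method operates on 2-D deflection fields extracted by PIV, so the geometry, parameter names, and unit conventions below are simplifications, not the paper's algorithm.

```python
import numpy as np

def refractive_index_from_offsets(offsets, pixel_size, path_length,
                                  focal_dist, n0=1.0):
    """Recover a 1-D refractive-index profile from ray offsets under a
    small-angle approximation: deflection angle eps = offset/focal_dist,
    eps ~= path_length * (dn/dy) / n0, so dn/dy = eps * n0 / path_length;
    cumulative integration along y then gives n(y) relative to n0."""
    eps = np.asarray(offsets) * pixel_size / focal_dist  # deflection angles (rad)
    dn_dy = eps * n0 / path_length                       # index gradient
    return n0 + np.cumsum(dn_dy) * pixel_size            # integrate the gradient
```

A constant measured offset therefore maps to a linear index ramp, which is the expected behavior for a uniform density gradient such as the boundary of a hot plume.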
Multi-band Emission Light Curves of Jupiter: Insights on Brown Dwarfs and Directly Imaged Exoplanets
NASA Astrophysics Data System (ADS)
Zhang, Xi; Ge, Huazhi; Orton, Glenn S.; Fletcher, Leigh N.; Sinclair, James; Fernandes, Joshua; Momary, Thomas W.; Kasaba, Yasumasa; Sato, Takao M.; Fujiyoshi, Takuya
2016-10-01
Many brown dwarfs exhibit significant infrared flux variability (e.g., Artigau et al. 2009, ApJ, 701, 1534; Radigan et al. 2012, ApJ, 750, 105), ranging from several to twenty percent of the brightness. Current hypotheses include temperature variations, cloud holes and patchiness, and cloud height and thickness variations (e.g., Apai et al. 2013, ApJ, 768, 121; Robinson and Marley 2014, ApJ, 785, 158; Zhang and Showman 2014, ApJ, 788, L6). Some brown dwarfs show phase shifts in the light curves among different wavelengths (e.g., Buenzli et al. 2012, ApJ, 760, L31; Yang et al. 2016, arXiv:1605.02708), indicating vertical variations of the cloud distribution. Current observational techniques can barely detect brightness changes on the surfaces of nearby brown dwarfs (Crossfield et al. 2014, Nature, 505, 654), let alone resolve the detailed weather patterns that cause the flux variability. Infrared emission maps of Jupiter might shed light on this problem. Using COMICS at the Subaru Telescope, VISIR at the Very Large Telescope (VLT), and NASA's Infrared Telescope Facility (IRTF), we obtained infrared images of Jupiter over several nights at multiple wavelengths that are sensitive to several pressure levels from the stratosphere to the deep troposphere below the ammonia clouds. Rotational maps and emission light curves were constructed. Individual pixel brightness varies by up to a hundred percent, while the variation of the full-disk brightness is around several percent. Both the shape and amplitude of the light curves are significantly distinct at different wavelengths. Variation of the light curves at different epochs and phase shifts among different wavelengths are observed. We will present principal component analysis to identify dominant emission features such as stable vortices, cloud holes and eddies in the belts and zones, and strong emission in the aurora region.
A radiative transfer model is used to simulate those features to obtain a more quantitative understanding. This work provides rich insight into the relationship between observed light curves and weather on brown dwarfs, and perhaps on directly imaged exoplanets in the future.
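The principal component analysis mentioned above, decomposing a stack of rotational brightness maps into dominant spatial patterns and per-map weights, can be sketched with a plain SVD. This is a generic PCA sketch, not the authors' pipeline; the data layout (one flattened map per row) is an assumption.

```python
import numpy as np

def principal_components(maps, n_components=2):
    """PCA on a stack of brightness maps (one flattened map per row):
    returns the leading spatial components and per-map weights."""
    X = np.asarray(maps, dtype=float)
    Xc = X - X.mean(axis=0)                      # remove the mean map
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]                    # dominant spatial patterns
    weights = U[:, :n_components] * S[:n_components]
    return comps, weights
```

Features that rotate in and out of view (vortices, cloud holes, auroral emission) then show up as fixed spatial components whose weights vary with rotational phase, which is what links the resolved maps to the disk-integrated light curves.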
Tavladoraki, Paraskevi; Kloppstech, Klaus; Argyroudi-Akoyunoglou, Joan
1989-01-01
The mRNA coding for light-harvesting complex of PSII (LHC-II) apoprotein is present in etiolated bean (Phaseolus vulgaris L.) leaves; its level is low in 5-day-old leaves, increases about 3 to 4 times in 9- to 13-day-old leaves, and decreases thereafter. A red light pulse induces an increase in LHC-II mRNA level, which is reversed by far red light, in all ages of the etiolated tissue tested. The phytochrome-controlled initial increase of LHC-II mRNA level is higher in 9- and 13-day-old than in 5- and 17-day-old bean leaves. The amount of LHC-II mRNA, accumulated in the dark after a red light pulse, oscillates rhythmically with a period of about 24 hours. This rhythm is also observed in continuous white light and in the dark following exposure to continuous white light, and persists for at least 70 hours. A second red light pulse, applied 36 hours after initiation of the rhythm, induces a phase-shift, which is prevented by far red light immediately following the second red light pulse. A persistent, but gradually reduced, far red reversibility of the red light-induced increase in LHC-II mRNA level is observed. In contrast, far red reversibility of the red light-induced clock setting is only observed when far red follows immediately the red light. It is concluded that (a) the light-induced LHC-II mRNA accumulation follows an endogenous, circadian rhythm, for the appearance of which a red light pulse is sufficient, (b) the circadian oscillator is under phytochrome control, and (c) a stable Pfr form, which exists for several hours, is responsible for sustaining LHC-II gene transcription. PMID:16666825
Simultaneous acquisition of differing image types
Demos, Stavros G
2012-10-09
A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.
Optical-thermal light-tissue interactions during photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gould, Taylor; Wang, Quanzeng; Pfefer, T. Joshua
2014-03-01
Photoacoustic imaging (PAI) has grown rapidly as a biomedical imaging technique in recent years, with key applications in cancer diagnosis and oximetry. In spite of these advances, the literature provides little insight into thermal tissue interactions involved in PAI. To elucidate these basic phenomena, we have developed, validated, and implemented a three-dimensional numerical model of tissue photothermal (PT) response to repetitive laser pulses. The model calculates energy deposition, fluence distributions, transient temperature and damage profiles in breast tissue with blood vessels and generalized perfusion. A parametric evaluation of these outputs vs. vessel diameter and depth, optical beam diameter, wavelength, and irradiance, was performed. For a constant radiant exposure level, increasing beam diameter led to a significant increase in subsurface heat generation rate. Increasing vessel diameter resulted in two competing effects - reduced mean energy deposition in the vessel due to light attenuation and greater thermal superpositioning due to reduced thermal relaxation. Maximum temperatures occurred either at the surface or in subsurface regions of the dermis, depending on vessel geometry and position. Results are discussed in terms of established exposure limits and levels used in prior studies. While additional experimental and numerical study is needed, numerical modeling represents a powerful tool for elucidating the effect of PA imaging devices on biological tissue.
Study of Colour Model for Segmenting Mycobacterium Tuberculosis in Sputum Images
NASA Astrophysics Data System (ADS)
Kurniawardhani, A.; Kurniawan, R.; Muhimmah, I.; Kusumadewi, S.
2018-03-01
One method to diagnose Tuberculosis (TB) is the sputum test, in which the presence and number of Mycobacterium tuberculosis (MTB) bacilli in sputum are identified. The presence of MTB can be seen under a light microscope. Before investigation under the microscope, the sputum samples are stained using the Ziehl-Neelsen (ZN) technique. Because there is no standard staining procedure, the appearance of sputum samples may vary in both background colour and contrast level, which increases the difficulty of the segmentation stage of automatic MTB identification. This study therefore investigated colour models to find colour channels that can segment MTB well under different staining conditions. The channels investigated are those of the RGB, HSV, CIELAB, YCbCr, and C-Y colour models, and the clustering algorithm used is k-Means. The sputum image dataset used in this study was obtained from a community health clinic in a district in Indonesia. The size of each image was set to 1600x1200 pixels, with variation in the number of MTB, background colour, and contrast level. The experimental results indicate that, in all image conditions, the blue, hue, Cr, and R-Y colour channels can be used to segment MTB well into one cluster.
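The channel-plus-clustering approach can be sketched as below: extract one colour channel (e.g. hue or Cr) and run k-Means on its pixel values. This is a minimal sketch, not the paper's pipeline; the deterministic min/max initialization and the assumption that the brighter cluster is the MTB foreground are illustrative choices.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means on pixel values of a single colour channel.
    Centers are initialized evenly between min and max (deterministic)."""
    values = np.asarray(values, dtype=float)
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip empty clusters
                centers[j] = values[labels == j].mean()
    return labels, centers

def segment_channel(channel_image, k=2):
    """Cluster one channel's pixels; the cluster with the higher mean is
    taken as the candidate MTB foreground (an illustrative assumption)."""
    flat = np.asarray(channel_image, dtype=float).ravel()
    labels, centers = kmeans_1d(flat, k=k)
    fg = int(np.argmax(centers))
    return (labels == fg).reshape(np.asarray(channel_image).shape)
```

The paper's finding is then a statement about which channels keep the stained bacilli in a single, well-separated cluster across staining and lighting variations.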
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (the summed DN value of pixels in a nighttime lights image) shows clear increasing trends at relatively large GDP growth rates but neither increases nor decreases at relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
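The "sum light" statistic used for the GDP comparison is simply the DN total over an image (or a masked region), typically computed after inter-calibrating each sensor-year. The sketch below assumes a second-order polynomial calibration model, a common choice for DMSP-OLS inter-calibration; the paper's exact model and coefficients are not given in the abstract, so `c0`, `c1`, `c2` are illustrative.

```python
import numpy as np

def intercalibrate(dn_image, c0, c1, c2):
    """Polynomial inter-calibration of raw DN values to a reference
    sensor-year: DN' = c0 + c1*DN + c2*DN^2 (illustrative model)."""
    dn = np.asarray(dn_image, dtype=float)
    return c0 + c1 * dn + c2 * dn ** 2

def sum_light(dn_image, mask=None):
    """'Sum light' of a nighttime-lights image: the summed DN value
    over all pixels, or over a boolean region mask (e.g. a country)."""
    img = np.asarray(dn_image, dtype=float)
    if mask is not None:
        img = img[mask]
    return float(img.sum())
```

Comparing the year-to-year trend of `sum_light` against GDP growth is then the indirect validation described in the abstract.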
DSouza, Alisha V.; Lin, Huiyun; Henderson, Eric R.; Samkoe, Kimberley S.; Pogue, Brian W.
2016-01-01
There is growing interest in using fluorescence imaging instruments to guide surgery, and the leading options for open-field imaging are reviewed here. While the clinical fluorescence-guided surgery (FGS) field has been focused predominantly on indocyanine green (ICG) imaging, there is accelerated development of more specific molecular tracers. These agents should help advance new indications for which FGS presents a paradigm shift in how molecular information is provided for resection decisions. There has been a steady growth in commercially marketed FGS systems, each with their own differentiated performance characteristics and specifications. A set of desirable criteria is presented to guide the evaluation of instruments, including: (i) real-time overlay of white-light and fluorescence images, (ii) operation within ambient room lighting, (iii) nanomolar-level sensitivity, (iv) quantitative capabilities, (v) simultaneous multiple fluorophore imaging, and (vi) ergonomic utility for open surgery. In this review, United States Food and Drug Administration 510(k) cleared commercial systems and some leading premarket FGS research systems were evaluated to illustrate the continual increase in this performance feature base. Generally, the systems designed for ICG-only imaging have sufficient sensitivity to ICG, but a fraction of the other desired features listed above, with both lower sensitivity and dynamic range. In comparison, the emerging research systems targeted for use with molecular agents have unique capabilities that will be essential for successful clinical imaging studies with low-concentration agents or where superior rejection of ambient light is needed. There is no perfect imaging system, but the feature differences among them are important differentiators in their utility, as outlined in the data and tables here. PMID:27533438
Development of binary image masks for TPF-C and ground-based AO coronagraphs
NASA Astrophysics Data System (ADS)
Ge, Jian; Crepp, Justin; Vanden Heuvel, Andrew; Miller, Shane; McDavitt, Dan; Kravchenko, Ivan; Kuchner, Marc
2006-06-01
We report progress on the development of precision binary notch-filter focal plane coronagraphic masks for directly imaging Earth-like planets at visible wavelengths with the Terrestrial Planet Finder Coronagraph (TPF-C), and substellar companions at near infrared wavelengths from the ground with coronagraphs coupled to high-order adaptive optics (AO) systems. Our recent theoretical studies show that 8th-order image masks (Kuchner, Crepp & Ge 2005, KCG05) are capable of achieving unlimited dynamic range in an ideal optical system, while simultaneously remaining relatively insensitive to low-spatial-frequency optical aberrations, such as tip/tilt errors, defocus, coma, astigmatism, etc. These features offer a suite of advantages for the TPF-C by relaxing many control and stability requirements, and can also provide resistance to common practical problems associated with ground-based observations; for example, telescope flexure and low-order errors left uncorrected by the AO system due to wavefront sensor-deformable mirror lag time can leak light at significant levels. Our recent lab experiments show that prototype image masks can generate contrast levels on the order of 2x10^-6 at 3 λ/D and 6x10^-7 at 10 λ/D without deformable mirror correction using monochromatic light (Crepp et al. 2006), and that this contrast is limited primarily by light scattered by imperfections in the optics and extra diffraction created by mask construction errors. These experiments also indicate that the tilt and defocus sensitivities of high-order masks follow the theoretical predictions of Shaklan and Green 2005. In this paper, we discuss these topics as well as review our progress on developing techniques for fabricating a new series of image masks that are "free-standing", as such construction designs may alleviate some of the (mostly chromatic) problems associated with masks that rely on glass substrates for mechanical support.
Finally, results obtained from our AO coronagraph simulations are provided in the last section. In particular, we find that: (i) apodized masks provide deeper contrast than hard-edge masks when the image quality exceeds 80% Strehl ratio (SR), (ii) above 90% SR, 4th-order band-limited masks provide higher off-axis throughput than Gaussian masks when generating comparable contrast levels, and (iii) below ~90% SR, hard-edge masks may be better suited for high contrast imaging, since they are less susceptible to tip/tilt alignment errors.
Schlesinger, R.; Bianchi, F.; Blumstengel, S.; Christodoulou, C.; Ovsyannikov, R.; Kobin, B.; Moudgil, K.; Barlow, S.; Hecht, S.; Marder, S.R.; Henneberger, F.; Koch, N.
2015-01-01
The fundamental limits of inorganic semiconductors for light emitting applications, such as holographic displays, biomedical imaging and ultrafast data processing and communication, might be overcome by hybridization with their organic counterparts, which feature enhanced frequency response and colour range. Innovative hybrid inorganic/organic structures exploit efficient electrical injection and high excitation density of inorganic semiconductors and subsequent energy transfer to the organic semiconductor, provided that the radiative emission yield is high. An inherent obstacle to that end is the unfavourable energy level offset at hybrid inorganic/organic structures, which rather facilitates charge transfer that quenches light emission. Here, we introduce a technologically relevant method to optimize the hybrid structure's energy levels, here comprising ZnO and a tailored ladder-type oligophenylene. The ZnO work function is substantially lowered with an organometallic donor monolayer, aligning the frontier levels of the inorganic and organic semiconductors. This increases the hybrid structure's radiative emission yield sevenfold, validating the relevance of our approach. PMID:25872919
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10⁶. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
Hu, Ying S; Zhu, Quan; Elkins, Keri; Tse, Kevin; Li, Yu; Fitzpatrick, James A J; Verma, Inder M; Cang, Hu
2013-01-01
Heterochromatin in the nucleus of human embryonic cells plays an important role in the epigenetic regulation of gene expression. The architecture of heterochromatin and its dynamic organization remain elusive because of the lack of fast, high-resolution deep-cell imaging tools. We enable this task by advancing the instrumental and algorithmic implementation of the localization-based super-resolution technique. We present light-sheet Bayesian super-resolution microscopy (LSBM). We adapt light-sheet illumination for super-resolution imaging by using a novel prism-coupled condenser design to illuminate a thin slice of the nucleus with a high signal-to-noise ratio. Coupled with a Bayesian algorithm that resolves overlapping fluorophores in high-density areas, we show, for the first time, nanoscopic features of the heterochromatin structure in both fixed and live human embryonic stem cells. The enhanced temporal resolution allows capturing the dynamic change of heterochromatin with a lateral resolution of 50-60 nm on a time scale of 2.3 s. Light-sheet Bayesian microscopy opens up broad new possibilities for probing nanometer-scale nuclear structures, real-time sub-cellular processes, and other previously difficult-to-access intracellular regions of living cells at the single-molecule and single-cell level.
Speckless head-up display on two spatial light modulators
NASA Astrophysics Data System (ADS)
Siemion, Andrzej; Ducin, Izabela; Kakarenko, Karol; Makowski, Michał; Siemion, Agnieszka; Suszek, Jarosław; Sypek, Maciej; Wojnowski, Dariusz; Jaroszewicz, Zbigniew; Kołodziejczyk, Andrzej
2010-12-01
There is a continuous demand for computer-generated holograms that give an almost perfect reconstruction at a reasonable manufacturing cost. One method of improving the image quality is to illuminate a Fourier hologram with a quasi-random, but well known, light field phase distribution. It can be achieved with a lithographically produced phase mask. To date, the implementation of the lithographic technique has been relatively complex, time-consuming and costly, which is why we have decided to use two Spatial Light Modulators (SLMs). For correctly adjusted light polarization, an SLM acts as a pure phase modulator with 256 adjustable phase levels between 0 and 2π. The two modulators give us an opportunity to use the whole surface of the device and to reduce the size of the experimental system. An optical system with one SLM can also be used, but it requires dividing the active surface into halves (one for the Fourier hologram and the other for the quasi-random diffuser), which implies a more complicated optical setup. A larger surface allows three Fourier holograms to be displayed, one for each primary colour: red, green and blue. This allows the reconstruction of almost noiseless, colourful dynamic images. In this work we present the results of numerical simulations of image reconstructions with the use of two SLM displays.
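The scheme described above can be sketched numerically: a target field is multiplied by a known quasi-random phase diffuser, the Fourier hologram is reduced to phase only, quantized to the SLM's 256 levels, and the reconstruction is recovered by an optical Fourier transform (here an FFT). The 256×256 grid, the square target, and the random seed are assumptions for illustration, not the authors' simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a bright square on a dark field (assumed pattern)
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0

# Known quasi-random phase diffuser applied to the target field
diffuser = np.exp(1j * 2 * np.pi * rng.random(target.shape))
field = np.sqrt(target) * diffuser

# Phase-only Fourier hologram, quantized to 256 phase levels in [0, 2*pi)
holo_phase = np.angle(np.fft.ifft2(field))                   # range (-pi, pi]
levels = np.round((holo_phase + np.pi) / (2 * np.pi) * 256) % 256
slm_field = np.exp(1j * (levels / 256 * 2 * np.pi - np.pi))  # SLM output

# The lens performs an optical Fourier transform, reconstructing the image
recon = np.abs(np.fft.fft2(slm_field)) ** 2
```

Because the diffuser spreads the target's energy across the whole hologram, discarding the amplitude costs relatively little, and the phase-only reconstruction concentrates intensity in the target region with residual speckle.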
Interactive effects of melatonin, exercise and diabetes on liver glycogen levels.
Bicer, Mursel; Akil, Mustafa; Avunduk, Mustafa Cihat; Kilic, Mehmet; Mogulkoc, Rasim; Baltaci, Abdulkerim Kasim
2011-01-01
This study aimed to examine the effects of melatonin supplementation on liver glycogen levels in rats with streptozotocin-induced diabetes and subjected to acute swimming exercise. Eighty Sprague-Dawley adult male rats were divided into eight groups: Group 1, general control; Group 2, melatonin-supplemented control; Group 3, melatonin-supplemented diabetes; Group 4, swimming control; Group 5, melatonin-supplemented swimming; Group 6, melatonin-supplemented diabetic swimming; Group 7, diabetic swimming; Group 8, diabetic control. Melatonin was supplemented at a dose of 3 mg/kg/day intraperitoneally for four weeks. Liver tissue samples were collected and evaluated using a Nikon Eclipse E400 light microscope. All images obtained from the light microscope were transferred to a PC and evaluated using Clemex PE 3.5 image analysis software. The lowest liver glycogen levels in the study were found in group 4. Liver glycogen levels in groups 3, 6, 7 and 8 (the diabetic groups) were higher than in group 4, but lower than those in groups 1 and 2. The highest liver glycogen levels were obtained in groups 1 and 2. The study indicates that melatonin supplementation maintains the liver glycogen levels that decrease in acute swimming exercise, while induced diabetes prevents this maintenance effect in rats.
Ultrasound-modulated optical tomography with intense acoustic bursts.
Zemp, Roger J; Kim, Chulhong; Wang, Lihong V
2007-04-01
Ultrasound-modulated optical tomography (UOT) detects ultrasonically modulated light to spatially localize multiply scattered photons in turbid media with the ultimate goal of imaging the optical properties in living subjects. A principal challenge of the technique is weak modulated signal strength. We discuss ways to push the limits of signal enhancement with intense acoustic bursts while conforming to optical and ultrasonic safety standards. A CCD-based speckle-contrast detection scheme is used to detect acoustically modulated light by measuring changes in speckle statistics between ultrasound-on and ultrasound-off states. The CCD image capture is synchronized with the ultrasound burst pulse sequence. Transient acoustic radiation force, a consequence of bursts, is seen to produce slight signal enhancement over pure ultrasonic-modulation mechanisms for bursts and CCD exposure times of the order of milliseconds. However, acoustic radiation-force-induced shear waves are launched away from the acoustic sample volume, which degrade UOT spatial resolution. By time gating the CCD camera to capture modulated light before radiation force has an opportunity to accumulate significant tissue displacement, we reduce the effects of shear-wave image degradation, while enabling very high signal-to-noise ratios. Additionally, we maintain high-resolution images representative of optical and not mechanical contrast. Signal-to-noise levels are sufficiently high so as to enable acquisition of 2D images of phantoms with one acoustic burst per pixel.
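The speckle-contrast detection scheme described above can be sketched in a few lines: contrast C = σ/μ is computed for ultrasound-on and ultrasound-off CCD frames, and their difference is the UOT signal. The simulation below is an assumption for illustration only: fully developed speckle is modeled as exponentially distributed intensity (C ≈ 1), and ultrasonic modulation during the exposure is mimicked by averaging several independent speckle realizations, which lowers the contrast.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_contrast(frame):
    """Speckle contrast C = sigma / mean over a CCD frame."""
    return frame.std() / frame.mean()

# Fully developed speckle: exponentially distributed intensity, so C ~ 1.
us_off = rng.exponential(1.0, size=(512, 512))

# Modulation blurs the speckle during the exposure; modeled here (an
# illustrative assumption, not the authors' physical model) as the mean
# of 8 independent speckle realizations, giving C ~ 1/sqrt(8).
us_on = rng.exponential(1.0, size=(8, 512, 512)).mean(axis=0)

signal = speckle_contrast(us_off) - speckle_contrast(us_on)
```

Synchronizing the CCD exposure with the acoustic burst, as in the paper, amounts to choosing which frames enter the "on" and "off" statistics.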
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in the different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
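Of the descriptors named above, the plain LBP step can be sketched in a few lines of numpy. This is a minimal 8-neighbour variant under assumed conventions (neighbours ≥ centre set a bit); the paper's LGBP additionally applies Gabor filtering first, and the NSCT decomposition and score-level fusion are not reproduced here.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (LBP) codes.

    Each interior pixel is compared with its 8 neighbours; a neighbour that
    is >= the centre contributes one bit to an 8-bit code.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: a texture descriptor."""
    codes = lbp_image(img)
    return np.bincount(codes.ravel(), minlength=256) / codes.size
```

Histograms computed per sub-band and per image region would then feed the classifier, with per-modality scores combined at the score level.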
Lighting issues in the 1980's. Summary and proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, A. I.
1980-01-01
The Lighting Roundtable described in this report was conducted to foster an open discussion of the goals, issues, and responsibilities of the lighting community. It was not a problem-solving session, but rather a time to examine the long-term aspirations and objectives of lighting and the barriers that may stand in the way of achieving them. Eight major issues were addressed by nine panelists and a number of invited auditors. The issues are as follows: (1) The Public Image of the Lighting Community; (2) US Role in the Worldwide Lighting Community; (3) Factors Affecting Human Activities in the Built Environment; (4) Effect of Lighting on Environmental Quality; (5) Effects of Barriers; (6) Establishment of Illuminance Levels; (7) Integration of Subsystems; and (8) Professional Development and Lighting Education. Two parts presented are: (1) a summary of the proceedings; and (2) a complete transcript.
Fukushima, S.; Furukawa, T.; Niioka, H.; Ichimiya, M.; Sannomiya, T.; Tanaka, N.; Onoshima, D.; Yukawa, H.; Baba, Y.; Ashida, M.; Miyake, J.; Araki, T.; Hashimoto, M.
2016-01-01
This paper presents a new correlative bioimaging technique using Y2O3:Tm, Yb and Y2O3:Er, Yb nanophosphors (NPs) as imaging probes that emit luminescence under excitation by both near-infrared (NIR) light and an electron beam. Under 980 nm NIR light irradiation, the Y2O3:Tm, Yb and Y2O3:Er, Yb NPs emitted NIR luminescence (NIRL) around 810 nm and 1530 nm, respectively, and under excitation by accelerated electrons they emitted cathodoluminescence (CL) at 455 nm and 660 nm, respectively. The multimodality of the NPs was confirmed in correlative NIRL/CL imaging, and their locations were visualized in the same observation area in both NIRL and CL images. Using CL microscopy, the NPs were visualized at the single-particle level and in multiple colours. Multiscale NIRL/CL bioimaging was demonstrated through in vivo and in vitro NIRL deep-tissue observations, cellular NIRL imaging, and high-spatial-resolution CL imaging of the NPs inside cells. The location of a cell sheet transplanted onto the back muscle fascia of a hairy rat was visualized through NIRL imaging of the Y2O3:Er, Yb NPs. Accurate positions of cells through the thickness (1.5 mm) of a tissue phantom were detected by NIRL from the Y2O3:Tm, Yb NPs. Further, locations of the two types of NPs inside cells were observed using CL microscopy. PMID:27185264
Alaskan Auroral All-Sky Images on the World Wide Web
NASA Technical Reports Server (NTRS)
Stenbaek-Nielsen, H. C.
1997-01-01
In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-Sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low-light-level all-sky TV camera from damage due to excessive light, we only operate during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.
NASA Astrophysics Data System (ADS)
Kredzinski, Lukasz; Connelly, Michael J.
2011-06-01
Optical Coherence Tomography (OCT) is a promising non-invasive imaging technology capable of producing 3D high-resolution cross-sectional images of the internal microstructure of the examined material. However, almost all such systems are expensive, requiring complex optical setups, costly light sources and complicated scanning of the sample under test. In addition, most of these systems have not taken advantage of the competitively priced optical components available at wavelengths within the main optical communications band located in the 1550 nm region. A comparatively simple and inexpensive full-field OCT system (FF-OCT), based on a superluminescent diode (SLD) light source and an anti-Stokes imaging device, was constructed to perform 3D cross-sectional imaging. This kind of inexpensive setup with moderate resolution could be readily applicable in low-level biomedical and industrial diagnostics. This paper covers calibration of the system and determines its suitability for imaging the structures of biological tissues such as teeth, which have low absorption at 1550 nm.
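For context, the axial resolution of an OCT system with a Gaussian-spectrum source follows the standard coherence-length relation Δz = (2 ln 2/π)·λ₀²/(n·Δλ). The 1550 nm centre wavelength below matches the paper's band, but the 50 nm bandwidth is an assumed example value, not the SLD's actual specification.

```python
import math

def oct_axial_resolution_nm(lambda0_nm, bandwidth_nm, n=1.0):
    """Axial resolution (round-trip coherence length) for a Gaussian-spectrum
    source: delta_z = (2 ln 2 / pi) * lambda0^2 / (n * delta_lambda)."""
    return (2.0 * math.log(2.0) / math.pi) * lambda0_nm ** 2 / (n * bandwidth_nm)

# 1550 nm centre wavelength (as in the paper) with an assumed 50 nm bandwidth:
dz = oct_axial_resolution_nm(1550.0, 50.0)   # about 21 micrometres in air
```

The λ₀² dependence shows the trade-off of moving to the telecom band: for the same source bandwidth, 1550 nm gives roughly four times coarser axial resolution than 800 nm, consistent with the "moderate resolution" characterization above.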
Qi, Chang; Changlin, Huang
2007-07-01
To examine the association between levels of cartilage oligomeric matrix protein (COMP), matrix metalloproteinase-1 (MMP-1), matrix metalloproteinase-3 (MMP-3), and tissue inhibitor of matrix metalloproteinases-1 (TIMP-1) in serum and synovial fluid and MR imaging of cartilage degeneration in the knee joint, and to understand the effects of movement training of different intensities on the cartilage of the knee joint. Twenty adult canines were randomly divided into three groups (8 in the light training group; 8 in the intensive training group; 4 in the control group), and canines of the two training groups were trained daily at different intensities. The training lasted for 10 weeks in all. Magnetic resonance imaging (MRI) examinations were performed regularly (at 2, 4, 6, 8, and 10 weeks) to investigate the changes in articular cartilage in the canine knee, while concentrations of COMP, MMP-1, MMP-3, and TIMP-1 in serum and synovial fluid were measured by ELISA assays. We found imaging changes of cartilage degeneration in both training groups by MRI examination during the training period, compared with the control group. However, there was no significant difference between the two training groups. Elevations of the levels of COMP, MMP-1, MMP-3, TIMP-1, and MMP-3/TIMP-1 were seen in serum and synovial fluid after training, and their levels had an obvious association with the knee MRI grades of cartilage lesions. Furthermore, there were statistically significant associations between biomarker levels in serum and in synovial fluid. Long-duration, high-intensity movement training induces cartilage degeneration in the knee joint. Within the intensity range applied in this study, knee cartilage degeneration caused by light or intensive training shows no difference on MR imaging, but a comparatively obvious difference in biomarker levels.
To detect articular cartilage degeneration at an early stage and monitor the pathological process, the combined application of several biomarkers has very good practical value and can serve as a helpful supplement to MRI.
Saito, Kenta; Arai, Yoshiyuki; Zhang, Jize; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu
2011-01-01
Laser-scanning confocal microscopy has been employed for exploring structures at the subcellular, cellular and tissue levels in three dimensions. To acquire the confocal image, a coherent light source, such as a laser, is generally required in conventional single-point scanning microscopy. The illuminating beam must be focused onto a small spot of diffraction-limited size, and this determines the spatial resolution of the microscopy system. In contrast, multipoint scanning confocal microscopy using a Nipkow disk enables the use of an incoherent light source. We previously demonstrated successful application of a 100 W mercury arc lamp as a light source for the Yokogawa confocal scanner unit, in which a microlens array was coupled with a Nipkow disk to focus the collimated incident light onto a pinhole (Saito et al., Cell Struct. Funct., 33: 133-141, 2008). However, the transmission efficiency of incident light through the pinhole array was low because off-axis light, the major component of the incident light, was blocked by the non-aperture area of the disk. To improve transmission efficiency, we propose an optical system in which off-axis light is transmitted through pinholes surrounding the pinhole located on the optical axis of the collimator lens. This optical system facilitates the use of not only the on-axis but also the off-axis light, so that the usable incident light is considerably increased. As a result, the proposed system can be applied to high-speed confocal and multicolor imaging, both with a satisfactory signal-to-noise ratio.
2017-12-08
NASA image acquired September 24, 2012 City lights at night are a fairly reliable indicator of where people live. But this isn’t always the case, and the Korean Peninsula shows why. As of July 2012, South Korea’s population was estimated at roughly 49 million people, and North Korea’s population was estimated at about half that number. But where South Korea is gleaming with city lights, North Korea has hardly any lights at all—just a faint glimmer around Pyongyang. On September 24, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this nighttime view of the Korean Peninsula. This imagery is from the VIIRS “day-night band,” which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as gas flares, auroras, wildfires, city lights, and reflected moonlight. The wide-area image shows the Korean Peninsula, parts of China and Japan, the Yellow Sea, and the Sea of Japan. The white inset box encloses an area showing ship lights in the Yellow Sea. Many of the ships form a line, as if assembling along a watery border. Following the 1953 armistice ending the Korean War, per-capita income in South Korea rose to about 17 times the per-capita income level of North Korea, according to the U.S. Central Intelligence Agency. Worldwide, South Korea ranks 12th in electricity production, and 10th in electricity consumption, per 2011 estimates. North Korea ranks 71st in electricity production, and 73rd in electricity consumption, per 2009 estimates. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Michon Scott.
Instrument: Suomi NPP - VIIRS. Credit: NASA Earth Observatory. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.
2017-12-08
In April 2012, waves in Earth’s “airglow” spread across the nighttime skies of northern Texas like ripples in a pond. In this case, the waves were provoked by a massive thunderstorm. Airglow is a layer of nighttime light emissions caused by chemical reactions high in Earth’s atmosphere. A variety of reactions involving oxygen, sodium, ozone and nitrogen result in the production of a very faint amount of light. In fact, it’s approximately one billion times fainter than sunlight (~10⁻¹¹ to 10⁻⁹ W·cm⁻²·sr⁻¹). This chemiluminescence is similar to the chemical reactions that light up a glow stick or glow-in-the-dark silly putty. The “day-night band” of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured these glowing ripples in the night sky on April 15, 2012 (top image). The day-night band detects light over a range of wavelengths from green to near-infrared and uses highly sensitive electronics to observe low light signals. (The absolute minimum signals detectable are at the levels of nightglow emission.) The lower image shows the thunderstorm as observed by a thermal infrared band on VIIRS. This thermal band, which is sensitive only to heat emissions (cold clouds appear white), is not sensitive to the subtle visible-light wave structures seen by the day-night band. Technically speaking, airglow occurs at all times. During the day it is called “dayglow,” at twilight “twilightglow,” and at night “nightglow.” There are slightly different processes taking place in each case, but in the image above the source of light is nightglow. The strongest nightglow emissions are mostly constrained to a relatively thin layer of atmosphere between 85 and 95 kilometers (53 and 60 miles) above the Earth’s surface. Little emission occurs below this layer since there’s a higher concentration of molecules, allowing for dissipation of chemical energy via collisions rather than light production.
Likewise, little emission occurs above that layer because the atmospheric density is so tenuous that there are too few light-emitting reactions to yield an appreciable amount of light. Suomi NPP orbits Earth at 834 kilometers (about 518 miles), well above the nightglow layer. The day-night band imagery therefore contains signals from the direct upward emission of the nightglow layer and from the reflection of the downward nightglow emissions by clouds and the Earth’s surface. The presence of these nightglow waves is a graphic visualization of the usually unseen energy-transfer processes that occur continuously between the lower and upper atmosphere. While nightglow is a well-known phenomenon, it’s not typically considered by Earth-viewing meteorological sensors. In fact, scientists were surprised at Suomi NPP’s ability to detect it. During the satellite’s check-out procedure, this unanticipated source of visible light was thought to indicate a problem with the sensor until scientists realized that what they were seeing was the faintest of light in the darkness of night. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Aries Keck and Steve Miller.
Curriculum in biomedical optics and laser-tissue interactions
NASA Astrophysics Data System (ADS)
Jacques, Steven L.
2003-10-01
A graduate student level curriculum has been developed for teaching the basic principles of how lasers and light interact with biological tissues and materials. The field of Photomedicine can be divided into two topic areas: (1) where tissue affects photons, used for diagnostic sensing, imaging, and spectroscopy of tissues and biomaterials, and (2) where photons affect tissue, used for surgical and therapeutic cutting, dissecting, machining, processing, coagulating, welding, and oxidizing tissues and biomaterials. The courses teach basic principles of tissue optical properties and light transport in tissues, and interaction of lasers and conventional light sources with tissues via photochemical, photothermal and photomechanical mechanisms.
Multi-channel infrared thermometer
Ulrickson, Michael A.
1986-01-01
A device for measuring the two-dimensional temperature profile of a surface comprises imaging optics for generating an image of the light radiating from the surface; an infrared detector array having a plurality of detectors; and a light pipe array positioned between the imaging optics and the detector array for sampling, transmitting, and distributing the image over the detector surfaces. The light pipe array includes one light pipe for each detector in the detector array.
Diffractive micro-optical element with nonpoint response
NASA Astrophysics Data System (ADS)
Soifer, Victor A.; Golub, Michael A.
1993-01-01
Common diffractive lenses have microrelief zones in the form of simple rings that provide only optical power but do not contain any image information. They have a point-image response under point-source illumination. A more complicated non-point response is needed to focus a light beam into various light marks and letter-type images, as well as for optical pattern recognition. The current presentation describes computer generation of diffractive micro-optical elements with complicated curvilinear zones of a regular piecewise-smooth structure and grey-level or staircase phase microrelief. The manufacture of non-point-response elements uses the steps of phase-transfer calculation and orthogonal-scan mask generation or lithographic glass etching. The ray-tracing method is shown to be applicable to this task. Several working samples of focusing optical elements generated by computer and photolithography are presented. Using the experimental results, we discuss applications such as laser branding.
Oxygen Nanobubble Tracking by Light Scattering in Single Cells and Tissues.
Bhandari, Pushpak; Wang, Xiaolei; Irudayaraj, Joseph
2017-03-28
Oxygen nanobubbles (ONBs) have significant potential for targeted imaging and treatment in cancer diagnosis and therapy. Precise localization and tracking of single ONBs in single cells is demonstrated using a hyperspectral dark-field microscope (HSDFM). ONBs are promising contrast-generating imaging agents owing to the strong light scattering that arises from the refractive-index discontinuity at their interface. With this powerful platform, we have revealed the trajectories and quantities of ONBs in cells and demonstrated the relation between their size and diffusion coefficient. We have also evaluated the presence of ONBs in the nucleus with increasing incubation time and have quantified the uptake in single cells in ex vivo tumor tissues. Our results demonstrate that HSDFM can be a versatile platform to detect and measure cellulosic nanoparticles at the single-cell level and to assess the dynamics and trajectories of this delivery system.
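The size–diffusion relation mentioned above rests on the standard 2D Brownian-motion estimator: the one-step mean squared displacement satisfies MSD(Δt) = 4DΔt, and the Stokes–Einstein relation then links D to particle diameter. The sketch below simulates a trajectory with a known D and recovers it; the numerical values are toy assumptions, not the paper's measured trajectories.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_D(track, dt):
    """Estimate the 2D diffusion coefficient from a trajectory via the
    one-step mean squared displacement: MSD(dt) = 4 * D * dt."""
    steps = np.diff(track, axis=0)
    msd = (steps ** 2).sum(axis=1).mean()
    return msd / (4 * dt)

# Simulate a 2D Brownian track with a known diffusion coefficient
# (toy values; real input would be HSDFM particle trajectories).
D_true = 0.5    # um^2 / s
dt = 0.05       # s per frame
n_steps = 20000
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
track = np.cumsum(steps, axis=0)

D_hat = estimate_D(track, dt)
```

A smaller estimated D for a given medium implies a larger hydrodynamic diameter, which is how trajectory statistics translate into the size information discussed in the abstract.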
Image intensifier gain uniformity improvements in sealed tubes by selective scrubbing
Thomas, S.W.
1995-04-18
The gain uniformity of sealed microchannel plate image intensifiers (MCPIs) is improved by selectively scrubbing the high gain sections with a controlled bright light source. Using the premise that ions returning to the cathode from the microchannel plate (MCP) damage the cathode and reduce its sensitivity, a HeNe laser beam light source is raster scanned across the cathode of a microchannel plate image intensifier (MCPI) tube. Cathode current is monitored and when it exceeds a preset threshold, the sweep rate is decreased 1000 times, giving 1000 times the exposure to cathode areas with sensitivity greater than the threshold. The threshold is set at the cathode current corresponding to the lowest sensitivity in the active cathode area so that sensitivity of the entire cathode is reduced to this level. This process reduces tube gain by between 10% and 30% in the high gain areas while gain reduction in low gain areas is negligible. 4 figs.
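The dwell-time logic of the scrub described above can be sketched as follows. The sensitivity map, the linear current model, and the grid size are assumptions for illustration; the point is only the control rule: the sweep slows 1000× wherever the monitored cathode current exceeds a threshold set at the lowest sensitivity in the active area, so only high-gain regions accumulate significant exposure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed relative cathode sensitivity map (64 x 64 scan positions)
sensitivity = 0.7 + 0.3 * rng.random((64, 64))

# Threshold set at the cathode current of the lowest-sensitivity point
threshold = sensitivity.min()

# Where the monitored current exceeds the threshold, the sweep rate drops
# 1000x, so the relative dwell time (and exposure) rises 1000x there.
slow_factor = 1000
dwell = np.where(sensitivity > threshold, slow_factor, 1)
exposure = dwell * sensitivity   # exposure ~ dwell time x local current
```

Under this rule every point above the minimum-sensitivity level receives the extended exposure, driving the whole cathode toward that minimum and flattening the gain map.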
Image intensifier gain uniformity improvements in sealed tubes by selective scrubbing
Thomas, Stanley W.
1995-01-01
The gain uniformity of sealed microchannel plate image intensifiers (MCPIs) is improved by selectively scrubbing the high gain sections with a controlled bright light source. Using the premise that ions returning to the cathode from the microchannel plate (MCP) damage the cathode and reduce its sensitivity, a HeNe laser beam light source is raster scanned across the cathode of a microchannel plate image intensifier (MCPI) tube. Cathode current is monitored and when it exceeds a preset threshold, the sweep rate is decreased 1000 times, giving 1000 times the exposure to cathode areas with sensitivity greater than the threshold. The threshold is set at the cathode current corresponding to the lowest sensitivity in the active cathode area so that sensitivity of the entire cathode is reduced to this level. This process reduces tube gain by between 10% and 30% in the high gain areas while gain reduction in low gain areas is negligible.
2017-12-08
Caption: NASA's Solar Dynamics Observatory (SDO) captured this image of an M5.7 class flare on May 3, 2013 at 1:30 p.m. EDT. This image shows light in the 131 Angstrom wavelength, a wavelength of light that can show material at the very hot temperatures of a solar flare and that is typically colorized in teal. Credit: NASA/Goddard/SDO --- The sun emitted a mid-level solar flare, peaking at 1:32 p.m. EDT on May 3, 2013. Solar flares are powerful bursts of radiation. Harmful radiation from a flare cannot pass through Earth's atmosphere to physically affect humans on the ground; when intense enough, however, flares can disturb the atmosphere in the layer where GPS and communications signals travel. This disrupts the radio signals for as long as the flare is ongoing, and the radio blackout for this flare has already subsided. This flare is classified as an M5.7 class flare. M-class flares are the weakest flares that can still cause some space weather effects near Earth. Increased numbers of flares are quite common at the moment, since the sun's normal 11-year activity cycle is ramping up toward solar maximum, which is expected in late 2013. Updates will be provided as they are available on the flare and whether there was an associated coronal mass ejection (CME), another solar phenomenon that can send solar particles into space and affect electronic systems in satellites and on Earth.
Method and apparatus for synthesis of arrays of DNA probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerrina, Francesco; Sussman, Michael R.; Blattner, Frederick R.
The synthesis of arrays of DNA probe sequences, polypeptides, and the like is carried out using a patterning process on an active surface of a substrate. An image is projected onto the active surface of the substrate utilizing an image former that includes a light source that provides light to a micromirror device comprising an array of electronically addressable micromirrors, each of which can be selectively tilted between one of at least two positions. Projection optics receives the light reflected from the micromirrors along an optical axis and precisely images the micromirrors onto the active surface of the substrate, which may be used to activate the surface of the substrate. The first level of bases may then be applied to the substrate, followed by development steps, and subsequent exposure of the substrate utilizing a different pattern of micromirrors, with further repeats until the elements of a two dimensional array on the substrate surface have an appropriate base bound thereto. The micromirror array can be controlled in conjunction with a DNA synthesizer supplying appropriate reagents to a flow cell containing the active substrate to control the sequencing of images presented by the micromirror array in coordination with the reagents provided to the substrate.
Image processing of underwater multispectral imagery
Zawada, D. G.
2003-01-01
Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
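The two-step cell-extraction procedure described above (edge detection followed by binary morphology) can be illustrated with a rough sketch. The threshold, structuring element, and synthetic test frame below are invented for the example and are not taken from LUMIS.

```python
import numpy as np

def binary_dilate(mask):
    """3x3 dilation: set a pixel if any pixel in its neighbourhood is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_erode(mask):
    """3x3 erosion: keep a pixel only if its whole neighbourhood is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def extract_cells(img, edge_thresh=0.1):
    # Step 1: edge detection via gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh
    # Step 2: binary morphology -- thicken and close the outlines,
    # then open to discard isolated speckle.
    mask = binary_dilate(edges)
    mask = binary_erode(binary_dilate(mask))   # closing
    mask = binary_dilate(binary_erode(mask))   # opening
    return mask

# Synthetic frame: one bright "cell" on a weak noisy background field.
rng = np.random.default_rng(0)
frame = 0.05 * rng.random((64, 64))
frame[20:30, 20:30] += 1.0
cells = extract_cells(frame)
```

The cell outline survives the morphology passes while the weak background, whose gradients stay below the threshold, contributes no detections.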
Face recognition in the thermal infrared domain
NASA Astrophysics Data System (ADS)
Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.
2017-10-01
Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most research on face recognition is based on visible light, and state-of-the-art systems operating in the visible spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared imagery, covering the mid-wavelength and far-wavelength infrared bands, is a promising alternative or complement to visible-range imaging because of its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition; the characteristics of thermal images maintain advantages over visible-light images and can improve human face recognition algorithms in several respects. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.
Progress on MCT SWIR modules for passive and active imaging applications
NASA Astrophysics Data System (ADS)
Breiter, R.; Benecke, M.; Eich, D.; Figgemeier, H.; Weber, A.; Wendler, J.; Sieck, A.
2017-02-01
For SWIR imaging applications, specific detector designs for either low-light-level imaging or laser-illuminated active imaging are under development, based on AIM's state-of-the-art MCT IR technology. For imaging under low-light conditions, a low-noise 640x512, 15 μm pitch ROIC with CTIA input stages and correlated double sampling was designed. The ROIC provides rolling shutter and snapshot integration. To reduce size, weight, power and cost (SWaP-C), a 640x512 format detector with a 10 μm pitch has been realized. While LPE-grown MCT FPAs with extended 2.5 μm cut-off have been fabricated and integrated, MBE-grown MCT on GaAs is also considered for future production. The module makes use of the extended SWIR (eSWIR) spectral cut-off up to 2.5 μm to combine emissive and reflective imaging by already detecting thermal radiation in the eSWIR band. A demonstrator imager was built to allow field testing of this concept; a resulting product will be a small, compact clip-on weapon sight. For active imaging, a detector module was designed to provide gating capability. SWIR MCT avalanche photodiodes have been implemented and characterized at the FPA level in a 640x512, 15 μm pitch format. The specific ROIC also provides the necessary functions for range-gate control and triggering by the laser illumination. The FPAs are integrated in a compact dewar cooler configuration using AIM's split linear cooler. A command and control electronics (CCE) unit provides supply voltages, biasing, clocks, control and video digitization for easy system interfacing. First lab and field tests of a gated-viewing demonstrator have been carried out and the module has been further improved.
A hyperspectral image projector for hyperspectral imagers
NASA Astrophysics Data System (ADS)
Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.
2007-04-01
We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just an arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, so that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument, calibration, and analysis algorithm, performs in unmixing (i.e. deconvolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
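The unmixing task such a projector is meant to validate reduces, in the linear mixing model, to inverting a weighted sum of endmember spectra. A minimal sketch with made-up endmember spectra over six bands:

```python
import numpy as np

# Hypothetical endmember spectra (rows) sampled at 6 spectral bands.
endmembers = np.array([
    [0.9, 0.7, 0.3, 0.1, 0.1, 0.2],   # "vegetation"
    [0.1, 0.2, 0.2, 0.3, 0.7, 0.9],   # "water"
    [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],   # "mineral"
])

def mix(abundances):
    """Forward model: a pixel's spectrum is a weighted sum of endmembers."""
    return abundances @ endmembers

def unmix(pixel_spectrum):
    """Least-squares inversion of the linear mixing model."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixel_spectrum, rcond=None)
    return coeffs

true_abundances = np.array([0.6, 0.3, 0.1])
measured = mix(true_abundances)       # what the imager would record
estimated = unmix(measured)           # what the analysis algorithm recovers
```

With a projector whose per-pixel spectral radiance is known, the recovered abundances can be compared directly against the programmed mixture.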
Akkoç, Betül; Arslan, Ahmet; Kök, Hatice
2016-06-01
Gender is one of the intrinsic properties of identity; determining it narrows the search cluster and so improves performance when an identification search is performed. Teeth have a durable and resistant structure, and as such are important sources of identification in disasters (accidents, fires, etc.). In this study, gender determination is accomplished from maxillary tooth plaster models of 40 people (20 males and 20 females). Images of the tooth plaster models are taken with a purpose-built lighting set-up. A gray-level co-occurrence matrix of the segmented image is formed, pertinent features are extracted from the matrix, and the features are classified via a Random Forest (RF) algorithm. Automatic gender determination achieves a 90% success rate, yielding an applicable system for determining gender from maxillary tooth plaster images.
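A minimal sketch of the feature-extraction stage described above: a gray-level co-occurrence matrix (GLCM) and three common GLCM properties, computed with NumPy on toy images. The quantization, offset, and feature set are illustrative; in the study these features feed a Random Forest classifier.

```python
import numpy as np

def glcm(q, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix of a pre-quantized
    integer image, for a single non-negative pixel offset (dy, dx)."""
    dy, dx = offset
    h, w = q.shape
    src = q[:h - dy, :w - dx]          # reference pixels
    dst = q[dy:, dx:]                  # neighbours at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m / m.sum()

def glcm_features(p):
    """Three classic GLCM properties: contrast, homogeneity, energy."""
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])

rng = np.random.default_rng(1)
smooth = np.tile(np.arange(8), (8, 1))      # horizontal ramp, 8 gray levels
noisy = rng.integers(0, 8, size=(8, 8))     # uncorrelated texture
f_smooth = glcm_features(glcm(smooth, 8))
f_noisy = glcm_features(glcm(noisy, 8))
```

Smooth texture yields low contrast (neighbouring levels differ by one), while uncorrelated texture yields much higher contrast, which is what makes such features discriminative.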
Computer simulation of reconstructed image for computer-generated holograms
NASA Astrophysics Data System (ADS)
Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
This report presents the results of computer simulation of images for image-type Computer-Generated Holograms (CGHs) observable under white light and fabricated with an electron-beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of diffracted light traveling toward the viewing point from the CGH. The wavelength and intensity of the diffracted light are calculated from an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can easily be obtained with this simulation method. The simulated image from the interference fringe data was compared with the reconstructed image of a real CGH fabricated with an Electron Beam (EB) lithography system. The simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring and shade. In addition, two simulation variants, a several-light-sources method and a smoothing method, reproduced the changes in chroma saturation and blur that the shape of the light source induces in the reconstructed image. Finally, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated; the simulated images of these CGHs also closely resembled the reconstructed images of the real CGHs.
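The core computation, recovering the direction of diffracted light from an FFT of fringe data, can be illustrated on a one-dimensional cosine fringe. The sampling pitch, fringe frequency, and reconstruction wavelength below are assumed values for the sketch, not the paper's.

```python
import numpy as np

n = 256
pitch_um = 0.5                      # assumed fringe-sampling pitch (um)
x = np.arange(n) * pitch_um
f0 = 0.25                           # fringe spatial frequency (cycles/um)
fringe = 1.0 + np.cos(2 * np.pi * f0 * x)   # two-beam interference pattern

# FFT of the fringe reveals the diffracted-order spatial frequency.
spectrum = np.abs(np.fft.rfft(fringe - fringe.mean()))
freqs = np.fft.rfftfreq(n, d=pitch_um)
f_peak = freqs[np.argmax(spectrum)]

# Grating equation: sin(theta) = wavelength * spatial frequency,
# giving the direction of light travelling toward the viewpoint.
wavelength_um = 0.532               # assumed green reconstruction light
sin_theta = wavelength_um * f_peak
```

Repeating this per wavelength and per fringe region gives the intensity and direction data from which a viewpoint-dependent parallax image can be assembled.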
NASA Astrophysics Data System (ADS)
Wong, Terence T. W.; Zhang, Ruiying; Hsu, Hsun-Chia; Maslov, Konstantin I.; Shi, Junhui; Chen, Ruimin; Shung, K. Kirk; Zhou, Qifa; Wang, Lihong V.
2018-02-01
In biomedical imaging, all optical techniques face a fundamental trade-off between spatial resolution and tissue penetration. Obtaining an organelle-level resolution image of a whole organ has therefore remained a challenging and yet appealing scientific pursuit. Over the past decade, optical microscopy assisted by mechanical sectioning or chemical clearing of tissue has been demonstrated as a powerful technique to overcome this dilemma, one of particular use in imaging the neural network. However, this type of technique needs lengthy special preparation of the tissue specimen, which hinders broad application in the life sciences. Here, we propose a new label-free three-dimensional imaging technique, named microtomy-assisted photoacoustic microscopy (mPAM), for potentially imaging all biomolecules with 100% endogenous natural staining in whole organs with high fidelity. We demonstrate the first label-free mPAM, using UV light for label-free histology-like imaging, in whole organs (e.g., mouse brains), most of them formalin-fixed and paraffin- or agarose-embedded for minimal morphological deformation. Furthermore, mPAM with dual-wavelength illumination is also employed to image a mouse brain slice, demonstrating the potential for imaging multiple biomolecules without staining. With visible-light illumination, mPAM also shows its deep-tissue imaging capability, which enables less slicing and hence reduces sectioning artifacts. mPAM could potentially provide new insight for understanding complex biological organs.
Omucheni, Dickson L; Kaduki, Kenneth A; Bulimo, Wallace D; Angeyo, Hudson K
2014-12-11
Multispectral imaging microscopy is a novel microscopic technique that integrates spectroscopy with optical imaging to record both spectral and spatial information of a specimen. This enables acquisition of a larger and more informative dataset than is achievable in conventional optical microscopy. However, such data are characterized by high signal correlation and are difficult to interpret using univariate data analysis techniques. In this work, the development and application of a novel method that uses principal component analysis (PCA) in the processing of spectral images obtained from a simple multispectral-multimodal imaging microscope to detect Plasmodium parasites in unstained thin blood smears for malaria diagnostics is reported. The optical microscope used in this work was modified by replacing the broadband light source (a tungsten halogen lamp) with a set of light-emitting diodes (LEDs) emitting thirteen different wavelengths of monochromatic light in the UV-vis-NIR range. The LEDs are activated sequentially to illuminate the same spot of the unstained thin blood smears on glass slides, and grey-level images are recorded at each wavelength. PCA was used to perform data dimensionality reduction and to enhance score images for visualization as well as for feature extraction through clusters in score space. Using this approach, haemozoin was uniquely distinguished from haemoglobin in unstained thin blood smears on glass slides, and the 590-700 nm spectral range was identified as an important band for optical imaging of haemozoin as a biomarker for malaria diagnosis. This work is of great significance in eliminating the time spent staining malaria specimens and thus drastically reducing diagnosis time. The approach has the potential to replace a trained human eye with a trained computerized vision system for malaria parasite blood screening.
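The dimensionality-reduction step can be sketched as PCA (via SVD) on a multispectral cube, with the per-band images flattened into a pixels-by-bands matrix. The 13-band synthetic stack and its two spectral signatures below are invented for the example.

```python
import numpy as np

def pca_scores(cube, n_components=3):
    """PCA of a multispectral cube (bands, H, W): returns the leading
    component score images and the singular values of the centred data."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).T           # pixels x bands
    X = X - X.mean(axis=0)                  # mean-centre each band
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T        # project pixels onto leading PCs
    return scores.T.reshape(n_components, h, w), S

rng = np.random.default_rng(2)
cube = rng.normal(0.0, 0.01, size=(13, 32, 32))                # sensor noise
cube[:, 10:14, 10:14] += np.linspace(1, 0, 13)[:, None, None]  # "target" spectrum
cube[:, 20:28, 5:25] += np.linspace(0, 1, 13)[:, None, None]   # background spectrum
scores, S = pca_scores(cube)
```

The first few score images capture nearly all of the structured variance, which is why clusters in score space can separate pigment signatures from background.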
Correlative Light- and Electron Microscopy Using Quantum Dot Nanoparticles.
Killingsworth, Murray C; Bobryshev, Yuri V
2016-08-07
A method is described whereby quantum dot (QD) nanoparticles can be used for correlative immunocytochemical studies of human pathology tissue using widefield fluorescence light microscopy and transmission electron microscopy (TEM). To demonstrate the protocol we have immunolabeled ultrathin epoxy sections of human somatostatinoma tumor using a primary antibody to somatostatin, followed by a biotinylated secondary antibody and visualization with streptavidin conjugated 585 nm cadmium-selenium (CdSe) quantum dots (QDs). The sections are mounted on a TEM specimen grid then placed on a glass slide for observation by widefield fluorescence light microscopy. Light microscopy reveals 585 nm QD labeling as bright orange fluorescence forming a granular pattern within the tumor cell cytoplasm. At low to mid-range magnification by light microscopy the labeling pattern can be easily recognized and the level of non-specific or background labeling assessed. This is a critical step for subsequent interpretation of the immunolabeling pattern by TEM and evaluation of the morphological context. The same section is then blotted dry and viewed by TEM. QD probes are seen to be attached to amorphous material contained in individual secretory granules. Images are acquired from the same region of interest (ROI) seen by light microscopy for correlative analysis. Corresponding images from each modality may then be blended to overlay fluorescence data on TEM ultrastructure of the corresponding region.
Heat generation and light scattering of green fluorescent protein-like pigments in coral tissue
NASA Astrophysics Data System (ADS)
Lyndby, Niclas H.; Kühl, Michael; Wangpraseurt, Daniel
2016-05-01
Green fluorescent protein (GFP)-like pigments have been proposed to have beneficial effects on coral photobiology. Here, we investigated the relationships between green fluorescence, coral heating and tissue optics for the massive coral Dipsastraea sp. (previously Favia sp.). We used microsensors to measure tissue scalar irradiance and temperature along with hyperspectral imaging and combined imaging of variable chlorophyll fluorescence and green fluorescence. Green fluorescence correlated positively with coral heating and scalar irradiance enhancement at the tissue surface. Coral tissue heating saturated for maximal levels of green fluorescence. The action spectrum of coral surface heating revealed that heating was highest under red (peaking at 680 nm) irradiance. Scalar irradiance enhancement in coral tissue was highest when illuminated with blue light, but up to 62% (for the case of highest green fluorescence) of this photon enhancement was due to green fluorescence emission. We suggest that GFP-like pigments scatter the incident radiation, which enhances light absorption and heating of the coral. However, heating saturates, because intense light scattering reduces the vertical penetration depth through the tissue eventually leading to reduced light absorption at high fluorescent pigment density. We conclude that fluorescent pigments can have a central role in modulating coral light absorption and heating.
Representations of race and skin tone in medical textbook imagery.
Louie, Patricia; Wilkes, Rima
2018-04-01
Although a large literature has documented racial inequities in health care delivery, there continues to be debate about the potential sources of these inequities. Preliminary research suggests that racial inequities are embedded in the curricular edification of physicians and patients. We investigate this hypothesis by considering whether the race and skin tone depicted in images in textbooks assigned at top medical schools reflect the diversity of the U.S. We analyzed 4146 images from Atlas of Human Anatomy, Bates' Guide to Physical Examination & History Taking, Clinically Oriented Anatomy, and Gray's Anatomy for Students by coding race (White, Black, and Person of Color) and skin tone (light, medium, and dark) at the textbook, chapter, and topic level. While the textbooks approximate the racial distribution of the U.S. population - 62.5% White, 20.4% Black, and 17.0% Person of Color - the skin tones represented - 74.5% light, 21% medium, and 4.5% dark - overrepresent light skin tone and underrepresent dark skin tone. There is also an absence of skin tone diversity at the chapter and topic level. Even though medical texts often have overall proportional racial representation, this is not the case for skin tone. Furthermore, racial minorities are still often absent at the topic level. These omissions may provide one route through which bias enters medical treatment.
NASA Astrophysics Data System (ADS)
Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian
The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.
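LFIT itself is a MATLAB toolkit; as a language-neutral illustration of the computational refocusing it supports, the sketch below shift-and-sums perspective (sub-aperture) views of a synthetic light field. The array layout and integer-shift scheme are simplifications of a real plenoptic pipeline.

```python
import numpy as np

def refocus(subviews, slope):
    """Shift-and-sum refocusing: each perspective view (u, v) is shifted in
    proportion to its sub-aperture offset, then all views are averaged.
    `subviews` has shape (U, V, H, W); `slope` selects the focal plane."""
    U, V, H, W = subviews.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - (U - 1) / 2)))
            dx = int(round(slope * (v - (V - 1) / 2)))
            acc += np.roll(np.roll(subviews[u, v], dy, axis=0), dx, axis=1)
    return acc / (U * V)

# Synthetic light field: a point source with 1 pixel of parallax per view.
U = V = 5
H = W = 33
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        lf[u, v, 16 + (u - 2), 16 + (v - 2)] = 1.0

in_focus = refocus(lf, slope=-1.0)   # shifts cancel the parallax
out_focus = refocus(lf, slope=0.0)   # views summed without alignment
```

At the matching slope all views align and the point reconstructs sharply; at other slopes its energy spreads over many pixels, which is the computational analogue of defocus blur.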
Hiding Information Using different lighting Color images
NASA Astrophysics Data System (ADS)
Majead, Ahlam; Awad, Rash; Salman, Salema S.
2018-05-01
The choice of host medium for a secret message is one of the important principles for designers of steganographic methods. In this study, we examined which color images best carry a secret image. The steganographic approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method offers lossless and unnoticeable changes in the contrast of the carrier color image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim of the study was to examine the process of hiding data in color images of different light intensities. The effect of embedding was examined on images classified by minimum distance and on the amount of noise and distortion in the image, using the histogram and statistical characteristics of the cover image. The results showed that images taken at different light intensities can be used efficiently for hiding data with least-significant-bit substitution; the method succeeded in concealing textual data without visibly distorting the original (low-light) image. A digital image segmentation technique was used to distinguish small regions after embedding; smooth homogeneous areas were less affected by the hiding than brightly lit areas. Dark color images can therefore be used to send a secret message between two parties for covert communication with good security.
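The paper embeds in the lifting-wavelet domain; for brevity the sketch below shows only the least-significant-bit substitution step, applied directly to spatial-domain pixel values. This is why each carrier value changes by at most one gray level, which is what keeps the change imperceptible.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels
    (in flattened order) with the message bits."""
    stego = cover.ravel().copy()
    n = len(bits)
    stego[:n] = (stego[:n] & 0xFE) | bits   # clear LSB, then write the bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the message back out of the least significant bits."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

In the paper's scheme the same substitution is applied to LWT coefficients of the color channels rather than raw pixels, but the bit-level mechanics are identical.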
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and secret images vary and can be used as the main keys with original POM and the logistic map algorithm coefficient in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
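The principle behind ghost-imaging encryption can be sketched with the basic correlation reconstruction: the random patterns act as the key, the bucket-intensity vector is the ciphertext, and correlating the two recovers the image. The paper adds wavelet sparsification, compressive sensing, logistic-map key generation, and multiple light paths on top of this toy version.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                           # toy "secret image"

n_patterns = 6000                               # random illumination patterns (key)
patterns = rng.random((n_patterns, n, n))
buckets = (patterns * obj).sum(axis=(1, 2))     # intensity vector (ciphertext)

# Correlation reconstruction: average of the patterns weighted by the
# mean-removed bucket signal, <(B - <B>) P>.
recon = ((buckets - buckets.mean())[:, None, None] * patterns).mean(axis=0)
corr = float(np.corrcoef(recon.ravel(), obj.ravel())[0, 1])
```

Without the correct patterns (or the distances and map coefficients that regenerate them) the intensity vector alone reveals nothing, which is the basis of the scheme's security.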
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
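Once particle images are detected, the velocity measurement reduces to centroid displacement between exposures divided by the inter-pulse time. A toy sketch, with assumed pixel scale and pulse spacing, and the two exposures faked as separate frames so each centroid is unambiguous:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a particle image, in pixels."""
    y, x = np.indices(img.shape)
    return np.array([(y * img).sum(), (x * img).sum()]) / img.sum()

dt = 1e-3        # seconds between excitation-laser pulses (assumed)
scale = 10e-6    # metres per pixel at the image plane (assumed)

frame1 = np.zeros((32, 32))
frame1[10:13, 10:13] = 1.0                 # particle at first pulse
frame2 = np.zeros((32, 32))
frame2[10:13, 14:17] = 1.0                 # same particle, 4 px along x

displacement_px = centroid(frame2) - centroid(frame1)
velocity = displacement_px * scale / dt    # (vy, vx) in m/s
```

In a real multiple-exposure frame the pulses can be coded (e.g. unequal spacing) so that the order of the particle images, and hence the flow direction, can be resolved.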
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, an imaging sensor can realize a finer perception of the environmental light and can thus guide more precise lighting control. Before the system operates, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera, then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks. Finally, the single-LEEM-based or multiple-LEEMs-based control is applied to achieve an optimal lighting effect. Extensive experimental results have shown that the proposed system can tune the LED lamp automatically in response to environmental luminance changes. PMID:28208781
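The closed-loop idea, comparing an objective metric of the captured image against a cluster benchmark and nudging the lamp toward it, can be sketched as a simple proportional controller. The mean-luminance metric and the gain below are illustrative stand-ins for the paper's LEEMs, not its actual definitions.

```python
def mean_luminance(image_rows):
    """Objective stand-in metric: mean pixel luminance of the captured image."""
    pixels = [p for row in image_rows for p in row]
    return sum(pixels) / len(pixels)

def control_step(image_rows, benchmark, duty, gain=0.002, lo=0.0, hi=1.0):
    """One proportional update of the LED duty cycle toward the benchmark."""
    error = benchmark - mean_luminance(image_rows)
    return min(hi, max(lo, duty + gain * error))

benchmark = 120.0                             # cluster benchmark (assumed)
duty = 0.5                                    # current LED duty cycle
dark_frame = [[80.0] * 4 for _ in range(4)]   # under-lit scene
duty = control_step(dark_frame, benchmark, duty)
```

A multiple-LEEMs controller would combine several such error terms (e.g. uniformity and glare metrics alongside luminance) before updating the lamp.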
NASA Astrophysics Data System (ADS)
Park, Dubok; Han, David K.; Ko, Hanseok
2017-05-01
Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
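The final recovery step inverts the standard haze image-formation model, I = J·t + A·(1 - t), using the two estimated parameters. The sketch below uses synthetic values for A and t rather than the glow-decomposition and entropy-based estimators described above.

```python
import numpy as np

def recover(I, A, t, t_min=0.1):
    """Invert I = J*t + A*(1 - t) for the scene radiance J.
    Transmission is floored at t_min to avoid amplifying noise
    where t approaches zero."""
    t = np.maximum(t, t_min)
    return (I - A) / t + A

rng = np.random.default_rng(5)
J = rng.random((8, 8))                  # "true" scene radiance
A = 0.8                                 # estimated atmospheric light (assumed)
t = np.full((8, 8), 0.6)                # estimated transmission map (assumed)
I = J * t + A * (1 - t)                 # synthesize the hazy observation

J_hat = recover(I, A, t)
```

Because the synthetic transmission stays above the floor, the inversion is exact here; on real nighttime images the quality depends on how well A and t were estimated.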
Thin laser light sheet microscope for microbial oceanography
NASA Astrophysics Data System (ADS)
Fuchs, Eran; Jaffe, Jules S.; Long, Richard A.; Azam, Farooq
2002-01-01
Despite a growing need, oceanographers are limited by existing technological constraints and are unable to observe aquatic microbes in their natural setting. In order to provide a simple and easily implemented solution for such studies, a new Thin Light Sheet Microscope (TLSM) has been developed. The TLSM utilizes a well-defined sheet of laser light, which has a narrow (23 micron) axial dimension over a 1 mm x 1 mm field of view. This light sheet is positioned precisely within the depth of field of the microscope's objective lens. The technique thus utilizes conventional microscope optics but replaces the illumination system. The advantages of the TLSM are twofold: first, it concentrates light only where excitation is needed, thus maximizing the efficiency of the illumination source; second, it maximizes image sharpness while minimizing the level of background noise. Particles that are not located within the objective's depth of field are not illuminated and therefore do not contribute to an out-of-focus image. Images from a prototype system that used SYBR Green I fluorescent stain to localize single bacteria are reported. The bacteria were in a relatively large and undisturbed 4 ml volume of natural seawater. The TLSM can be used for freshwater studies of bacteria with no modification. The microscope permits the observation of interactions at the microscale and has the potential to yield insights into how microbes structure pelagic ecosystems.
NASA Astrophysics Data System (ADS)
He, Xiao Dong
This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission, and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on Kirchhoff theory, and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realistic image generation can be achieved owing to the physically correct treatment of the scattering processes by the reflectance model.
Jovian Planet Finder optical system
NASA Astrophysics Data System (ADS)
Krist, John E.; Clampin, Mark; Petro, Larry; Woodruff, Robert A.; Ford, Holland C.; Illingworth, Garth D.; Ftaclas, Christ
2003-02-01
The Jovian Planet Finder (JPF) is a proposed NASA MIDEX mission to place a highly optimized coronagraphic telescope on the International Space Station (ISS) to image Jupiter-like planets around nearby stars. The optical system is an off-axis, unobscured telescope with a 1.5 m primary mirror. A classical Lyot coronagraph with apodized occulting spots is used to reduce diffracted light from the central star. In order to provide the necessary contrast for detection of a planet, scattered light from mid-spatial-frequency errors is reduced by using super-smooth optics. Recent advances in polishing optics for extreme-ultraviolet lithography have shown that a factor of >30 reduction in midfrequency errors relative to those in the Hubble Space Telescope is possible (corresponding to a reduction in scattered light of nearly 1000x). The low level of scattered and diffracted light, together with a novel utilization of field rotation introduced by the alt-azimuth ISS telescope mounting, will provide a relatively low-cost facility for not only imaging extrasolar planets, but also circumstellar disks, host galaxies of quasars, and low-mass substellar companions such as brown dwarfs.
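The quoted factor of nearly 1000x follows from the usual smooth-surface scaling, in which scattered light goes as the square of the rms surface error. A small sketch using the standard total-integrated-scatter approximation (numerical values are illustrative, not JPF specifications):

```python
import math

def tis(sigma_nm, wavelength_nm):
    """Total integrated scatter for a smooth surface at normal incidence,
    TIS ~ (4*pi*sigma/lambda)^2 -- the standard smooth-surface
    approximation, with sigma the rms surface roughness."""
    return (4 * math.pi * sigma_nm / wavelength_nm) ** 2

# a 30x reduction in rms mid-frequency error cuts scattered light by
# 30^2 = 900x, i.e. "nearly 1000x" as stated in the abstract
ratio = tis(30.0, 550.0) / tis(1.0, 550.0)
```

The quadratic dependence is why modest polishing improvements translate into order-of-magnitude gains in coronagraphic contrast.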
NASA Astrophysics Data System (ADS)
Zhou, Yi; Tang, Yan; Deng, Qinyuan; Liu, Junbo; Wang, Jian; Zhao, Lixin
2017-08-01
Dimensional metrology for micro structures plays an important role in addressing quality issues and assessing the performance of micro-fabricated products. The proposed method measures three-dimensional topography in white light interferometry through the modulation depth in the spatial frequency domain. A normalized modulation depth is first obtained in the xy plane (image plane) for each CCD image individually. After that, the modulation depth of each pixel is analyzed along the scanning direction (z-axis) to reconstruct the topography of micro samples. Owing to the characteristics of modulation depth in broadband light interferometry, the method can effectively suppress the negative influences caused by light fluctuations and external irradiance disturbance. Both theory and experiments are elaborated in detail to verify that the modulation depth-based method greatly improves the stability and sensitivity of the measurement system while maintaining satisfactory precision. The technique achieves improved robustness in complex measurement environments and has the potential to be applied to online topography measurement in fields such as chemistry and medicine.
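The per-pixel analysis along the scan axis can be illustrated with a toy version: in white-light interferometry the fringe modulation peaks where the surface sits at best focus, so the z position of maximum modulation estimates the height. This is a simplified sketch with a crude AC-amplitude envelope, not the paper's spatial-frequency-domain modulation-depth computation:

```python
import numpy as np

def surface_height(stack, z_positions):
    """Estimate surface height per pixel from a white-light interferogram
    stack of shape (nz, ny, nx): remove the DC level, take the absolute
    AC amplitude as a rough modulation proxy, and pick the z of its peak."""
    ac = stack - stack.mean(axis=0)   # remove mean (DC) intensity
    envelope = np.abs(ac)             # crude modulation envelope
    best = envelope.argmax(axis=0)    # z index of maximum modulation
    return z_positions[best]

# synthetic 2x2-pixel sample whose fringes peak at z = 2.0
z = np.linspace(0.0, 4.0, 81)
fringe = np.exp(-((z - 2.0) ** 2)) * np.cos(10 * np.pi * z)
stack = fringe[:, None, None] * np.ones((1, 2, 2))
h = surface_height(stack, z)
```

Real implementations use a proper envelope (e.g. via the analytic signal or, as here, spatial-frequency analysis) and sub-sample peak interpolation, which is where the claimed robustness to light fluctuations comes from.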
In vivo imaging of the retinal pigment epithelial cells
NASA Astrophysics Data System (ADS)
Morgan, Jessica Ijams Wolfing
The retinal pigment epithelial (RPE) cells form an important layer of the retina because they are responsible for providing metabolic support to the photoreceptors. Techniques to image the RPE layer include autofluorescence imaging with a scanning laser ophthalmoscope (SLO). However, previous studies were unable to resolve single RPE cells in vivo. This thesis describes the technique of combining autofluorescence, SLO, adaptive optics (AO), and dual-wavelength simultaneous imaging and registration to visualize the individual cells in the RPE mosaic in human and primate retina for the first time in vivo. After imaging the RPE mosaic non-invasively, the cell layer's structure and regularity were characterized using quantitative metrics of cell density, spacing, and nearest neighbor distances. The RPE mosaic was compared to the cone mosaic, and RPE imaging methods were confirmed using histology. The ability to image the RPE mosaic led to the discovery of a novel retinal change following light exposure; 568 nm exposures caused an immediate reduction in autofluorescence followed by either full recovery or permanent damage in the RPE layer. A safety study was conducted to determine the range of exposure irradiances that caused permanent damage or transient autofluorescence reductions. Additionally, the threshold exposure causing autofluorescence reduction was determined and reciprocity of radiant exposure was confirmed. Light exposures delivered by the AOSLO were not significantly different from those delivered by a uniform source. As all exposures tested were near or below the permissible light levels of safety standards, this thesis provides evidence that the current light safety standards need to be revised. Finally, with the retinal damage and autofluorescence reduction thresholds identified, the methods of RPE imaging were modified to allow successful imaging of the individual cells in the RPE mosaic while still ensuring retinal safety.
This thesis has provided a highly sensitive method for studying the in vivo morphology of individual RPE cells in normal, diseased, and damaged retinas. The methods presented here also will allow longitudinal studies for tracking disease progression and assessing treatment efficacy in human patients and animal models of retinal diseases affecting the RPE.
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviators Night-Vision-Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. NVGs consist of light intensifier tubes, which amplify low-intensity ambient illumination (starlight and moonlight), and an optical system, which together produce a bright image of the scene. However, NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
A CMOS image sensor with programmable pixel-level analog processing.
Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea
2005-11-01
A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighbor pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 μm with a fill factor of 20%. The chip was fabricated in a 0.35 μm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm² and a frame rate of 30 frames/s. The measured fixed-pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
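The absolute-difference operation over a 3 x 3 kernel described above can be sketched in software. This is a digital illustration of the principle only, not a model of the chip's analog circuitry:

```python
import numpy as np

def edge_map(img):
    """Edge detection in the spirit of the pixel architecture above: each
    pixel outputs the sum of absolute differences with its eight 3x3
    neighbours. Flat regions give zero; intensity steps give a large
    response. Borders are handled by replicating edge pixels."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:padded.shape[0] - 1 + dy,
                             1 + dx:padded.shape[1] - 1 + dx]
            out += np.abs(img - shifted)
    return out

# a vertical step edge: zero response in flat regions, strong at the step
img = np.array([[0, 0, 10, 10]] * 3)
e = edge_map(img)
```

On the sensor the same comparison is done in the analog domain within each pixel, which is what makes the processing real-time at negligible power cost.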
Gallicchio, Lisa; Helzlsouer, Kathy J; Audlin, Kevin M; Miller, Charles; MacDonald, Ryan; Johnston, Mary; Barrueto, Fermin F
2015-01-01
To examine whether the addition of narrow band imaging (NBI) to traditional white light imaging during laparoscopic surgery impacts pain and quality of life (QOL) at 3 and 6 months after surgery among women with suspected endometriosis and/or infertility. A randomized controlled trial (Canadian Task Force classification level I). The trial was conducted in 2 medical centers. From October 2011 to November 2013, 167 patients undergoing laparoscopic examination for suspected endometriosis and/or infertility were recruited. The analytic study sample includes 148 patients with pain and QOL outcome data. Patients were randomized in a 3:1 ratio to receive white light imaging followed by NBI (WL/NBI) or white light imaging only (WL/WL). Questionnaires were administered at baseline and at 3- and 6-month follow-up time points. Average and most severe pain at each time point were assessed using a 10-cm visual analog scale. QOL was measured using the Endometriosis Health Profile-30. Baseline characteristics were similar for the study groups. The WL/NBI and WL/WL groups had similar reductions in pain at 3 and 6 months. In addition, QOL improved similarly for both the WL/NBI and WL/WL groups at 3 and 6 months. Laparoscopic surgery for suspected endometriosis is associated with a reduction in pain and an improvement in QOL. The differences in pain reduction and QOL improvement, which are noted at 3 months and remain stable at 6 months after surgery, are similar for those undergoing surgery with WL/NBI compared with those undergoing surgery under traditional white light conditions. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lance, C.; Eather, R.
1993-09-30
A low-light-level monochromatic imaging system was designed and fabricated which was optimized to detect and record optical emissions associated with high-power rf heating of the ionosphere. The instrument is capable of detecting very low intensities, of the order of 1 Rayleigh, from typical ionospheric atomic and molecular emissions. This is achieved through co-adding of ON images during heater pulses and subtraction of OFF (background) images between pulses. Images can be displayed and analyzed in real time and stored on optical disc for later analysis. Full image processing software is provided which was customized for this application and uses menu or mouse user interaction.
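The pulse-synchronous co-add/subtract scheme described above is simple to express. A minimal sketch with illustrative values (a 1-count emission feature on a 100-count airglow background):

```python
import numpy as np

def heater_signal(on_frames, off_frames):
    """Co-add the ON images taken during heater pulses and subtract the
    scaled mean OFF (background) image. For photon-limited noise,
    co-adding N pulses improves SNR by roughly sqrt(N)."""
    on_sum = np.sum(on_frames, axis=0)
    background = np.mean(off_frames, axis=0)
    return on_sum - len(on_frames) * background

background = np.full((2, 2), 100.0)
emission = np.array([[0.0, 1.0], [0.0, 0.0]])  # faint heater-induced glow
on_frames = [background + emission] * 4        # 4 heater pulses (noiseless toy)
off_frames = [background] * 3                  # between-pulse backgrounds
signal = heater_signal(on_frames, off_frames)
```

With real data the OFF frames must bracket the ON frames closely in time, since the airglow background itself drifts on minute timescales.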
Imaging spectrometer wide field catadioptric design
Chrisp, Michael P. [Danville, CA]
2008-08-19
A wide field catadioptric imaging spectrometer with an immersive diffraction grating that compensates optical distortions. The catadioptric design has zero Petzval field curvature. The imaging spectrometer comprises an entrance slit for transmitting light, a system with a catadioptric lens and a dioptric lens for receiving the light and directing the light, an immersion grating, and a detector array. The entrance slit, the system for receiving the light, the immersion grating, and the detector array are positioned wherein the entrance slit transmits light to the system for receiving the light and the system for receiving the light directs the light to the immersion grating and the immersion grating receives the light and directs the light through the system for receiving the light to the detector array.
Towards dosimetry for photodynamic diagnosis with the low-level dose of photosensitizer.
Buzalewicz, Igor; Hołowacz, Iwona; Ulatowska-Jarża, Agnieszka; Podbielska, Halina
2017-08-01
Contemporary medicine does not address the issue of dosimetry in photodynamic diagnosis (PDD) but follows the photosensitizer (PS) producers' recommendations. Most preclinical and clinical PDD studies indicate a considerable variation in the possibility of visualization and treatment, e.g., in the case of cervical lesions. Although some of these variations can be caused by different histological subtypes or various tumor geometries, the issue of varying PS concentration in the tumor tissue volume is definitely an important factor. Therefore, there is a need to establish an objective and systematic PDD dosimetry protocol regarding doses of light and photosensitizers. Four different irradiation sources investigated in the PDD literature were used for PS excitation. The PS luminescence was examined by means of non-imaging (spectroscopic) and imaging (wide- and narrow-field-of-view) techniques. A methodology for low-level-intensity photoluminescence (PL) characterization and a dedicated image processing algorithm for the analysis of PS luminescence images are proposed. Further, the penetration of HeLa cell cultures by the PS was studied by confocal microscopy. Reducing the PS dose while choosing proper photoexcitation conditions decreases the cost and side effects of the PDD procedure without affecting diagnostic efficiency. We determined in vitro the minimum incubation time and photosensitizer concentration of Photolon for diagnostic purposes, for which the Photolon PL can still be observed. It was demonstrated that quantification of PS concentration, choice of a proper photoexcitation source, appropriate adjustment of the light dose, and PS penetration of cancer cells may improve the performance of low-level luminescence photodynamic diagnostics. The practical effectiveness of PDD strongly depends on irradiation source parameters (bandwidth, maximum intensity, half-width), and their optimization is the main conditioning factor for low-level-intensity, low-cost PDD.
Copyright © 2017 Elsevier B.V. All rights reserved.
McElwain, Elizabeth F.; Bohnert, Hans J.; Thomas, John C.
1992-01-01
In Mesembryanthemum crystallinum, phosphoenolpyruvate carboxylase is synthesized de novo in response to osmotic stress, as part of the switch from C3-photosynthesis to Crassulacean acid metabolism. To better understand the environmental signals involved in this pathway, we have investigated the effects of light on the induced expression of phosphoenolpyruvate carboxylase mRNA and protein in response to stress by 400 millimolar NaCl or 10 micromolar abscisic acid in hydroponically grown plants. When plants were grown in high-intensity fluorescent or incandescent light (850 microeinsteins per square meter per second), NaCl and abscisic acid induced approximately an eightfold accumulation of phosphoenolpyruvate carboxylase mRNA when compared to untreated controls. Levels of phosphoenolpyruvate carboxylase protein were high in these abscisic acid- and NaCl-treated plants, and detectable in the unstressed control. Growth in high-intensity incandescent (red) light resulted in approximately twofold higher levels of phosphoenolpyruvate carboxylase mRNA in the untreated plants when compared to control plants grown in high-intensity fluorescent light. In low light (300 microeinsteins per square meter per second fluorescent), only NaCl induced mRNA levels significantly above the untreated controls. Low light grown abscisic acid- and NaCl-treated plants contained a small amount of phosphoenolpyruvate carboxylase protein, whereas the (untreated) control plants did not contain detectable amounts of phosphoenolpyruvate carboxylase. Environmental stimuli, such as light and osmotic stress, exert a combined effect on gene expression in this facultative halophyte. PMID:16668999
Compact imaging spectrometer utilizing immersed gratings
Lerner, Scott A.
2005-12-20
A compact imaging spectrometer comprising an entrance slit for directing light; lens means for receiving the light, refracting the light, and focusing the light; an immersed diffraction grating that receives the light from the lens means and diffracts the light, the immersed diffraction grating directing the diffracted light back to the lens means; and a detector that receives the light from the lens means.
ERIC Educational Resources Information Center
Physics Teacher, 1979
1979-01-01
Some topics included are: the relative merits of a programmable calculator and a microcomputer; the advantages of acquiring a sound-level meter for the laboratory; how to locate a virtual image in a plane mirror; center of gravity of a student; and how to demonstrate interference of light using two cords.
The effect of microchannel plate gain depression on PAPA photon counting cameras
NASA Astrophysics Data System (ADS)
Sams, Bruce J., III
1991-03-01
PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by adjusting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has a highly concentrated bright spot. This paper describes how to optimize PAPA camera electronics and describes six techniques which can avoid or minimize addressing errors.
Multi Spectral Fluorescence Imager (MSFI)
NASA Technical Reports Server (NTRS)
Caron, Allison
2016-01-01
Genetic transformation with in vivo reporter genes for fluorescent proteins can be performed on a variety of organisms to address fundamental biological questions. Model organisms that may utilize an ISS imager include unicellular organisms (Saccharomyces cerevisiae), plants (Arabidopsis thaliana), and invertebrates (Caenorhabditis elegans). The multispectral fluorescence imager (MSFI) will have the capability to accommodate 10 cm x 10 cm Petri plates, various sized multi-well culture plates, and other custom culture containers. Features will include programmable temperature and light cycles, ethylene scrubbing (less than 25 ppb), CO2 control (between 400 ppm and ISS-ambient levels in units of 100 ppm) and sufficient airflow to prevent condensation that would interfere with imaging.
A trillion frames per second: the techniques and applications of light-in-flight photography.
Faccio, Daniele; Velten, Andreas
2018-06-14
Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on 'light in flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen, thereby extracting information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.
Self-imaging of partially coherent light in graded-index media.
Ponomarenko, Sergey A
2015-02-15
We demonstrate that partially coherent light beams of arbitrary intensity and spectral degree of coherence profiles can self-image in linear graded-index media. The results are applicable to imaging with noisy spatial or temporal light sources.
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
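The idea of reference-point normalization can be illustrated generically: intensities measured from on-device reference patches define a mapping that cancels global lighting changes. This is a hypothetical sketch only (the paper's exact triple-reference scheme and FFT pre-processing are not reproduced; patch values and nominal levels are invented for illustration):

```python
import numpy as np

def normalize(signal, refs, nominal=(0.0, 0.5, 1.0)):
    """Fit a linear map through three on-device reference intensities
    (e.g. black/gray/white patches with known nominal values) and apply
    it to the assay signal, so global illumination scaling cancels out."""
    a, b = np.polyfit(np.asarray(refs, float), np.asarray(nominal, float), 1)
    return a * signal + b

# under dimmer lighting all intensities halve, but the normalized value
# is unchanged because the reference patches scale with the signal
v_bright = normalize(150.0, (50.0, 150.0, 250.0))
v_dim = normalize(75.0, (25.0, 75.0, 125.0))
```

Three references rather than two give a least-squares fit that is more tolerant of noise in any single patch, which matters when patches sit in different parts of a lighting gradient.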
3D reconstruction of internal structure of animal body using near-infrared light
NASA Astrophysics Data System (ADS)
Tran, Trung Nghia; Yamamoto, Kohei; Namita, Takeshi; Kato, Yuji; Shimizu, Koichi
2014-03-01
To realize three-dimensional (3D) optical imaging of the internal structure of an animal body, we have developed a new technique to reconstruct CT images from two-dimensional (2D) transillumination images. In transillumination imaging, the image is blurred due to the strong scattering in tissue. We previously developed a scattering suppression technique using the point spread function (PSF) for a fluorescent light source in the body. In this study, we have newly proposed a technique to apply this PSF to images of unknown light-absorbing structures. The effectiveness of the proposed technique was examined in experiments with a model phantom and a mouse. In the phantom experiment, absorbers were placed in a tissue-equivalent medium to simulate the light-absorbing organs in a mouse body. Near-infrared light illuminated one side of the phantom and the image was recorded with a CMOS camera from the other side. Using the proposed technique, the scattering effect was efficiently suppressed and the absorbing structure could be visualized in the 2D transillumination image. Using the 2D images obtained in many different orientations, we could reconstruct the 3D image. In the mouse experiment, an anesthetized mouse was held in an acrylic cylindrical holder. We could visualize internal organs such as the kidneys through the mouse's abdomen using the proposed technique. The 3D image of the kidneys and a part of the liver was reconstructed. Through these experimental studies, the feasibility of practical 3D imaging of the internal light-absorbing structure of a small animal was verified.
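PSF-based scattering suppression of this kind is commonly realized as a deconvolution. A generic stand-in sketch using Wiener deconvolution with an assumed Gaussian PSF (the paper's measured PSF model and parameters are not reproduced; `k` is an illustrative regularization constant):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Suppress scatter-induced blur by Wiener deconvolution: divide by
    the PSF's transfer function H, regularized by k so that frequencies
    where |H| is small do not amplify noise."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# blur a point absorber with a centered Gaussian PSF, then restore it
img = np.zeros((16, 16)); img[8, 8] = 1.0
y, x = np.mgrid[:16, :16]
psf = np.exp(-((y - 8) ** 2 + (x - 8) ** 2) / 4.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The key practical difficulty, which the abstract addresses, is that the effective PSF in tissue depends on the depth of the structure, so a single fixed kernel is only an approximation.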
NASA Astrophysics Data System (ADS)
Momin, Md. Abdul; Kondo, Naoshi; Kuramoto, Makoto; Ogawa, Yuichi; Shigi, Tomoo
2011-06-01
Research was conducted to acquire knowledge of the ultraviolet and visible spectra from 300-800 nm of some common varieties of Japanese citrus, to investigate the best wavelengths for fluorescence excitation and the resulting fluorescence wavelengths, and to provide a scientific background for the best-quality fluorescent imaging technique for detecting surface defects of citrus. A Hitachi U-4000 PC-based microprocessor-controlled spectrophotometer was used to measure the absorption spectrum and a Hitachi F-4500 spectrophotometer was used for the fluorescence and excitation spectra. We analyzed the spectra, and the selected varieties of citrus were categorized into four groups of known fluorescence level, namely strong, medium, weak and no fluorescence. The level of fluorescence of each variety was also examined using a machine vision system. We found that LEDs or UV lamps around 340-380 nm are appropriate as lighting devices for acquiring the best-quality fluorescent image of the citrus varieties to examine their fluorescence intensity. Therefore an image acquisition device was constructed with three different lighting panels: UV LEDs with a peak at 365 nm, blacklight blue (BLB) lamps with a peak at 350 nm, and UV-B lamps with a peak at 306 nm. The results from fluorescent images also revealed that the findings from the measured spectra worked properly and can be used for practical applications such as detecting rotten, injured or damaged parts of a wide variety of citrus.
Analysis of Low-Light and Night-Time Stereo-Pair Images for Photogrammetric Reconstruction
NASA Astrophysics Data System (ADS)
Santise, M.; Thoeni, K.; Roncella, R.; Diotri, F.; Giacomini, A.
2018-05-01
Rockfalls and rockslides represent a significant risk to human lives and infrastructure because of the high levels of energy involved in these phenomena. Generally, these events occur under specific environmental conditions, such as temperature variations between day and night, that can contribute to the triggering of structural instabilities in the rock wall and the detachment of blocks and debris. Monitoring and geostructural characterization of the wall are required to reduce the potential hazard and to improve the management of risk at the bottom of the slopes affected by such phenomena. In this context, close-range photogrammetry is widely used for the monitoring of high-mountain terrain and rock walls in mine sites, allowing for periodic surveys of rockfalls and wall movements. This work focuses on the analysis of low-light and night-time images from a fixed-base stereo-pair photogrammetry system. The aim is to study the reliability of images acquired during the night for producing digital surface models (DSMs) for change detection. The images are captured by a high-sensitivity DSLR camera using various settings accounting for different values of ISO, aperture and exposure time. For each acquisition, the DSM is compared to a photogrammetric reference model produced from images captured in optimal illumination conditions. Results show that, with a high ISO level and the same aperture, extending the exposure time improves the quality of the point clouds in terms of completeness and accuracy of the photogrammetric models.
Imaging Lenticular Autofluorescence in Older Subjects.
Charng, Jason; Tan, Rose; Luu, Chi D; Sadigh, Sam; Stambolian, Dwight; Guymer, Robyn H; Jacobson, Samuel G; Cideciyan, Artur V
2017-10-01
To evaluate whether a practical method of imaging lenticular autofluorescence (AF) can provide an individualized measure correlated with age-related lens yellowing in older subjects undergoing tests involving shorter wavelength lights. Lenticular AF was imaged with 488-nm excitation using a confocal scanning laser ophthalmoscope (cSLO) routinely used for retinal AF imaging. There were 75 older subjects (ages 47-87) at two sites; a small cohort of younger subjects served as controls. At one site, the cSLO was equipped with an internal reference to allow quantitative AF measurements; at the other site, reduced-illuminance AF imaging (RAFI) was used. In a subset of subjects, lens density index was independently estimated from dark-adapted spectral sensitivities performed psychophysically. Lenticular AF intensity was significantly higher in the older eyes than the younger cohort when measured with the internal reference (59.2 ± 15.4 vs. 134.4 ± 31.7 gray levels; P < 0.05) as well as when recorded with RAFI without the internal reference (10.9 ± 1.5 vs. 26.1 ± 5.7 gray levels; P < 0.05). Lenticular AF was positively correlated with age; however, there could also be large differences between individuals of similar age. Lenticular AF intensity correlated well with lens density indices estimated from psychophysical measures. Lenticular AF measured with a retinal cSLO can provide a practical and individualized measure of lens yellowing, and may be a good candidate to distinguish between preretinal and retinal deficits involving short-wavelength lights in older eyes.
Ageing and proton irradiation damage of a low voltage EMCCD in a CMOS process
NASA Astrophysics Data System (ADS)
Dunford, A.; Stefanov, K.; Holland, A.
2018-02-01
Electron Multiplying Charge Coupled Devices (EMCCDs) have revolutionised low light level imaging, providing highly sensitive detection capabilities. Implementing Electron Multiplication (EM) in Charge Coupled Devices (CCDs) can increase the Signal to Noise Ratio (SNR) and lead to further developments in low light level applications such as improvements in image contrast and single photon imaging. Demand has grown for EMCCD devices with properties traditionally restricted to Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, such as lower power consumption and higher radiation tolerance. However, EMCCDs are known to experience an ageing effect, such that the gain gradually decreases with time. This paper presents results detailing EM ageing in an Electron Multiplying Complementary Metal-Oxide-Semiconductor (EMCMOS) device and its effect on several device characteristics such as Charge Transfer Inefficiency (CTI) and thermal dark signal. When operated at room temperature, an average decrease in gain of over 20% after an operational period of 175 hours was detected. With many image sensors deployed in harsh radiation environments, the radiation hardness of the device following proton irradiation was also tested. This paper presents the results of a proton irradiation completed at the Paul Scherrer Institut (PSI) at a 10 MeV equivalent fluence of 4.15×10¹⁰ protons/cm². The pre-irradiation characterisation, irradiation methodology and post-irradiation results are detailed, demonstrating an increase in dark current and a decrease in its activation energy. Finally, this paper presents a comparison of the damage caused by EM gain ageing and proton irradiation.
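The SNR benefit of electron multiplication, and why gain ageing matters, can be seen in the standard EMCCD noise budget: the gain divides the effective read noise, at the cost of an excess noise factor F (approximately sqrt(2) at high gain) on the shot noise. A sketch with illustrative parameter values, not measurements from this device:

```python
import math

def emccd_snr(signal_e, dark_e, read_noise_e, gain, excess=math.sqrt(2)):
    """SNR of an EM-amplified pixel: the stochastic multiplication adds an
    excess noise factor F to the shot noise (variance scales by F^2),
    while the effective read noise is divided by the EM gain."""
    shot_var = excess ** 2 * (signal_e + dark_e)
    read_var = (read_noise_e / gain) ** 2
    return signal_e / math.sqrt(shot_var + read_var)

# a faint 5 e- signal with 1 e- dark charge and 10 e- read noise:
# high EM gain makes the read noise negligible despite the excess noise
snr_unity = emccd_snr(5.0, 1.0, 10.0, gain=1.0, excess=1.0)
snr_em = emccd_snr(5.0, 1.0, 10.0, gain=1000.0)
```

This is also why the ageing effect reported above is operationally serious: as the gain decays, the read-noise term grows back and the low-light advantage erodes.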
NASA Astrophysics Data System (ADS)
Wade, Cherrie; Brennan, Patrick C.; Mc Entee, Mark F.
2005-04-01
Diagnostic efficacy in soft-copy reporting relies heavily on the quality of workstation monitors, and an investigation performed in 2002 demonstrated that CRT monitors in Dublin imaging departments were not operating at optimal levels. The current work examines the performance of CRTs being used in Dublin and other parts of Ireland to establish whether the problems reported in the earlier work have been rectified. All hospitals performing soft-copy reporting for general radiology using CRTs were included in the work. Examination of ambient lighting, calibration of monitors and analysis of CRT performance using the SMPTE test pattern and a selection of the AAPM test images was performed. Maximum luminance, spatial uniformity of luminance, temporal luminance stability, gamma, geometry, sharpness, veiling glare and spatial resolution of each monitor were evaluated. Ambient lighting in all reporting areas was within recommended levels. All the monitors were calibrated appropriately and were performing at acceptable levels for maximum luminance and temporal stability, and only one of the thirty-three investigated failed to reach the standard for spatial uniformity. In contrast, a number of the CRTs investigated showed poor adherence to acceptable levels for geometrical distortions, veiling glare and spatial resolution, all of which are important influences on image quality. Gamma values also appeared to be low for a number of monitors, but this interpretation is provisional and subject to the establishment of ratified guideline values. The results demonstrate that although some improvement on the previous situation is evident, greater adherence to acceptable levels is required for certain parameters.
NASA Astrophysics Data System (ADS)
Agrawal, Anant
Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but comparatively little effort has gone into developing tools to assess the performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure that information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant for cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe device and tissue characteristics influencing the information content of an OCT image.
This model is first tested in simple geometric configurations to understand its accuracy and limitations, then a highly realistic representation of a biological cell, the retinal cone photoreceptor, is created and its resulting OCT signals studied. The phantoms and their associated test methods have successfully yielded novel types of data on the specific performance parameters of interest, which can feed standardization efforts within the OCT community. The level of signal detail provided by the computational model is unprecedented and gives significant insights into the effects of subcellular structures on OCT signals. Together, the outputs of this research effort serve as new tools in the toolkit to examine the intricate details of how and how well OCT devices produce information-rich images of biological tissue.
Plenoptic imaging with second-order correlations of light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Scarcelli, Giuliano; Garuccio, Augusto; D'Angelo, Milena
2016-01-01
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. We demonstrate that it is possible to implement plenoptic imaging through second-order correlations of chaotic light, thus making it possible to overcome the typical limitations of classical plenoptic devices.
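As a rough illustration of imaging with second-order correlations of chaotic light, a minimal one-dimensional ghost-imaging-style simulation (an assumed toy geometry, not the authors' plenoptic setup) recovers an object from intensity correlations alone:

```python
import numpy as np

# Toy 1-D correlation-imaging sketch: spatially resolved chaotic (thermal)
# speckle illuminates both a reference detector and an object whose total
# transmitted intensity is collected by a "bucket" detector. The image is
# recovered from the second-order correlation <I(x) B> - <I(x)><B>.
# Geometry and speckle statistics are illustrative assumptions.
rng = np.random.default_rng(0)
n_pix, n_frames = 64, 100_000

obj = np.zeros(n_pix)
obj[20:40] = 1.0                       # simple slit-like object (assumed)

frames = rng.exponential(1.0, size=(n_frames, n_pix))  # thermal intensity stats
bucket = frames @ obj                  # bucket signal for each speckle frame

# Second-order correlation of each pixel with the bucket signal.
g2 = (frames * bucket[:, None]).mean(axis=0) - frames.mean(axis=0) * bucket.mean()
g2 /= g2.max()                         # normalized reconstruction of obj
```

For delta-correlated thermal speckle the covariance of a pixel with the bucket is proportional to the object's transmission at that pixel, so `g2` reproduces `obj` up to statistical noise that shrinks with the number of frames.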
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, and thus can achieve better recognition accuracy. Previously, multispectral palmprint images were treated as a kind of multi-modal biometric, and fusion at the image level or matching-score level was used. However, some spectral information is lost during image-level or matching-score-level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which fully utilizes the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively to the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of the two distances and a nearest-neighbor classifier were employed for the recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
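The quaternion representation can be sketched as a four-channel array, one channel per illumination, with the quaternion Euclidean distance reducing to a component-wise L2 norm; the random images below are illustrative stand-ins for the PCA/DWT features described above:

```python
import numpy as np

# Sketch: encode a multispectral palmprint as a quaternion matrix
# q = R + G*i + B*j + NIR*k, stored as an (H, W, 4) real array, and compare
# two palms with the quaternion Euclidean distance (component-wise L2 norm).
# The random "palms" below are illustrative stand-ins for real features.
def quaternion_matrix(r, g, b, nir):
    return np.stack([r, g, b, nir], axis=-1)

def quaternion_distance(q1, q2):
    # |q| of a quaternion is the L2 norm of its four components, so the
    # distance between two quaternion matrices is a plain Euclidean norm.
    return float(np.sqrt(np.sum((q1 - q2) ** 2)))

rng = np.random.default_rng(1)
gallery = [quaternion_matrix(*rng.random((4, 32, 32))) for _ in range(3)]
probe = gallery[2] + 0.01 * rng.standard_normal((32, 32, 4))

# Nearest-neighbor decision: assign the probe to the closest gallery palm.
match = min(range(3), key=lambda i: quaternion_distance(probe, gallery[i]))
```

In the full method the distance would be computed on PCA and DWT feature vectors rather than raw pixels, but the quaternion arithmetic reduces to the same component-wise norm.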
NASA Astrophysics Data System (ADS)
Zabarylo, U.; Minet, O.
2010-01-01
Investigations into the application of optical procedures for the diagnosis of rheumatism using scattered light images are still at an early stage, both in terms of new image-processing methods and of subsequent clinical application. For semi-automatic diagnosis using laser light, the multispectral scattered light images are registered and overlaid into pseudo-coloured images, which convey the diagnostically essential content by visually highlighting pathological changes.
Twin imaging phenomenon of integral imaging.
Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi
2018-05-14
The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are only suited to integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused illumination reflective integral imaging system. Interactive twin images, comprising a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems of 80 μm thickness have also been made to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED is moving, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.
Assessment of oral mucosal lesions with autofluorescence imaging and reflectance spectroscopy.
Lalla, Yastira; Matias, Marie Anne T; Farah, Camile S
2016-08-01
The aim of this prospective study was to evaluate the efficacy of a new form of autofluorescence imaging and tissue reflectance spectroscopy (Identafi, DentalEZ) in examining patients with oral mucosal lesions. The authors examined 88 patients with 231 oral mucosal lesions by conventional oral examination (COE) using white-light illumination and ×2.5 magnification loupes, followed by examination using Identafi. The authors noted fluorescence visualization loss, the presence of blanching, and diffuseness of vasculature. They performed incisional biopsies to provide definitive histopathologic diagnosis. Identafi's white light produced lesion visibility and border distinctness equivalent to COE. Identafi's violet light displayed a sensitivity of 12.5% and specificity of 85.4% for detection of oral epithelial dysplasia (OED). The authors noted visible vasculature using the green-amber light in 40.9% of lesions. Identafi's intraoral white light provided detailed visualization of oral mucosal lesions comparable with examination using an extraoral white-light source with magnification. A high level of clinical experience is required to interpret the results of autofluorescence examination, as the violet light displayed low sensitivity for detection of OED. The green-amber light provided additional clinical information in relation to underlying vasculature and inflammation of lesions. Examination using Identafi can provide clinicians with more clinical data than a standard COE with yellow incandescent light, but the clinical and optical findings should be interpreted as a whole and not in isolation. Clinicians should use the light features of Identafi in a sequential and differential manner. Copyright © 2016 American Dental Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle-of-arrival information for light rays originating from across the target, allowing range to target and a 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images is generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction.
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
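The near-field figures quoted above follow from the far-field (Fraunhofer) criterion d = 2D²/λ; the 550-nm wavelength below is an assumed mid-visible value:

```python
# Far-field (Fraunhofer) criterion d = 2 * D**2 / wavelength: targets closer
# than d lie in the collecting optic's near field, as light field ranging
# requires. The 550 nm wavelength is an assumed mid-visible value.
def fraunhofer_distance(aperture_m, wavelength_m=550e-9):
    return 2.0 * aperture_m ** 2 / wavelength_m

d_1m = fraunhofer_distance(1.0)      # ~3.6e6 m, i.e. roughly 3,500 km
d_14in = fraunhofer_distance(0.356)  # 14-inch aperture: ~4.6e5 m (~460 km)
print(f"1-m telescope near field: {d_1m / 1e3:.0f} km")
print(f"14-inch telescope near field: {d_14in / 1e3:.0f} km")
```

At 550 nm a 1-m aperture gives about 3,600 km, consistent with the ~3,500 km quoted above, while a 14-inch aperture reaches only a few hundred kilometres, which is why the ground-target experiments are scalable but do not themselves reach LEO ranges.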
Imaging skin pathologies with polarized light: Empirical and theoretical studies
NASA Astrophysics Data System (ADS)
Ramella-Roman, Jessica C.
The use of polarized light imaging can facilitate the determination of skin cancer borders before a Mohs surgery procedure. Linearly polarized light that illuminates the skin is backscattered by the superficial layers where cancer often arises and is randomized by the collagen fibers. The superficially backscattered light can be distinguished from the diffusely reflected light using a detector analyzer that is sequentially oriented parallel and perpendicular to the source polarization. A polarized image pol = (parallel - perpendicular) / (parallel + perpendicular) is generated. This image has a higher contrast to the superficial skin layers than simple total reflectance images. Pilot clinical trials were conducted with a small hand-held device for the accumulation of a library of lesions to establish the efficacy of polarized light imaging in vivo. It was found that melanoma exhibits a high contrast in polarized light imaging, as do basal and sclerosing cell carcinomas. Mechanisms of polarized light scattering from different tissues and tissue phantoms were studied in vitro. Parameters such as depth of depolarization (DOD), retardance, and birefringence were studied theoretically and experimentally. Polarized light traveling through different tissues (skin, muscle, and liver) depolarized after a few hundred microns. Highly birefringent materials such as skin (DOD = 300 μm at 696 nm) and muscle (DOD = 370 μm at 696 nm) depolarized light faster than less birefringent materials such as liver (DOD = 700 μm at 696 nm). Light depolarization can also be attributed to scattering. Three Monte Carlo programs for modeling polarized light transfer in scattering media were implemented to evaluate these mechanisms. Simulations conducted with the Monte Carlo programs showed that small-diameter spheres have different mechanisms of depolarization than larger ones. The models also showed that the anisotropy parameter g strongly influences the depolarization mechanism. (Abstract shortened by UMI.)
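A minimal sketch of forming the polarized image from the parallel and perpendicular frames (the epsilon guard against division by zero is an implementation assumption, not part of the original method):

```python
import numpy as np

# Degree-of-polarization image: pol = (par - per) / (par + per).
# A small epsilon (assumed) guards against division by zero in dark pixels.
def polarization_image(par, per, eps=1e-9):
    par = par.astype(float)
    per = per.astype(float)
    return (par - per) / (par + per + eps)

par = np.array([[4.0, 1.0], [2.0, 0.0]])
per = np.array([[2.0, 1.0], [0.0, 0.0]])
pol = polarization_image(par, per)
# Superficially backscattered light keeps its polarization (pol -> 1),
# while multiply scattered, depolarized light averages out (pol -> 0).
```

Pixels dominated by superficial backscatter approach 1, while pixels dominated by deep, depolarized light approach 0, which is the contrast mechanism exploited for delineating lesion borders.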
The Reduction of Retinal Autofluorescence Caused by Light Exposure
Morgan, Jessica I. W.; Hunter, Jennifer J.; Merigan, William H.; Williams, David R.
2009-01-01
Purpose We have previously shown that long exposure to 568 nm light at levels below the maximum permissible exposure safety limit produces retinal damage preceded by a transient reduction in the autofluorescence of retinal pigment epithelial (RPE) cells in vivo. Here, we determine how the effects of exposure power and duration combine to produce this autofluorescence reduction and find the minimum exposure causing a detectable autofluorescence reduction. Methods Macaque retinas were imaged using a fluorescence adaptive optics scanning laser ophthalmoscope to resolve individual RPE cells in vivo. The retina was exposed to 568 nm light over a square subtending 0.5° with energies ranging from 1 J/cm2 to 788 J/cm2, where power and duration were independently varied. Results In vivo exposures of 5 J/cm2 and higher caused an immediate decrease in autofluorescence followed by either full autofluorescence recovery (exposures ≤ 210 J/cm2) or permanent RPE cell damage (exposures ≥ 247 J/cm2). No significant autofluorescence reduction was observed for exposures of 2 J/cm2 and lower. Reciprocity of exposure power and duration held for the exposures tested, implying that the total energy delivered to the retina, rather than its distribution in time, determines the amount of autofluorescence reduction. Conclusions That reciprocity holds is consistent with a photochemical origin, which may or may not cause retinal degeneration. The implementation of safe methods for delivering light to the retina requires a better understanding of the mechanism causing autofluorescence reduction. Finally, RPE imaging was demonstrated using light levels that do not cause a detectable reduction in autofluorescence. PMID:19628734
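Because reciprocity held, the outcome depends only on the total radiant exposure (power × duration). The helper below simply encodes the thresholds reported in this abstract, returning the untested gaps as indeterminate:

```python
# Classify a 568-nm retinal radiant exposure (J/cm^2) by the outcomes
# reported above. Total dose = irradiance (W/cm^2) * duration (s), which
# suffices because power/duration reciprocity held in these experiments.
# Doses falling between the tested ranges are returned as "untested".
def af_outcome(dose_j_per_cm2):
    if dose_j_per_cm2 <= 2.0:
        return "no detectable AF reduction"
    if 5.0 <= dose_j_per_cm2 <= 210.0:
        return "transient AF reduction, full recovery"
    if dose_j_per_cm2 >= 247.0:
        return "AF reduction with permanent RPE damage"
    return "untested"

# Reciprocity example: 0.15 W/cm^2 for 10 s delivers 1.5 J/cm^2.
print(af_outcome(0.15 * 10.0))
```

The same total dose delivered as higher power for a shorter time, or lower power for a longer time, lands in the same category, which is the practical meaning of the reciprocity result.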
Jiang, Lide; Wang, Menghua
2013-09-20
A new flag/masking scheme has been developed for identifying stray light and cloud shadow pixels that significantly impact the quality of satellite-derived ocean color products. Various case studies have been carried out to evaluate the performance of the new cloud contamination flag/masking scheme on ocean color products derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP). These include direct visual assessments, detailed quantitative case studies, objective statistic analyses, and global image examinations and comparisons. The National Oceanic and Atmospheric Administration (NOAA) Multisensor Level-1 to Level-2 (NOAA-MSL12) ocean color data processing system has been used in the study. The new stray light and cloud shadow identification method has been shown to outperform the current stray light flag in both valid data coverage and data quality of satellite-derived ocean color products. In addition, some cloud-related flags from the official VIIRS-SNPP data processing software, i.e., the Interface Data Processing System (IDPS), have been assessed. Although the data quality with the IDPS flags is comparable to that of the new flag implemented in the NOAA-MSL12 ocean color data processing system, the valid data coverage from the IDPS is significantly less than that from the NOAA-MSL12 using the new stray light and cloud shadow flag method. Thus, the IDPS flag/masking algorithms need to be refined and modified to reduce the pixel loss, e.g., the proposed new cloud contamination flag/masking can be implemented in IDPS VIIRS ocean color data processing.
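A common way to flag stray-light-affected pixels near clouds, given here as a generic sketch and not the specific NOAA-MSL12 algorithm, is to dilate the cloud mask by a fixed halo:

```python
import numpy as np

# Sketch: flag pixels within `halo` pixels of any cloud pixel as potentially
# stray-light contaminated, by dilating the cloud mask. This is a generic
# dilation approach assumed for illustration, not the NOAA-MSL12 algorithm.
def dilate8(mask):
    # One step of 8-connected binary dilation (np.roll wraps at the image
    # edges, which is acceptable for this interior-cloud sketch).
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def stray_light_flag(cloud_mask, halo=2):
    dilated = cloud_mask
    for _ in range(halo):
        dilated = dilate8(dilated)
    return dilated & ~cloud_mask       # flag the halo, not the cloud itself

cloud = np.zeros((7, 7), dtype=bool)
cloud[3, 3] = True
flag = stray_light_flag(cloud)         # 5x5 halo minus the cloud pixel
```

The trade-off discussed in the abstract is visible here: a wider halo is safer against contamination but discards more valid ocean pixels, which is exactly the coverage-versus-quality balance the new flag improves.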
Tsai, F Y; Coruzzi, G
1991-01-01
Asparagine synthetase (AS) mRNA in Pisum sativum accumulates preferentially in plants grown in the dark. Nuclear run-on experiments demonstrate that expression of both the AS1 and AS2 genes is negatively regulated by light at the level of transcription. A decrease in the transcriptional rate of the AS1 gene can be detected as early as 20 min after exposure to light. Time course experiments reveal that the levels of AS mRNA fluctuate dramatically during a "normal" light/dark cycle. This is due to a direct effect of light and not to changes associated with circadian rhythm. A novel finding is that the light-repressed expression of the AS1 gene is as dramatic in nonphotosynthetic organs such as roots as it is in leaves. Experiments demonstrate that the small amount of light which passes through the soil is sufficient to repress AS1 expression in roots, indicating that light has a direct effect on AS1 gene expression in roots. The negative regulation of AS gene expression by light was shown to be a general phenomenon in plants which also occurs in nonlegumes such as Nicotiana plumbaginifolia and Nicotiana tabacum. Thus, the AS genes can serve as a model with which to dissect the molecular basis for light-regulated transcriptional repression in plants. PMID:1681424
Fuzzy entropy thresholding and multi-scale morphological approach for microscopic image enhancement
NASA Astrophysics Data System (ADS)
Zhou, Jiancan; Li, Yuexiang; Shen, Linlin
2017-07-01
Microscopic images provide a wealth of useful information for modern diagnosis and biological research. However, due to unstable lighting conditions during image capture, two main problems, namely high noise levels and low image contrast, occur in the generated cell images. In this paper, a simple but efficient enhancement framework is proposed to address these problems. The framework removes image noise using a hybrid method based on the wavelet transform and fuzzy entropy, and enhances the image contrast with an adaptive morphological approach. Experiments on a real cell dataset were conducted to assess the performance of the proposed framework. The experimental results demonstrate that the proposed enhancement framework increases the cell tracking accuracy to an average of 74.49%, outperforming the benchmark algorithm (46.18%).
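One standard morphological contrast-enhancement operator, shown here as a sketch using SciPy's flat grey-scale morphology and not necessarily the authors' exact adaptive variant, combines white and black top-hat transforms:

```python
import numpy as np
from scipy import ndimage

# Morphological contrast enhancement via top-hat transforms (a standard
# technique; the paper's adaptive multi-scale variant may differ):
#   enhanced = img + white_tophat(img) - black_tophat(img)
# The white top-hat extracts small bright details, the black top-hat small
# dark details; adding one and subtracting the other boosts local contrast.
def tophat_enhance(img, size=3):
    wth = ndimage.white_tophat(img, size=size)
    bth = ndimage.black_tophat(img, size=size)
    return img + wth - bth

img = np.zeros((21, 21))
img[10, 10] = 1.0                  # an isolated bright "cell" pixel
out = tophat_enhance(img)          # the bright detail is amplified to 2.0
```

A multi-scale version would repeat this with structuring elements of increasing size and combine the responses, so that details of different sizes are enhanced together.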
Full-frame, programmable hyperspectral imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Steven P.; Graff, David L.
A programmable, many-band spectral imager based on addressable spatial light modulators (ASLMs), such as micro-mirror, micro-shutter or liquid-crystal arrays, is described. Capable of collecting at once, without scanning, a complete two-dimensional spatial image with ASLM spectral processing applied simultaneously to the entire image, the invention employs optical assemblies wherein light from all image points is forced to impinge at the same angle onto the dispersing element, eliminating interplay between spatial position and wavelength. This is achieved, as examples, using telecentric optics to image light at the required constant angle, or with micro-optical array structures, such as micro-lens or capillary arrays, that aim the light on a pixel-by-pixel basis. Light of a given wavelength then emerges from the disperser at the same angle for all image points, is collected at a unique location for simultaneous manipulation by the ASLM, then recombined with other wavelengths to form a final spectrally-processed image.
HUBBLE FINDS A BARE BLACK HOLE POURING OUT LIGHT
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Hubble Space Telescope has provided a never-before-seen view of a warped disk flooded with a torrent of ultraviolet light from hot gas trapped around a suspected massive black hole. [Right] This composite image of the core of the galaxy was constructed by combining a visible light image taken with Hubble's Wide Field Planetary Camera 2 (WFPC2), with a separate image taken in ultraviolet light with the Faint Object Camera (FOC). While the visible light image shows a dark dust disk, the ultraviolet image (color-coded blue) shows a bright feature along one side of the disk. Because Hubble sees ultraviolet light reflected from only one side of the disk, astronomers conclude the disk must be warped like the brim of a hat. The bright white spot at the image's center is light from the vicinity of the black hole which is illuminating the disk. [Left] A ground-based telescopic view of the core of the elliptical galaxy NGC 6251. The inset box shows Hubble Space Telescope's field of view. The galaxy is 300 million light-years away in the constellation Ursa Minor. Photo Credit: Philippe Crane (European Southern Observatory), and NASA
QUASAR PG1115+080 AND GRAVITATIONAL LENS
NASA Technical Reports Server (NTRS)
2002-01-01
Left: The light from the single quasar PG 1115+080 is split and distorted in this infrared image. PG 1115+080 is at a distance of about 8 billion light years in the constellation Leo, and it is viewed through an elliptical galaxy lens at a distance of 3 billion light years. The NICMOS frame is taken at a wavelength of 1.6 microns and it shows the four images of the quasar (the two on the left are nearly merging) surrounding the galaxy that causes the light to be lensed. The quasar is a variable light source and the light in each image travels a different path to reach the Earth. The time delay of the variations allows the distance scale to be measured directly. The linear streaks on the image are diffraction artifacts in the NICMOS instrument (NASA/Space Telescope Science Institute). Right: In this NICMOS image, the four quasar images and the lens galaxy have been subtracted, revealing a nearly complete ring of infrared light. This ring is the stretched and amplified starlight of the galaxy that contains the quasar, some 8 billion light years away. (NASA/Space Telescope Science Institute). Credit: Christopher D. Impey (University of Arizona)
Image quality assessment for teledermatology: from consumer devices to a dedicated medical device
NASA Astrophysics Data System (ADS)
Amouroux, Marine; Le Cunff, Sébastien; Haudrechy, Alexandre; Blondel, Walter
2017-03-01
An aging population as well as the growing incidence of type 2 diabetes are driving a growing incidence of chronic skin disorders. In the meantime, a chronic shortage of dermatologists leaves some areas underserved. Remote triage and assistance to homecare nurses (known as "teledermatology") appear to be promising solutions for providing dermatological evaluation in a decent time to patients wherever they live. Nowadays, teledermatology is often based on consumer devices (digital tablets, smartphones, webcams) whose photobiological and electrical safety levels do not match those of medical devices. The American Telemedicine Association (ATA) has published recommendations on quality standards for teledermatology. This "quick guide" does not address the issue of image quality, which is critical in domestic environments where lighting is rarely reproducible. Standardized approaches to image quality would allow clinical trial comparison, calibration, manufacturing quality control and quality assurance during clinical use. Therefore, we defined several critical metrics using calibration charts (color and resolution charts) in order to assess image quality in terms of resolution, lighting uniformity, color repeatability and discrimination of key pairs of colors. Using such metrics, we compared the quality of images produced by several medical devices (handheld and video-dermoscopes) as well as by consumer devices (digital tablets and cameras) widely used in dermatology practice. Since diagnostic accuracy may be impaired by low-quality images, this study highlights that, from an optical point of view, teledermatology should only be performed using medical devices. Furthermore, a dedicated medical device should probably be developed for the follow-up over time of skin lesions often managed in teledermatology, such as chronic wounds, which require i) noncontact imaging of ii) large areas of skin surface, two criteria that cannot be met using dermoscopes.
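Color repeatability and the discrimination of key color pairs can be quantified with a color-difference metric; the CIE76 ΔE*ab shown below is one common choice (the study's exact metric is an assumption here), and the example Lab values are illustrative:

```python
import math

# CIE76 color difference Delta E*ab between two CIELAB colors: the Euclidean
# distance in (L*, a*, b*). A frequently quoted rule of thumb (assumed here)
# is that Delta E around 2.3 corresponds to a just-noticeable difference.
def delta_e76(lab1, lab2):
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Repeatability: Delta E between two captures of the same chart patch
# (illustrative L*, a*, b* values, not measurements from the study).
capture_1 = (52.0, 18.5, -7.0)
capture_2 = (51.5, 19.0, -6.5)
print(f"Delta E = {delta_e76(capture_1, capture_2):.2f}")
```

Repeatability would be summarized as the ΔE between repeated captures of the same patch under nominally identical conditions, and color discrimination as the ΔE preserved between clinically important color pairs after imaging.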
Imaging cellular and subcellular structure of human brain tissue using micro computed tomography
NASA Astrophysics Data System (ADS)
Khimchenko, Anna; Bikis, Christos; Schweighauser, Gabriel; Hench, Jürgen; Joita-Pacureanu, Alexandra-Teodora; Thalmann, Peter; Deyhle, Hans; Osmani, Bekim; Chicherova, Natalia; Hieber, Simone E.; Cloetens, Peter; Müller-Gerbl, Magdalena; Schulz, Georg; Müller, Bert
2017-09-01
Brain tissues have been an attractive subject for investigations in neuropathology, neuroscience, and neurobiology. Nevertheless, existing imaging methodologies have intrinsic limitations in the three-dimensional (3D) label-free visualisation of extended tissue samples down to the (sub)cellular level. For a long time, these morphological features were visualised by electron or light microscopy. In addition to being time-consuming, microscopic investigation includes specimen fixation, embedding, sectioning, staining, and imaging, with the associated artefacts. Moreover, optical microscopy remains hampered by a fundamental limit in spatial resolution that is imposed by the diffraction of the visible light wavefront. In contrast, various tomography approaches do not require complex specimen preparation and can now reach true (sub)cellular resolution. Even laboratory-based micro computed tomography in the absorption-contrast mode of formalin-fixed paraffin-embedded (FFPE) human cerebellum yields an image contrast comparable to conventional histological sections. Data of superior image quality were obtained by means of synchrotron radiation-based single-distance X-ray phase-contrast tomography, enabling the visualisation of non-stained Purkinje cells down to the subcellular level and automated cell counting. The question arises whether the data quality of hard X-ray tomography can be superior to optical microscopy. Herein, we discuss the label-free investigation of human brain ultramorphology by means of synchrotron radiation-based hard X-ray magnified phase-contrast in-line tomography at the nano-imaging beamline ID16A (ESRF, Grenoble, France). As an example, we present images of an FFPE human cerebellum block. Hard X-ray tomography can provide detailed information on human tissues in health and disease with a spatial resolution below the optical limit, improving understanding of neurodegenerative diseases.
Sandison, David R.; Platzbecker, Mark R.; Descour, Michael R.; Armour, David L.; Craig, Marcus J.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector.
Sandison, D.R.; Platzbecker, M.R.; Descour, M.R.; Armour, D.L.; Craig, M.J.; Richards-Kortum, R.
1999-07-27
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector. 8 figs.
Light-sheet enhanced resolution of light field microscopy for rapid imaging of large volumes
NASA Astrophysics Data System (ADS)
Madrid Wolff, Jorge; Castro, Diego; Arbeláez, Pablo; Forero-Shelton, Manu
2018-02-01
Whole-brain imaging is challenging because it demands microscopes with high temporal and spatial resolution, which are often at odds, especially in the context of large fields of view. We have designed and built a light-sheet microscope with digital micromirror illumination and light-field detection. On the one hand, light sheets provide high resolution optical sectioning on live samples without compromising their viability. On the other hand, light field imaging makes it possible to reconstruct full volumes of relatively large fields of view from a single camera exposure; however, its enhanced temporal resolution comes at the expense of spatial resolution, limiting its applicability. We present an approach to increase the resolution of light field images using DMD-based light sheet illumination. To that end, we develop a method to produce synthetic resolution targets for light field microscopy and a procedure to correct the depth at which planes are refocused with rendering software. We measured the axial resolution as a function of depth and show a three-fold potential improvement with structured illumination, albeit by sacrificing some temporal resolution, also three-fold. This results in an imaging system that may be adjusted to specific needs without having to reassemble and realign it. This approach could be used to image relatively large samples at high rates.
Developments in the recovery of colour in fine art prints using spatial image processing
NASA Astrophysics Data System (ADS)
Rizzi, A.; Parraman, C.
2010-06-01
Printmakers have at their disposal a wide range of colour printing processes. The majority of artists will utilise high quality materials with the expectation that the best materials and pigments will ensure image permanence. However, as many artists have experienced, this is not always the case. Inks, papers and materials can deteriorate over time. Artists and conservators who need to restore colour or tone to a print could benefit from the assistance of spatial colour enhancement tools. This paper studies two collections from the same edition of fine art prints that were made in 1991. The first edition has been kept in an archive and not exposed to light. The second edition has been framed and exposed to light for about 18 years. Previous experiments using colour enhancement methods [9,10] have involved a series of photographs that had been taken under poor or extreme lighting conditions, fine art works, and scanned works. There is a range of colour enhancement methods: Retinex, RSR, ACE, Histogram Equalisation, and Auto Levels, which are described in this paper. In this paper we concentrate on the ACE algorithm and use a range of parameters to process the printed images and describe the results.
Computational imaging of light in flight
NASA Astrophysics Data System (ADS)
Hullin, Matthias B.
2014-10-01
Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.
NASA Astrophysics Data System (ADS)
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude
2018-02-01
Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate numerical aperture (NA) objectives and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, anterior, middle and posterior stroma, and endothelial cells with nuclei. Dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. Cellular-level details in the obtained images, together with the relatively large field of view (FOV) and contactless imaging, make this device a promising candidate for becoming a new tool in ophthalmological diagnostics.
NASA Astrophysics Data System (ADS)
Brachmann, Johannes F. S.; Baumgartner, Andreas; Lenhard, Karim
2016-10-01
The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be performed in an efficient way. The implementation of ISO 9001 standards ensures a traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. In the context of this activity, a procedure for the correction of imaging artifacts, such as those due to stray light, is currently being developed by DLR-IMF. The goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers over the whole 420-2450 nm wavelength range. DLR-IMF owns a Norsk Elektro Optikk HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility of simultaneously acquiring hyperspectral data in two dimensions. Recently, these rather new spectrometers have attracted much interest in the remote sensing community. Different designs are currently used for local-area observation, for example by use of small unmanned aerial vehicles (sUAV). In this context, the CHB's measurement capabilities are currently being extended so that a standard measurement procedure for these new sensors can be implemented.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios due to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
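The threshold-based clustering step described in the abstract can be sketched as follows; the normalization to [0, 1] and the threshold values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def classify_specular_pixels(intensity, specular_thresh=0.7, saturation_thresh=0.98):
    """Cluster specular pixels into 'unsaturated' and 'saturated' categories
    with a simple threshold strategy. `intensity` is a float image
    normalized to [0, 1]; both thresholds are hypothetical values."""
    specular = intensity >= specular_thresh      # candidate specular pixels
    saturated = intensity >= saturation_thresh   # sensor-clipped highlights
    unsaturated = specular & ~saturated          # bright but not clipped
    return unsaturated, saturated
```

In the paper's pipeline, the unsaturated category would then be recovered via multi-view color variance analysis and the saturated category via local color refinement.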
NASA Astrophysics Data System (ADS)
Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.
2018-02-01
While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope to work in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and proper modelling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope, called a salpingoscope, that provides real-time detection of carcinomas. The models evaluate the endoscope's illumination and light collection capabilities across the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum, for navigating the endoscope to areas of interest and for narrowband imaging. Different tests, such as measuring the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models properly. Light collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light collection levels and provides a detectable signal for the intended imaging.
Surgical guidance system using hand-held probe with accompanying positron coincidence detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majewski, Stanislaw; Weisenberger, Andrew G.
A surgical guidance system offering different levels of imaging capability while maintaining the convenient small size and light weight of hand-held intra-operative probes. The surgical guidance system includes a second detector, typically an imager, located behind the area of surgical interest to form a coincidence guidance system with the hand-held probe. This approach is focused on the detection of positron-emitting biomarkers via the gamma rays accompanying positron emission from the radiolabeled nuclei.
Spectral imaging using consumer-level devices and kernel-based regression.
Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko
2016-06-01
Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using Nikon D80 RGB camera and Dell Vostro 2520 laptop screen as a light source. Estimations were performed without spectral characteristics of the devices and by emphasizing simplicity for training sets and estimation model optimization. Spectral and color error images are shown for the estimations using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models have markedly improved accuracy with respect to the DE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) from a weak training set (Digital ColorChecker SG) case when small representative training data were used in the estimation.
Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang
2016-01-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312
Menon, Nadia; White, David; Kemp, Richard I
2015-01-01
According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching.
Shapiro, Jeffrey H.; Venkatraman, Dheera; Wong, Franco N. C.
2013-01-01
Ragy and Adesso argue that quantum discord is involved in the formation of a pseudothermal ghost image. We show that quantum discord plays no role in spatial light modulator ghost imaging, i.e., ghost-image formation based on structured illumination realized with laser light that has undergone spatial light modulation by the output from a pseudorandom number generator. Our analysis thus casts doubt on the degree to which quantum discord is necessary for ghost imaging. PMID:23673426
NASA Astrophysics Data System (ADS)
Stoyanov, Stiliyan; Mardirossian, Garo
2012-10-01
Light diffraction is an especially important characteristic of telescope apparatuses, influencing the contrast of the image recorded by the observer's eye. The task of this investigation is to determine to what degree the coefficient of light diffraction influences the recorded image brightness. The object of the theoretical research is experimental results obtained from a telescope system experiment during observation of remote objects with different background brightness, at fixed light diffraction coefficients and constant contrast of the background with respect to the object. The obtained values and the ratio of image contrast to light diffraction coefficient are shown graphically. It is established that as the background brightness increases at constant background contrast with respect to the object, the image contrast decreases sharply. The relationship between the increase of the light diffraction coefficient and the decrease of the brightness of the image projected by telescope apparatuses can be observed.
Deformability analysis of sickle blood using ektacytometry.
Rabai, Miklos; Detterich, Jon A; Wenby, Rosalinda B; Hernandez, Tatiana M; Toth, Kalman; Meiselman, Herbert J; Wood, John C
2014-01-01
Sickle cell disease (SCD) is characterized by decreased erythrocyte deformability, microvessel occlusion and severe painful infarctions of different organs. Ektacytometry of SCD red blood cells (RBC) is made difficult by the presence of rigid, poorly-deformable irreversibly sickled cells (ISC) that do not align with the fluid shear field and distort the elliptical diffraction pattern seen with normal RBC. In operation, the computer software fits an outline to the diffraction pattern, then reports an elongation index (EI) at each shear stress based on the length and width of the fitted ellipse: EI=(length-width)/(length+width). Using a commercial ektacytometer (LORCA, Mechatronics Instruments, The Netherlands) we have approached the problem of ellipse fitting in two ways: (1) altering the height of the diffraction image on a computer monitor using an aperture within the camera lens; (2) altering the light intensity level (gray level) used by the software to fit the image to an elliptical shape. Neither of these methods affected deformability results (elongation index-shear stress relations) for normal RBC but did markedly affect results for SCD erythrocytes: (1) decreasing image height by 15% and 30% increased EI at moderate to high stresses; (2) progressively increasing the light level increased EI over a wide range of stresses. Fitting data obtained at different image heights using the Lineweaver-Burke routine yielded percentage ISC results in good agreement with microscopic cell counting. We suggest that these two relatively simple approaches allow minimizing artifacts due to the presence of rigid discs or ISC and also suggest the need for additional studies to evaluate the physiological relevance of deformability data obtained via these methods.
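The elongation index reported by the ektacytometer software follows directly from the fitted ellipse dimensions; a minimal sketch of the formula stated in the abstract:

```python
def elongation_index(length, width):
    """Elongation index of the fitted diffraction ellipse at a given
    shear stress: EI = (length - width) / (length + width).
    EI approaches 0 for a circular pattern (rigid cells such as ISCs)
    and grows toward 1 as cells elongate in the shear field."""
    return (length - width) / (length + width)
```

For example, an ellipse three times as long as it is wide gives EI = 0.5, while a circular pattern gives EI = 0; this makes clear why rigid irreversibly sickled cells, which stay circular, pull the reported EI down.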
Flor-Henry, Michel; McCabe, Tulene C; de Bruxelles, Guy L; Roberts, Michael R
2004-01-01
Background All living organisms emit spontaneous low-level bioluminescence, which can be increased in response to stress. Methods for imaging this ultra-weak luminescence have previously been limited by the sensitivity of the detection systems used. Results We developed a novel configuration of a cooled charge-coupled device (CCD) for 2-dimensional imaging of light emission from biological material. In this study, we imaged photon emission from plant leaves. The equipment allowed short integration times for image acquisition, providing high resolution spatial and temporal information on bioluminescence. We were able to carry out time course imaging of both delayed chlorophyll fluorescence from whole leaves, and of low level wound-induced luminescence that we showed to be localised to sites of tissue damage. We found that wound-induced luminescence was chlorophyll-dependent and was enhanced at higher temperatures. Conclusions The data gathered on plant bioluminescence illustrate that the equipment described here represents an improvement in 2-dimensional luminescence imaging technology. Using this system, we identify chlorophyll as the origin of wound-induced luminescence from leaves. PMID:15550176
Flight Calibration of the LROC Narrow Angle Camera
NASA Astrophysics Data System (ADS)
Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.
2016-04-01
Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
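The order of corrections described for the pipeline can be sketched as below; `nonlin_offset` is a hypothetical stand-in for the mission's signal-dependent correction (the real calibration data are not reproduced here), and the function name and argument layout are our own:

```python
import numpy as np

def calibrate_nac_frame(raw_dn, background, flatfield, nonlin_offset):
    """Illustrative sketch of the correction order described for the NAC
    pipeline: background subtraction, a signal-dependent additive
    nonlinearity correction applied only below DN = 600 (the detector is
    linear for DN = 600-2000), then flatfield division."""
    dn = raw_dn.astype(float) - background    # removes the background pattern
    low = dn < 600.0                          # only low signals need correction
    dn[low] += nonlin_offset(dn[low])         # signal-dependent additive term
    return dn / flatfield                     # pixel-to-pixel response
```

The point of the sketch is the ordering: the additive nonlinearity fix must act on background-subtracted DN, before flatfielding scales the pixels.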
[Fundus Autofluorescence Imaging].
Schmitz-Valckenberg, S
2015-09-01
Fundus autofluorescence (FAF) imaging allows for non-invasive mapping of changes at the level of the retinal pigment epithelium/photoreceptor complex and of alterations of macular pigment distribution. This imaging method is based on the visualisation of intrinsic fluorophores and may be easily and rapidly used in routine patient care. Main applications include degenerative disorders of the outer retina such as age-related macular degeneration, hereditary and acquired retinal diseases. FAF imaging is particularly helpful for differential diagnosis, detection and extent of involved retinal areas, structural-functional correlations and monitoring of changes over time. Recent developments include - in addition to the original application of short wavelength light for excitation ("blue" FAF imaging) - the use of other wavelength ranges ("green" or "near-infrared" FAF imaging), widefield imaging for visualisation of peripheral retinal areas and quantitative FAF imaging.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
ARIES: Enabling Visual Exploration and Organization of Art Image Collections.
Crissaff, Lhaylla; Wood Ruby, Louisa; Deutch, Samantha; DuBois, R Luke; Fekete, Jean-Daniel; Freire, Juliana; Silva, Claudio
2018-01-01
Art historians have traditionally used physical light boxes to prepare exhibits or curate collections. On a light box, they can place slides or printed images, move the images around at will, group them as desired, and visually compare them. The transition to digital images has rendered this workflow obsolete. Now, art historians lack well-designed, unified interactive software tools that effectively support the operations they perform with physical light boxes. To address this problem, we designed ARIES (ARt Image Exploration Space), an interactive image manipulation system that enables the exploration and organization of fine digital art. The system allows images to be compared in multiple ways, offering dynamic overlays analogous to a physical light box, and supporting advanced image comparisons and feature-matching functions, available through computational image processing. We demonstrate the effectiveness of our system in supporting art historians' tasks through real use cases.
Real-time intraoperative fluorescence imaging system using light-absorption correction.
Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis
2009-01-01
We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images with respect to light intensity variation in tissues. The implementation is based on the use of three cameras operating in parallel with a common lens, which allows for the concurrent collection of color, fluorescence, and light attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
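The ratio correction at the heart of the scheme is simple to state; a minimal sketch, in which the epsilon guard against division by zero in dark regions is our addition, not part of the described system:

```python
import numpy as np

def corrected_fluorescence(fluo, attenuation, eps=1e-6):
    """Ratio-based correction: divide the epi-illumination fluorescence
    image by the light-attenuation image recorded at the excitation
    wavelength, reducing intensity artifacts from tissue absorption."""
    return fluo / np.maximum(attenuation, eps)
```

Because both images are recorded through a common lens from the same field of view, the division can be applied pixel-by-pixel without registration.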
Imaging skeletal muscle with linearly polarized light
NASA Astrophysics Data System (ADS)
Li, X.; Ranasinghesagara, J.; Yao, G.
2008-04-01
We developed a polarization-sensitive imaging system that can acquire reflectance images in turbid samples using incident light of different polarization states. Using this system, we studied polarization imaging of bovine sternomandibularis muscle strips using light of two orthogonal linear polarization states. We found that the obtained polarization-sensitive reflectance images had interesting patterns depending on the polarization states. In addition, we computed four elements of the Mueller matrix from the acquired images. As a comparison, we also obtained polarization images of a 20% Intralipid solution and compared the results with those from muscle samples. We found that the polarization imaging patterns from the Intralipid solution can be described with a model based on the single-scattering approximation. However, the polarization images in muscle had distinct patterns and cannot be explained by this simple model. These results imply that the unique structural properties of skeletal muscle play important roles in modulating the propagation of polarized light.
2010-03-29
On the left, this image shows a visible-light mosaic of Mimas assembled from previous flybys by the imaging science subsystem of NASA's Cassini spacecraft. The right-hand image shows new infrared temperature data mapped on top of the visible-light image.
Distance measurement based on light field geometry and ray tracing.
Chen, Yanqin; Jin, Xin; Dai, Qionghai
2017-01-09
In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed geometric optical model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretical sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to object space, the relationships between objects' imaging diameters and the corresponding distances of object planes are determined. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.
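As a simplified stand-in for the paper's thick-lens ray tracing, the underlying idea of recovering an object-plane distance from image-side geometry can be illustrated with the thin-lens relation; this is an assumption for illustration, not the authors' model:

```python
def object_distance_thin_lens(focal_length, image_distance):
    """Solve the thin-lens relation 1/f = 1/d_o + 1/d_i for the object
    distance d_o, given the focal length f and the image-side distance
    d_i at which the point refocuses. All quantities share one unit."""
    return 1.0 / (1.0 / focal_length - 1.0 / image_distance)
```

In a plenoptic camera the image-side distance is itself inferred from the measured imaging diameter via refocusing and similar triangles, which is the part the paper's two sub-models formalize.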
In vivo dark-field imaging of the retinal pigment epithelium cell mosaic
Scoles, Drew; Sulai, Yusufu N.; Dubra, Alfredo
2013-01-01
Non-invasive reflectance imaging of the human RPE cell mosaic is demonstrated using a modified confocal adaptive optics scanning light ophthalmoscope (AOSLO). The confocal circular aperture in front of the imaging detector was replaced with a combination of a circular aperture 4 to 16 Airy disks in diameter and an opaque filament, 1 or 3 Airy disks thick. This arrangement reveals the RPE cell mosaic by dramatically attenuating the light backscattered by the photoreceptors. The RPE cell mosaic was visualized in all 7 recruited subjects at multiple retinal locations with varying degrees of contrast and cross-talk from the photoreceptors. Various experimental settings were explored for improving the visualization of the RPE cell boundaries including: pinhole diameter, filament thickness, illumination and imaging pupil apodization, unmatched imaging and illumination focus, wavelength and polarization. None of these offered an obvious path for enhancing image contrast. The demonstrated implementation of dark-field AOSLO imaging using 790 nm light requires low light exposures relative to light safety standards and it is more comfortable for the subject than the traditional autofluorescence RPE imaging with visible light. Both these factors make RPE dark-field imaging appealing for studying mechanisms of eye disease, as well as a clinical tool for screening and monitoring disease progression. PMID:24049692
Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy
Cha, Jae Won; Ballesta, Jerome; So, Peter T.C.
2010-01-01
The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration. PMID:20799824
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
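A minimal sketch of such an observation model and its regularized inversion, under stated assumptions: integer subpixel shifts, block-average decimation, and Tikhonov smoothing solved by gradient descent (the paper's blur model and magnification ratio of 8 are not reproduced):

```python
import numpy as np

def decimate(img, r):
    """Downsample by averaging r-by-r blocks (sensor integration model)."""
    h, w = img.shape
    return img[:h - h % r, :w - w % r].reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def sr_reconstruct(frames, shifts, r, n_iter=200, step=0.5, lam=0.01):
    """Least-squares multiframe super resolution with Tikhonov smoothing.
    frames[k] is a low-res image observed after shifting the high-res
    scene by shifts[k] (integer HR pixels) and decimating by r."""
    x = np.kron(np.mean(frames, axis=0), np.ones((r, r)))  # upsampled init
    for _ in range(n_iter):
        grad = lam * x
        for y, (dy, dx) in zip(frames, shifts):
            xs = np.roll(x, (dy, dx), axis=(0, 1))
            resid = decimate(xs, r) - y
            up = np.kron(resid, np.ones((r, r))) / (r * r)  # adjoint of decimate
            grad += np.roll(up, (-dy, -dx), axis=(0, 1))
        x -= step * grad
    return x
```

With one frame per subpixel phase, the data term alone nearly determines the high-resolution image; the regularizer stabilizes the under-observed frequencies.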
An integrated single- and two-photon non-diffracting light-sheet microscope
NASA Astrophysics Data System (ADS)
Lau, Sze Cheung; Chiu, Hoi Chun; Zhao, Luwei; Zhao, Teng; Loy, M. M. T.; Du, Shengwang
2018-04-01
We describe a fluorescence optical microscope with both single-photon and two-photon non-diffracting light-sheet excitation for large-volume imaging. With a special design accommodating two different wavelength ranges (visible: 400-700 nm and near-infrared: 800-1200 nm), we combine the line-Bessel sheet (LBS, for single-photon excitation) and the scanning Bessel beam (SBB, for two-photon excitation) light sheets in a single microscope setup. For a transparent thin sample where scattering can be ignored, LBS single-photon excitation is the optimal imaging solution. When light scattering becomes significant, as in deep-cell or deep-tissue imaging, we use SBB light-sheet two-photon excitation at a longer wavelength. We achieved nearly identical lateral/axial resolution of about 350/270 nm in both modes. This integrated light-sheet microscope may find wide application in live-cell and live-tissue three-dimensional high-speed imaging.
Color transfer method preserving perceived lightness
NASA Astrophysics Data System (ADS)
Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji
2016-06-01
Color transfer, originally proposed by Reinhard et al., is a method for changing the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness, which is explained by the Helmholtz-Kohlrausch (H-K) effect. This effect is the phenomenon that vivid colors are perceived as brighter than dull colors of the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness also changes even if the physical lightness is preserved. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem caused by color space conversion. The effectiveness of the proposed method is verified through a series of experiments.
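The baseline Reinhard transfer that the method builds on matches per-channel first and second moments between images. A sketch follows, operating directly on float channels and omitting the paper's H-K lightness correction and gamut adjustment (a real implementation also converts RGB to a decorrelated space such as l-alpha-beta first):

```python
import numpy as np

def reinhard_transfer(src, ref):
    """Classic Reinhard et al. color transfer: shift and scale each
    channel of `src` so its mean and standard deviation match those of
    `ref`.  Simplified sketch; color-space conversion, the H-K lightness
    correction, and gamut adjustment from the paper are not included."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[2]):
        s = src[..., c].astype(float)
        r = ref[..., c].astype(float)
        scale = r.std() / (s.std() + 1e-12)   # avoid division by zero
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return out
```

The input and reference images need not share dimensions, since only channel statistics are exchanged.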
White-Light Optical Information Processing and Holography.
1983-05-03
Keywords: white-light processing, white-light holography, image subtraction, image deblurring, coherence requirement, apparent transfer function, source encoding. Work in this period also demonstrated several color image processing capabilities, among them broadband color image deblurring, color image subtraction, and characterization of rainbow holographic aberrations.
Willingness to Pay for a Clear Night Sky: Use of the Contingent Valuation Method
NASA Astrophysics Data System (ADS)
Simpson, Stephanie; Winebrake, J.; Noel-Storr, J.
2006-12-01
A clear night sky is a public good, and as such, government intervention to protect it is feasible and necessary. Light pollution decreases the ability to view the unobstructed night sky and can have biological, human health, energy-related, and scientific consequences. In order for governments to intervene more effectively with light pollution controls (costs), the benefits of light pollution reduction also need to be determined. This project uses the contingent valuation method to place an economic value on one of the benefits of light pollution reduction: aesthetics. Using a willingness-to-pay approach, this study monetizes the value of a clear night sky for students at RIT. Images representing various levels of light pollution were presented to this population as part of a survey. The results of this study may aid local, state, and federal policy makers in making informed decisions regarding light pollution.
Sub-cycle light transients for attosecond, X-ray, four-dimensional imaging
NASA Astrophysics Data System (ADS)
Fattahi, Hanieh
2016-10-01
This paper reviews the revolutionary development of ultra-short, multi-TW laser pulse generation made possible by current laser technology. The design of the unified laser architecture discussed in this paper, based on the synthesis of ultrabroadband optical parametric chirped-pulse amplifiers, promises to provide powerful light transients with electromagnetic forces engineerable on the electron time scale. By coherent combination of multiple amplifiers operating in different wavelength ranges, pulses with wavelength spectra extending from less than 1 μm to more than 10 μm, with sub-cycle duration at unprecedented peak and average power levels, can be generated. It is shown theoretically that these light transients enable the efficient generation of attosecond X-ray pulses with photon flux sufficient to image, for the first time, picometre-attosecond trajectories of electrons by means of X-ray diffraction, and to record the electron dynamics by attosecond spectroscopy. The proposed system leads to a tool with sub-atomic spatio-temporal resolution for studying different processes deep inside matter.
Wavelength-scale light concentrator made by direct 3D laser writing of polymer metamaterials.
Moughames, J; Jradi, S; Chan, T M; Akil, S; Battie, Y; Naciri, A En; Herro, Z; Guenneau, S; Enoch, S; Joly, L; Cousin, J; Bruyant, A
2016-10-04
We report on the realization of functional infrared light concentrators based on a thick layer of air-polymer metamaterial with controlled pore size gradients. The design features an optimum gradient index profile leading to light focusing in the Fresnel zone of the structures for two selected operating wavelength domains near 5.6 and 10.4 μm. The metamaterial, which consists of a thick polymer containing air holes with diameters ranging from λ/20 to λ/8, is made using a 3D lithography technique based on the two-photon polymerization of a homemade photopolymer. Infrared imaging of the structures reveals a tight focusing for both structures with a maximum local intensity increase by a factor of 2.5 for a concentrator volume of 1.5 λ³, slightly limited by the residual absorption of the selected polymer. Such porous and flat metamaterial structures offer interesting perspectives to increase infrared detector performance at the pixel level for imaging or sensing applications.
Semiconductor Quantum Dots for Bioimaging and Biodiagnostic Applications
NASA Astrophysics Data System (ADS)
Kairdolf, Brad A.; Smith, Andrew M.; Stokes, Todd H.; Wang, May D.; Young, Andrew N.; Nie, Shuming
2013-06-01
Semiconductor quantum dots (QDs) are light-emitting particles on the nanometer scale that have emerged as a new class of fluorescent labels for chemical analysis, molecular imaging, and biomedical diagnostics. Compared with traditional fluorescent probes, QDs have unique optical and electronic properties such as size-tunable light emission, narrow and symmetric emission spectra, and broad absorption spectra that enable the simultaneous excitation of multiple fluorescence colors. QDs are also considerably brighter and more resistant to photobleaching than are organic dyes and fluorescent proteins. These properties are well suited for dynamic imaging at the single-molecule level and for multiplexed biomedical diagnostics at ultrahigh sensitivity. Here, we discuss the fundamental properties of QDs; the development of next-generation QDs; and their applications in bioanalytical chemistry, dynamic cellular imaging, and medical diagnostics. For in vivo and clinical imaging, the potential toxicity of QDs remains a major concern. However, the toxic nature of cadmium-containing QDs is no longer a factor for in vitro diagnostics, so the use of multicolor QDs for molecular diagnostics and pathology is probably the most important and clinically relevant application for semiconductor QDs in the immediate future.
NASA Technical Reports Server (NTRS)
Balasubramanian, Kunjithapatha; White, Victor; Yee, Karl; Echternach, Pierre; Muller, Richard; Dickie, Matthew; Cady, Eric; Mejia Prada, Camilo; Ryan, Daniel; Poberezhskiy, Ilya;
2015-01-01
Star light suppression technologies to find and characterize faint exoplanets include internal coronagraph instruments as well as external star shade occulters. Currently, the NASA WFIRST-AFTA mission study includes an internal coronagraph instrument to find and characterize exoplanets. Various types of masks could be employed to suppress the host star light to a contrast level of about 10⁻⁹ over a broad spectrum to enable the coronagraph mission objectives. Such masks for high contrast internal coronagraphic imaging require various fabrication technologies to meet a wide range of specifications, including precise shapes, micron scale island features, ultra-low reflectivity regions, uniformity, wave front quality, achromaticity, etc. We present the approaches employed at JPL to produce pupil plane and image plane coronagraph masks by combining electron beam, deep reactive ion etching, and black silicon technologies with illustrative examples of each, highlighting milestone accomplishments from the High Contrast Imaging Testbed (HCIT) at JPL and from the High Contrast Imaging Lab (HCIL) at Princeton University. We also briefly present the technologies applied to fabricate laboratory scale star shade masks.
The Propeller Belts in Saturn's A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
Preliminary study of the reliability of imaging charge coupled devices
NASA Technical Reports Server (NTRS)
Beall, J. R.; Borenstein, M. D.; Homan, R. A.; Johnson, D. L.; Wilson, D. D.; Young, V. F.
1978-01-01
Imaging CCDs are capable of low light level response and high signal-to-noise ratios. In space applications they offer the user the ability to achieve extremely high resolution imaging with minimal circuitry in the photosensor array. This work relates the Fairchild CCD121H device to the fundamentals of CCDs and representative technologies. Several failure modes are described, the construction is analyzed, and test results are reported. In addition, the relationship of device reliability to packaging principles is analyzed and test data are presented. Finally, a test program is defined for a more general reliability evaluation of CCDs.
Model analysis for the MAGIC telescope
NASA Astrophysics Data System (ADS)
Mazin, D.; Bigongiari, C.; Goebel, F.; Moralejo, A.; Wittek, W.
The MAGIC Collaboration operates the 17 m imaging Cherenkov telescope on the Canary island of La Palma. The main goal of the experiment is an energy threshold below 100 GeV for primary gamma rays. The new analysis technique (model analysis) takes advantage of the high-resolution (both in space and time) camera by fitting averaged expected templates of the shower development to the measured shower images in the camera. This approach makes it possible to recognize and reconstruct images just above the level of the night sky background light fluctuations. Progress and preliminary results of the model analysis technique are presented.
1981-01-01
Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777
Elimination of coherent noise in a coherent light imaging system
NASA Technical Reports Server (NTRS)
Grebowsky, G. J.; Hermann, R. L.; Paull, H. B.; Shulman, A. R.
1970-01-01
Optical imaging systems using coherent light introduce objectionable noise into the output image plane. Dust and bubbles on and in lenses cause most of the noise in the output image. This noise usually appears as bull's-eye diffraction patterns in the image. By rotating the lens about the optical axis these diffraction patterns can be essentially eliminated. The technique does not destroy the spatial coherence of the light and permits spatial filtering of the input plane.
Lord, D.E.; Carter, G.W.; Petrini, R.R.
1983-08-02
A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
Code analysis of the fringe image plays a vital role in data acquisition for structured light systems, affecting the precision, computational speed, and reliability of the measurement process. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed. In this method, a series of projected patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.
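Fringe-order detection from a pattern series can be sketched with a conventional Gray-code scheme. This is an assumption for illustration; the paper's self-normalizing binary code may differ:

```python
import numpy as np

def decode_fringe_order(images, thresholds):
    """Decode a per-pixel fringe order from a stack of structured-light
    images.  `images[i]` is the camera image of the i-th (coarsest-first)
    Gray-code pattern; thresholding each against its per-pixel threshold
    yields one Gray-code bit, and the bits are converted to the binary
    fringe order.  A sketch with conventional Gray coding assumed."""
    bits = [(img > thr).astype(np.uint8) for img, thr in zip(images, thresholds)]
    # Gray -> binary: b_0 = g_0, b_i = b_{i-1} XOR g_i (MSB first)
    order = bits[0].astype(np.int64)
    prev = bits[0]
    for g in bits[1:]:
        prev = prev ^ g
        order = order * 2 + prev
    return order
```

With n patterns, 2**n fringe orders can be distinguished, and the Gray coding guarantees that a single mis-thresholded bit shifts the decoded order by at most one code transition.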
Zhao, Ming; Zhang, Han; Li, Yu; Ashok, Amit; Liang, Rongguang; Zhou, Weibin; Peng, Leilei
2014-01-01
In vivo fluorescent cellular imaging of deep internal organs is highly challenging, because the excitation needs to penetrate through strong scattering tissue and the emission signal is degraded significantly by photon diffusion induced by tissue-scattering. We report that by combining two-photon Bessel light-sheet microscopy with nonlinear structured illumination microscopy (SIM), live samples up to 600 microns wide can be imaged by light-sheet microscopy with 500 microns penetration depth, and diffused background in deep tissue light-sheet imaging can be reduced to obtain clear images at cellular resolution in depth beyond 200 microns. We demonstrate in vivo two-color imaging of pronephric glomeruli and vasculature of zebrafish kidney, whose cellular structures located at the center of the fish body are revealed in high clarity by two-color two-photon Bessel light-sheet SIM. PMID:24876996
Exploring plenoptic properties of correlation imaging with chaotic light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Vaccarelli, Ornella; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena
2017-11-01
In a setup illuminated by chaotic light, we consider different schemes that enable us to perform imaging by measuring second-order intensity correlations. The most relevant feature of the proposed protocols is the ability to perform plenoptic imaging, namely to reconstruct the geometrical path of light propagating in the system, by imaging both the object and the focusing element. This property allows us to encode, in a single data acquisition, both multi-perspective images of the scene and light distribution in different planes between the scene and the focusing element. We unveil the plenoptic property of three different setups, explore their refocusing potentialities and discuss their practical applications.
A laboratory demonstration of the capability to image an Earth-like extrasolar planet.
Trauger, John T; Traub, Wesley A
2007-04-12
The detection and characterization of an Earth-like planet orbiting a nearby star requires a telescope with an extraordinarily large contrast at small angular separations. At visible wavelengths, an Earth-like planet would be 1 × 10⁻¹⁰ times fainter than the star at angular separations of typically 0.1 arcsecond or less. There are several proposed space telescope systems that could, in principle, achieve this. Here we report a laboratory experiment that reaches these limits. We have suppressed the diffracted and scattered light near a star-like source to a level of 6 × 10⁻¹⁰ times the peak intensity in individual coronagraph images. In a series of such images, together with simple image processing, we have effectively reduced this to a residual noise level of about 0.1 × 10⁻¹⁰. This demonstrates that a coronagraphic telescope in space could detect and spectroscopically characterize nearby exoplanetary systems, with the sensitivity to image an 'Earth-twin' orbiting a nearby star.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
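The coefficient-domain fusion idea can be sketched with a one-level Haar lifting transform: average the low-frequency band and keep the larger-magnitude high-frequency coefficient. This is a simple stand-in for the paper's robust PCA and regional-variance rules, not the algorithm itself:

```python
import numpy as np

def _step(a):
    """One 1-D Haar lifting step along the last axis."""
    even, odd = a[..., 0::2].astype(float), a[..., 1::2].astype(float)
    d = odd - even              # predict step
    s = even + d / 2.0          # update step (preserves the local mean)
    return s, d

def _unstep(s, d):
    """Inverse of _step: reinterleave even/odd samples."""
    even = s - d / 2.0
    odd = d + even
    out = np.empty(s.shape[:-1] + (2 * s.shape[-1],))
    out[..., 0::2], out[..., 1::2] = even, odd
    return out

def haar_fuse(a, b):
    """Fuse two same-size images with a one-level 2-D Haar lifting
    transform: average the low-frequency band, and for each
    high-frequency band keep the coefficient of larger magnitude."""
    def fwd(x):
        s, d = _step(x)                       # rows, then columns:
        return [u.T for u in _step(s.T)], [u.T for u in _step(d.T)]
    (ssa, sda), (dsa, dda) = fwd(a)
    (ssb, sdb), (dsb, ddb) = fwd(b)
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    ss = (ssa + ssb) / 2.0                    # low-frequency: average
    sd, ds, dd = pick(sda, sdb), pick(dsa, dsb), pick(dda, ddb)
    s = _unstep(ss.T, sd.T).T                 # inverse lifting
    d = _unstep(ds.T, dd.T).T
    return _unstep(s, d)
```

Fusing an image with itself reconstructs it, which verifies that the lifting pair is a true invertible transform.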
Optimal front light design for reflective displays under different ambient illumination
NASA Astrophysics Data System (ADS)
Wang, Sheng-Po; Chang, Ting-Ting; Li, Chien-Ju; Bai, Yi-Ho; Hu, Kuo-Jui
2011-01-01
The goal of this study is to find the optimal luminance and color temperature of front lights for reflective displays under different ambient illumination by conducting a series of psychophysical experiments. A color- and brightness-tunable front light device with ten LED units was built and calibrated to present 256 luminance levels and 13 different color temperatures at a fixed luminance of 200 cd/m2. The experimental results revealed the best luminance and color temperature settings for human observers under different ambient illuminants, which could also assist e-paper manufacturers in designing front light devices and presenting the best image quality on reflective displays. Furthermore, a similar experiment was conducted using a new flexible e-signage display developed by ITRI, and an optimal front light device for the new display panel was designed and utilized.
NASA Astrophysics Data System (ADS)
Shiramizu, Hideyuki; Kuroda, Chiaki; Ohki, Yoshimichi; Shima, Takayuki; Wang, Xiaomin; Fujimaki, Makoto
2018-03-01
We have developed an optical disk system for imaging transmitted light from Escherichia coli dispersed on an optical disk. When E. coli was stained using Bismarck brown, the transmittance was found to decrease in images obtained at λ = 405 nm. The results indicate that transmittance imaging is suitable for finding the difference in light intensity between stained and unstained E. coli, whereas the reflectance images were scarcely changed by staining. Therefore, E. coli can be selectively discriminated from abiotic contaminants using transmittance imaging.
High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina
Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng
2010-01-01
A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743
Design of light-small high-speed image data processing system
NASA Astrophysics Data System (ADS)
Yang, Jinbao; Feng, Xue; Li, Fei
2015-10-01
A compact, lightweight, high-speed image data processing system was designed to meet the requirements of image data processing in aerospace. The system is built from an FPGA, a DSP, and an MCU (microcontroller), implementing video compression of 3-megapixel images at 15 frames per second and real-time return of the compressed images to the host system. The programmability of the FPGA, a high-performance image compression IC, and a configurable MCU were fully exploited to improve integration. In addition, a rigid-flex board design was introduced and the PCB layout was optimized. As a result, the system achieved miniaturization, light weight, and fast heat dissipation. Experiments show that the system's functions were designed correctly and operate stably. In conclusion, the system can be widely used in compact, lightweight imaging applications.
Intelligent correction of laser beam propagation through turbulent media using adaptive optics
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2014-10-01
Adaptive optics methods have long been used by researchers in the astronomy field to retrieve correct images of celestial bodies. The approach is to use a deformable mirror combined with Shack-Hartmann sensors to correct the slightly distorted image as it propagates through the earth's atmospheric boundary layer, which can be viewed as adding relatively weak distortion in the last stage of propagation. However, the same strategy cannot easily be applied to correct images propagating along a horizontal deep-turbulence path. In fact, when turbulence levels become very strong (Cn² > 10⁻¹³ m⁻²/³), limited improvements have been made in correcting the heavily distorted images. We propose a method that reconstructs the light field reaching the camera, which then provides information for controlling a deformable mirror. An intelligent algorithm is applied that provides significant improvement in correcting images. In our work, the light field reconstruction has been achieved with a newly designed modified plenoptic camera. As a result, by actively intervening with the coherent illumination beam, or by giving it various specific pre-distortions, a better (less turbulence-affected) image can be obtained. This strategy can also be extended to more general applications, such as correcting laser propagation through random media, and can help to improve designs for free-space optical communication systems.
Non-Destructive Inspection by Infrared Imaging Spectroscopy. Phase 1
1994-10-14
light, the AOTF through its birefringence, the LCD through an entrance polarizer necessary for proper operation. Hence, these devices provide a fixed...through an atmosphere of transmission Ta(λ), but without any spectral filtering...dependent on the signal level. Atmospheric transmission is assumed to be from a three-meter path at sea level. Target radiance comprises
Using DMSP/OLS nighttime imagery to estimate carbon dioxide emission
NASA Astrophysics Data System (ADS)
Desheng, B.; Letu, H.; Bao, Y.; Naizhuo, Z.; Hara, M.; Nishio, F.
2012-12-01
This study presents a method for estimating CO2 emissions from electric power plants using the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) stable light image product for 1999. CO2 emissions from power plants account for a high percentage of CO2 emissions from fossil fuel consumption. Thermal power plants generate electricity by burning fossil fuels, so they emit CO2 directly. In many Asian countries, such as China, Japan, India, and South Korea, thermal power accounted for over 58% of total electric power generation in 1999. So far, CO2 emission figures have been obtained mainly by traditional statistical methods. Moreover, because the statistical data are summarized by administrative region, it is difficult to examine the spatial distribution across non-administrative divisions, and in some countries the reliability of such CO2 emission data is relatively low. Satellite remote sensing, however, can observe the earth's surface without the limitation of administrative regions, so it is valuable for estimating CO2 emissions. In this study, we estimated CO2 emissions from fossil fuel consumption by electric power plants in Japan using the stable light image of the DMSP/OLS satellite data for 1999 after correcting for the saturation effect. Digital number (DN) values of the stable light images in city centers are saturated due to the large nighttime light intensities and the characteristics of the OLS sensor. To estimate CO2 emissions more accurately from the stable light images, a saturation correction method was developed using the DMSP radiance calibration image, which contains no saturated pixels. A regression equation was fitted to the relationship between DN values of non-saturated pixels in the stable light image and those in the radiance calibration image, and this equation was used to adjust the DNs of the radiance calibration image.
The saturated DNs of the stable light image were then corrected using the adjusted radiance calibration image. After that, regression analyses were performed among the cumulative DNs of the corrected stable light image, electric power consumption, electric power generation, and CO2 emissions from fossil fuel consumption by electric power plants. Results indicated good relationships (R2 > 90%) between the DNs of the corrected stable light image and the other parameters. Based on these results, we estimated CO2 emissions from electric power plants using the corrected stable light image. Keywords: DMSP/OLS, stable light, saturation light correction method, regression analysis. Acknowledgment: The research was financially supported by the Sasakawa Scientific Research Grant from the Japan Science Society.
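The saturation-correction step described in this record can be sketched as a simple linear regression between co-located digital numbers. This is an illustrative reconstruction, not the authors' code; the 6-bit saturation ceiling of 63 and the sample DN values are assumptions.

```python
SATURATION_DN = 63  # assumed 6-bit ceiling of OLS stable-light DNs

def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def correct_saturation(stable_dns, calib_dns):
    """Replace saturated stable-light DNs via the calibration image."""
    # fit on non-saturated pixels: stable DN as a function of calibration DN
    pairs = [(c, s) for s, c in zip(stable_dns, calib_dns)
             if s < SATURATION_DN]
    a, b = fit_linear([c for c, _ in pairs], [s for _, s in pairs])
    # adjust only the saturated pixels
    return [a * c + b if s >= SATURATION_DN else s
            for s, c in zip(stable_dns, calib_dns)]

stable = [10, 30, 50, 63, 63]   # hypothetical DNs; last two saturated
calib = [12, 33, 55, 90, 120]   # co-located radiance-calibration DNs
corrected = correct_saturation(stable, calib)
```

Non-saturated pixels pass through unchanged; saturated pixels are replaced with regression-predicted values that can exceed the DN ceiling, restoring the intensity gradient in city centers.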
Letu, Husi; Hara, Masanao; Tana, Gegen; Bao, Yuhai; Nishio, Fumihiko
2015-09-01
Nighttime lights of human settlements (hereafter, "stable lights") are seen as a valuable proxy for socioeconomic activity and greenhouse gas emissions at the subnational level. In this study, we propose an improved method to generate stable lights from Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) daily nighttime light data for 1999. The study area includes Japan, China, India, and 10 other countries in East Asia. A noise reduction filter (NRF) was employed to generate a stable light from the DMSP/OLS time-series daily nighttime light data. It was found that the resulting stable light contains noise from the amplitude of a 1-year periodic component. To remove this noise, the NRF method was improved to extract the periodic component, and a new stable light was generated by removing the amplitude of the 1-year periodic component with the improved NRF method. The resulting stable light was evaluated by comparing it with the conventional nighttime stable light provided by the National Oceanic and Atmospheric Administration/National Geophysical Data Center (NOAA/NGDC). DNs of the NOAA stable light image are lower than those of the new stable light image, which might be attributable to attenuation by thin warm water clouds; at the same time, owing to the overglow effect of thin clouds, the lit area in the new stable light is larger than in the NOAA stable light. Furthermore, the cumulative digital numbers (CDNs) and number of light area pixels (NLAP) of the generated stable light and the NOAA/NGDC stable light were applied to estimate the socioeconomic variables of population, electric power consumption (EPC), gross domestic product (GDP), and CO2 emissions from fossil fuel consumption (CO2FF).
It is shown that the correlations of population and CO2FF with the new stable light data are higher than those with the NOAA stable light data, whereas the correlations of EPC and GDP with the NOAA stable light data are higher than those with the new stable light data.
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo-color imaging scheme, named pseudo-color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Whereas conventional pseudo-color imaging has no nondegenerate-wavelength spatial correlations that would yield extra monochromatic images, in this scheme the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. The scheme thus obtains a more colorful image with higher quality than conventional pseudo-color coding techniques. More importantly, a significant advantage over conventional pseudo-color coding imaging is that images with different colors can be obtained without changing the light source or the spatial filter.
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Narendran, Nadarajah; Tan, Jianchuan; Mou, Xi
2014-09-01
The organic light-emitting diode (OLED) has demonstrated its novelty in displays and certain lighting applications. Like white light-emitting diode (LED) technology, it also holds the promise of saving energy. Even though the luminous efficacy values of OLED products have been growing steadily, their longevity is still not well understood. Furthermore, there is currently no industry standard for short- and long-term photometric and colorimetric testing of OLEDs; each OLED manufacturer tests its panels under different electrical and thermal conditions using different measurement methods. In this study, an imaging-based photometric and colorimetric measurement method for OLED panels was investigated. Unlike an LED, which can be considered a point source, the OLED is a large-area source. For an area source to satisfy lighting application needs, it is important that it maintain uniform light level and color properties across the emitting surface of the panel over a long period. This study intended to develop a measurement procedure for testing the long-term photometric and colorimetric properties of OLED panels. The objective was to better understand how test parameters such as drive current or luminance and temperature affect the degradation rate. In addition, this study investigated whether data interpolation could allow determination of degradation and lifetime, L70, at application conditions based on the degradation rates measured at different operating conditions.
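Since the record discusses extrapolating degradation data to an L70 lifetime, a common approach, shown here purely as an illustrative sketch and not necessarily the study's method, is to fit an exponential decay to luminance-maintenance data and solve for the 70% crossing. The maintenance data below are synthetic.

```python
import math

def fit_exponential_decay(times, luminances):
    """Log-linear least squares for L(t) = B * exp(-alpha * t)."""
    logs = [math.log(l) for l in luminances]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    stt = sum((t - mt) ** 2 for t in times)
    stl = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    slope = stl / stt                      # equals -alpha
    return -slope, math.exp(ml - slope * mt)

def l70_hours(alpha, b):
    """Hours until luminance falls to 70% of its initial value."""
    return math.log(b / 0.70) / alpha

# synthetic maintenance data (relative luminance vs. operating hours)
hours = [0, 1000, 2000, 3000, 4000]
rel_lum = [1.00, 0.97, 0.941, 0.913, 0.885]
alpha, b = fit_exponential_decay(hours, rel_lum)
lifetime = l70_hours(alpha, b)   # extrapolated L70, in hours
```

Fitting at several drive currents and temperatures, as the study describes, would yield one decay rate per operating condition, from which the L70 at application conditions could be interpolated.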
Exploring Algorithms for Stellar Light Curves With TESS
NASA Astrophysics Data System (ADS)
Buzasi, Derek
2018-01-01
The Kepler and K2 missions have produced tens of thousands of stellar light curves, which have been used to measure rotation periods, characterize photometric activity levels, and explore phenomena such as differential rotation. The quasi-periodic nature of rotational light curves, combined with the potential presence of additional periodicities not due to rotation, complicates the analysis of these time series and makes characterization of uncertainties difficult. A variety of algorithms have been used for the extraction of rotational signals, including autocorrelation functions, discrete Fourier transforms, Lomb-Scargle periodograms, wavelet transforms, and the Hilbert-Huang transform. In addition, in the case of K2, a number of different pipelines have been used to produce initial detrended light curves from the raw image frames. In the near future, TESS photometry, particularly that deriving from the full-frame images (FFIs), will further dramatically expand the number of such light curves, but details of the pipeline to be used to produce photometry from the FFIs remain under development. K2 data offer us an opportunity to explore the utility of different combinations of reduction and analysis tools applied to these astrophysically important tasks. In this work, we apply a wide range of algorithms to light curves produced by a number of popular K2 pipeline products to better understand the advantages and limitations of each approach and to provide guidance for the most reliable and efficient analysis of TESS stellar data.
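The period-extraction task this record describes can be illustrated with a minimal classical periodogram; this is a didactic sketch on a synthetic light curve, not one of the pipelines or algorithms evaluated in the work.

```python
import math

def periodogram_power(t, y, freq):
    """Classical (Schuster) periodogram power at one trial frequency."""
    mean = sum(y) / len(y)
    cs = sum((yi - mean) * math.cos(2 * math.pi * freq * ti)
             for ti, yi in zip(t, y))
    sn = sum((yi - mean) * math.sin(2 * math.pi * freq * ti)
             for ti, yi in zip(t, y))
    return cs * cs + sn * sn

def best_period(t, y, periods):
    """Trial period whose frequency has the highest periodogram power."""
    return max(periods, key=lambda p: periodogram_power(t, y, 1.0 / p))

# synthetic "rotational" light curve: 5-day period over a 27-day baseline
t = [0.02 * i for i in range(1350)]
y = [1.0 + 0.01 * math.sin(2 * math.pi * ti / 5.0) for ti in t]
trial_periods = [0.5 + 0.05 * k for k in range(300)]   # 0.5 to 15.45 days
p_best = best_period(t, y, trial_periods)
```

Real rotational signals are quasi-periodic and unevenly sampled, which is exactly why the work compares this family of estimators against autocorrelation, wavelet, and Hilbert-Huang approaches.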
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sick, Jonathan; Courteau, Stéphane; Cuillandre, Jean-Charles
We present wide-field near-infrared J and Ks images of the Andromeda Galaxy (M31) taken with WIRCam at the Canada-France-Hawaii Telescope as part of the Andromeda Optical and Infrared Disk Survey. This data set allows simultaneous observations of resolved stars and near-infrared (NIR) surface brightness across M31's entire bulge and disk (within R = 22 kpc), permitting a direct test of the stellar composition of near-infrared light in a nearby galaxy. Here we develop NIR observation and reduction methods to recover a uniform surface brightness map across the 3° × 1° disk of M31 with 27 WIRCam fields. Two sky-target nodding strategies are tested, and we find that strictly minimizing sky sampling latency cannot improve background subtraction accuracy to better than 2% of the background level due to spatio-temporal variations in the NIR skyglow. We fully describe our WIRCam reduction pipeline and advocate using flats built from night-sky images over a single night, rather than dome flats that do not capture the WIRCam illumination field. Contamination from scattered light and thermal background in sky flats has a negligible effect on the surface brightness shape compared to the stochastic differences in background shape between sky and galaxy disk fields, which are ∼0.3% of the background level. The most dramatic calibration step is the introduction of scalar sky offsets to each image that optimize surface brightness continuity. Sky offsets reduce the mean surface brightness difference between observation blocks from 1% to <0.1% of the background level, though the absolute background level remains statistically uncertain to 0.15% of the background level. We present our WIRCam reduction pipeline and performance analysis to give specific recommendations for the improvement of NIR wide-field imaging methods.
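The scalar sky-offset idea can be sketched as a small least-squares problem: given the median background differences measured in the overlaps between adjacent fields, find one additive offset per field that minimizes the residual discontinuities. The relaxation solver and the overlap values below are illustrative assumptions, not the survey's pipeline.

```python
def solve_sky_offsets(n_fields, overlaps, n_iter=500):
    """overlaps: (i, j, d) triples with d = median background of field i
    minus field j in their overlap; returns one offset per field."""
    offsets = [0.0] * n_fields
    for _ in range(n_iter):
        for i, j, d in overlaps:
            r = d + offsets[i] - offsets[j]   # residual discontinuity
            offsets[i] -= r / 2               # split the correction
            offsets[j] += r / 2               # between the two fields
        mean = sum(offsets) / n_fields        # the absolute level is free,
        offsets = [o - mean for o in offsets] # so pin the mean to zero
    return offsets

# three fields in a row; the middle field sits 0.8 counts high
overlaps = [(0, 1, -0.8), (1, 2, 0.8)]
offsets = solve_sky_offsets(3, overlaps)
residual_01 = -0.8 + offsets[0] - offsets[1]  # discontinuity after offsets
```

Only relative offsets are constrained by the overlaps, which mirrors the record's caveat that the absolute background level remains statistically uncertain even after continuity is optimized.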
Hodson, Nicholas A; Dunne, Stephen M; Pankhurst, Caroline L
2005-04-01
Dental curing lights are vulnerable to contamination with oral fluids during routine intra-oral use. This controlled study aimed to evaluate whether disposable transparent barriers placed over the light-guide tip would affect light output intensity or the subsequent depth of cure of a composite restoration. The impact on light intensity emitted from high-, medium- and low-output light-cure units in the presence of two commercially available disposable infection-control barriers was evaluated against a no-barrier control. Power density measurements from the three light-cure units were recorded with a radiometer, then converted to a digital image using an intra-oral camera, and values were determined using a commercial computer program. For each curing unit, the measurements were repeated on ten separate occasions with each barrier and the control. Depth of cure was evaluated using a scrape test in a natural tooth model. At each level of light output, the two disposable barriers produced a significant reduction in the mean power density readings compared to the no-barrier control (P<0.005). The cure sleeve inhibited light output to a greater extent than either the cling film or the control (P<0.005). Only composite restorations light-activated by the high-output unit demonstrated a small but significant decrease in depth of cure compared to the control (P<0.05). Placing disposable barriers over the light-guide tip reduced the light intensity from all three curing lights. There was no impact on depth of cure except for the high-output light, where a small decrease in cure depth was noted but was not considered clinically significant. Disposable barriers can be recommended for use with dental curing lights.
2010-09-08
NASA's Cassini spacecraft examines Titan's dark and light seasonal hemispheric dichotomy as it images the moon with a filter sensitive to near-infrared light. This image also shows Titan's north polar hood.
Optical nulling apparatus and method for testing an optical surface
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor); Hannon, John J. (Inventor); Dey, Thomas W. (Inventor); Jensen, Arthur E. (Inventor)
2008-01-01
An optical nulling apparatus for testing an optical surface includes an aspheric mirror having a reflecting surface for imaging light near or onto the optical surface under test, where the aspheric mirror is configured to reduce spherical aberration of the optical surface under test. The apparatus includes a light source for emitting light toward the aspheric mirror, the light source longitudinally aligned with the aspheric mirror and the optical surface under test. The aspheric mirror is disposed between the light source and the optical surface under test, and the emitted light is reflected off the reflecting surface of the aspheric mirror and imaged near or onto the optical surface under test. An optical measuring device is disposed between the light source and the aspheric mirror, where light reflected from the optical surface under test enters the optical measuring device. An imaging mirror is disposed longitudinally between the light source and the aspheric mirror, and the imaging mirror is configured to again reflect light, which is first reflected from the reflecting surface of the aspheric mirror, onto the optical surface under test.
Robust reflective ghost imaging against different partially polarized thermal light
NASA Astrophysics Data System (ADS)
Li, Hong-Guo; Wang, Yan; Zhang, Rui-Xue; Zhang, De-Jian; Liu, Hong-Chao; Li, Zong-Guo; Xiong, Jun
2018-03-01
We theoretically study the influence of degree of polarization (DOP) of thermal light on the contrast-to-noise ratio (CNR) of the reflective ghost imaging (RGI), which is a novel and indirect imaging modality. An expression for the CNR of RGI with partially polarized thermal light is carefully derived, which suggests a weak dependence of CNR on the DOP, especially when the ratio of the object size to the speckle size of thermal light has a large value. Different from conventional imaging approaches, our work reveals that RGI is much more robust against the DOP of the light source, which thereby has advantages in practical applications, such as remote sensing.
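The correlation measurement underlying ghost imaging can be sketched with a 1-D toy object and binary speckle patterns; this illustrates only the second-order correlation itself, not the paper's CNR or degree-of-polarization analysis, and all values are synthetic.

```python
import random

random.seed(7)
N, M = 32, 10000                 # object pixels, speckle realizations
obj = [1.0 if 12 <= i < 20 else 0.0 for i in range(N)]  # reflective strip

# binary pseudo-thermal speckle patterns and their bucket (total) signals
patterns = [[1.0 if random.random() < 0.5 else 0.0 for _ in range(N)]
            for _ in range(M)]
buckets = [sum(p[i] * obj[i] for i in range(N)) for p in patterns]

# second-order correlation, pixel by pixel: <I(x) B> - <I(x)><B>
mean_b = sum(buckets) / M
ghost = []
for i in range(N):
    mean_p = sum(p[i] for p in patterns) / M
    corr = sum(p[i] * b for p, b in zip(patterns, buckets)) / M
    ghost.append(corr - mean_p * mean_b)
```

The object emerges from the correlation even though the bucket detector has no spatial resolution, which is what makes the modality indirect; the paper's contribution is how the contrast-to-noise of this estimate depends on the source's degree of polarization.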
Kimura, Takahiro; Hiraoka, Kei; Kasahara, Noriyuki; Logg, Christopher R.
2010-01-01
Background Bioluminescence imaging (BLI) permits the noninvasive quantitation and localization of transduction and expression by gene transfer vectors. The tendency of tissue to attenuate light in the optical region, however, limits the sensitivity of BLI. Improvements in light output from bioluminescent reporter systems would allow the detection of lower levels of expression, smaller numbers of cells and expression from deeper and more attenuating tissues within an animal. Methods With the goal of identifying substrates that allow improved sensitivity with Renilla luciferase (RLuc) and Gaussia luciferase (GLuc) reporter genes, we evaluated native coelenterazine and three of its most promising derivatives in BLI of cultured cells transduced with retroviral vectors encoding these reporters. Of the eight enzyme-substrate pairs tested, the two that performed best were further evaluated in mice to compare their effectiveness for imaging vector-modified cells in live animals. Results In cell culture, we observed striking differences in luminescence levels from the various enzyme-substrate combinations and found that the two luciferases exhibited markedly distinct abilities to generate light with the substrates. The most effective pairs were RLuc with the synthetic coelenterazine derivative ViviRen, and GLuc with native coelenterazine. In animals, these two pairs allowed similar detection sensitivities, which were 8–15 times higher than that of the prototypical RLuc-native coelenterazine combination. Conclusions Our results demonstrate that substrate selection can dramatically influence the detection sensitivity of RLuc and GLuc and that appropriate selection of substrate can greatly improve the performance of reporter genes encoding these enzymes for monitoring gene transfer by BLI. PMID:20527045
Kimura, Takahiro; Hiraoka, Kei; Kasahara, Noriyuki; Logg, Christopher R
2010-06-01
Bioluminescence imaging (BLI) permits the non-invasive quantification and localization of transduction and expression by gene transfer vectors. The tendency of tissue to attenuate light in the optical region, however, limits the sensitivity of BLI. Improvements in light output from bioluminescent reporter systems would allow the detection of lower levels of expression, smaller numbers of cells and expression from deeper and more attenuating tissues within an animal. With the goal of identifying substrates that allow improved sensitivity with Renilla luciferase (RLuc) and Gaussia luciferase (GLuc) reporter genes, we evaluated native coelenterazine and three of its most promising derivatives in BLI of cultured cells transduced with retroviral vectors encoding these reporters. Of the eight enzyme-substrate pairs tested, the two that performed best were further evaluated in mice to compare their effectiveness for imaging vector-modified cells in live animals. In cell culture, we observed striking differences in luminescence levels from the various enzyme-substrate combinations and found that the two luciferases exhibited markedly distinct abilities to generate light with the substrates. The most effective pairs were RLuc with the synthetic coelenterazine derivative ViviRen, and GLuc with native coelenterazine. In animals, these two pairs allowed similar detection sensitivities, which were eight- to 15-fold higher than that of the prototypical RLuc-native coelenterazine combination. Substrate selection can dramatically influence the detection sensitivity of RLuc and GLuc and appropriate choice of substrate can greatly improve the performance of reporter genes encoding these enzymes for monitoring gene transfer by BLI.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment, such as the interior of a room. We propose a method for generating arbitrary-viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that luminosity over a full 360 degrees of the surroundings can be captured in one image. We apply the light field method, one of the Image-Based Rendering (IBR) techniques, to generate the arbitrary-viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera to construct the light field, so that many view directions are collected in it. Thus our method allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera and successfully generated arbitrary-viewpoint images for a virtual tour of the environment.
Low cost light-sheet microscopy for whole brain imaging
NASA Astrophysics Data System (ADS)
Kumar, Manish; Nasenbeny, Jordan; Kozorovitskiy, Yevgenia
2018-02-01
Light-sheet microscopy has evolved into an indispensable tool for imaging biological samples. It can image 3D samples at high speed, with high-resolution optical sectioning and reduced photobleaching. These properties make light-sheet microscopy ideal for imaging fluorophores in a variety of biological samples and organisms, e.g., zebrafish, drosophila, and cleared mouse brains. While most commercial turnkey light-sheet systems are expensive, the existing lower-cost implementations, e.g., OpenSPIM, are focused on achieving high-resolution imaging of small samples or organisms like zebrafish. In this work, we substantially reduce the cost of a light-sheet microscope system while targeting much larger samples, i.e., cleared mouse brains, at single-cell resolution. The expensive components of a light-sheet system, namely the excitation laser, water-immersion objectives, and translation stage, are replaced with an incoherent laser diode, dry objectives, and a custom-built Arduino-controlled translation stage. A low-cost CUBIC protocol is used to clear fixed mouse brain samples. The open-source platforms μManager and Fiji support image acquisition, processing, and visualization. Our system can easily be extended to multi-color light-sheet microscopy.
The formation of quantum images and their transformation and super-resolution reading
NASA Astrophysics Data System (ADS)
Balakin, D. A.; Belinsky, A. V.
2016-05-01
Images formed by light with suppressed photon fluctuations are interesting objects for studies aimed at increasing their limiting information capacity and quality. Light in this sub-Poissonian state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied, and the production of spatial-frequency spectra of suppressed photon fluctuations is described. The efficient operating regimes of the system are found. A particular schematic solution is described that realizes, to the maximum degree, the potential inherent in the formation of squeezed states of light during self-phase modulation in a resonator, for maximal suppression of amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements to increase the resolution beyond the geometrical resolution. A mathematical model characterizing the measurement scheme is constructed, and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poissonian light to super-resolution imaging. It is found that the image should have low contrast and be maximally transparent.
Schröter, Tobias J.; Johnson, Shane B.; John, Kerstin; Santi, Peter A.
2011-01-01
We report the replacement of one side of a static-illumination, dual-sided, thin-sheet laser imaging microscope (TSLIM) with an intensity-modulated laser scanner in order to implement structured illumination (SI) and HiLo image demodulation techniques for background rejection. The new system is equipped with one static and one scanned light-sheet and is called a scanning thin-sheet laser imaging microscope (sTSLIM). It is an optimized version of a light-sheet fluorescence microscope designed to image large specimens (up to 15 mm in diameter). In this paper we describe the hardware and software modifications to TSLIM that allow for static and uniform light-sheet illumination with SI and HiLo image demodulation. The static light-sheet has a thickness of 3.2 µm, whereas the scanned side has a light-sheet thickness of 4.2 µm. The scanned side images specimens up to 15 mm in size with subcellular resolution (<1 µm lateral and <4 µm axial). SI and HiLo produce superior contrast compared to both the uniform static and scanned light-sheets. HiLo contrast was greater than SI, and HiLo is faster and more robust than SI because it produces images in two-thirds of the time and exhibits fewer intensity-streaking artifacts. PMID:22254177
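The HiLo fusion idea, high frequencies taken from the uniform image plus demodulated low frequencies taken from the structured image, can be sketched on a 1-D toy scene. The demodulation below is a deliberately crude stand-in for the published HiLo algorithm, and all signals are synthetic.

```python
import math

def lowpass(x, k=9):
    """Centered moving average, a stand-in for a Gaussian blur."""
    h = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - h):i + h + 1]
        out.append(sum(window) / len(window))
    return out

def hilo(uniform, structured, pattern_mean, eta=1.0):
    """Fuse high frequencies of the uniform image with demodulated
    low frequencies of the structured image."""
    hi = [u - lu for u, lu in zip(uniform, lowpass(uniform))]
    demod = [abs(s - pattern_mean * u)             # modulation depth:
             for s, u in zip(structured, uniform)]  # nonzero only in focus
    lo = lowpass(demod)
    return [eta * l + h for l, h in zip(lo, hi)]

# toy scene: an in-focus feature on an unmodulated out-of-focus haze
infocus = [1.0 if 80 <= i < 120 else 0.0 for i in range(200)]
haze = 2.0
pattern = [0.5 * (1 + math.sin(2 * math.pi * i / 10)) for i in range(200)]
pm = sum(pattern) / len(pattern)
uniform = [f + haze for f in infocus]
structured = [f * p + haze * pm for f, p in zip(infocus, pattern)]
section = hilo(uniform, structured, pm)
```

Because only in-focus light carries the illumination pattern, the demodulated low-pass term retains the feature while rejecting the haze, which is the background-rejection behavior the record describes.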
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of photon-counting camera for fast, low-light-level imaging applications is introduced. The spectral range covered by this camera extends from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x, y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera, which was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
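The Gray-code addressing at the heart of such a mask can be shown in miniature: adjacent positions differ by exactly one bit, so a photo-event spot straddling a stripe boundary corrupts at most one bit of the address. The 8-bit (256-position) width below is an illustrative assumption.

```python
def gray_encode(n: int) -> int:
    """Position index -> Gray code (the pattern the mask encodes)."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Detector bit pattern (Gray code) -> absolute position index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# round-trip over an assumed 8-bit (256-position) address space
positions = [gray_decode(gray_encode(i)) for i in range(256)]
```

With a plain binary mask, a boundary-straddling spot can flip several bits at once and decode far from the true position; the single-bit property of the Gray code bounds the error to one position.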
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated in terms of resolution, nonlinearity, and repeatability. The sensor could clearly resolve 25 nm linear and 0.001° angular displacements, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
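Combining two absolute 2D readouts into a single X-Y-Θ value can be sketched as follows; the midpoint-and-angle geometry and the nominal +X baseline are illustrative assumptions, not the sensor's published algorithm.

```python
import math

def xytheta(p1, p2):
    """p1, p2: absolute (x, y) scale positions decoded at the sensor's
    two separated read-out points; the nominal baseline is along +X."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    theta = math.atan2(dy, dx)        # in-plane rotation of the stage
    x = (p1[0] + p2[0]) / 2           # translation from the midpoint
    y = (p1[1] + p2[1]) / 2
    return x, y, theta

x, y, theta = xytheta((10.0, 5.0), (30.0, 5.0))  # pure translation case
```

A longer baseline between the two read-out points turns a given lateral decoding resolution into a finer angular resolution, consistent with the Note's remark that increasing the allowable offset size improves the angle measurement.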
NASA Astrophysics Data System (ADS)
N'Diaye, Mamadou; Choquet, Elodie; Egron, Sylvain; Pueyo, Laurent; Leboulleux, Lucie; Levecq, Olivier; Perrin, Marshall D.; Elliot, Erin; Wallace, J. Kent; Hugot, Emmanuel; Marcos, Michel; Ferrari, Marc; Long, Chris A.; Anderson, Rachel; DiFelice, Audrey; Soummer, Rémi
2014-08-01
We present a new high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control, and starlight suppression with complex-aperture telescopes. The testbed was designed to enable a wide range of studies of the effects of such telescope geometries, with primary mirror segmentation, central obstruction, and spiders. The associated diffraction features in the point spread function make high-contrast imaging more challenging. In particular, the testbed will be compatible with both AFTA-like and ATLAST-like aperture shapes, i.e., on-axis monolithic and on-axis segmented telescopes, respectively. The testbed optical design was developed using a novel approach to define the layout and surface-error requirements that minimizes amplitude-induced errors at the target contrast-level performance. In this communication we compare the as-built surface errors for each optic to their specifications, based on end-to-end Fresnel modelling of the testbed. We also report on the testbed optical and optomechanical alignment performance, coronagraph design and manufacturing, and preliminary first-light results.
X-ray micro-modulated luminescence tomography (XMLT)
Cong, Wenxiang; Liu, Fenglin; Wang, Chao; Wang, Ge
2014-01-01
The imaging depth of optical microscopy is fundamentally limited to the millimeter or sub-millimeter scale by strong scattering of light in biological samples. X-ray microscopy can resolve spatial details of a few microns deep inside a sample, but its contrast resolution is inadequate to depict heterogeneous features at cellular or sub-cellular levels. To enhance and enrich biological contrast at large imaging depths, various nanoparticles have been introduced and have become essential to basic research and molecular medicine. Nanoparticles can be functionalized as imaging probes, similar to fluorescent and bioluminescent proteins. LiGa5O8:Cr3+ nanoparticles were recently synthesized to facilitate luminescence energy storage by x-ray pre-excitation, with luminescence emission subsequently stimulated by visible/near-infrared (NIR) light. In this paper, we propose an x-ray micro-modulated luminescence tomography (XMLT, or MLT to be more general) approach to quantify a nanophosphor distribution in a thick biological sample with high resolution. Our numerical simulation studies demonstrate the feasibility of the proposed approach. PMID:24663898
Interactive display system having a digital micromirror imaging device
Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin
2006-04-11
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.
Design of light guide sleeve on hyperspectral imaging system for skin diagnosis
NASA Astrophysics Data System (ADS)
Yan, Yung-Jhe; Chang, Chao-Hsin; Huang, Ting-Wei; Chiang, Hou-Chi; Wu, Jeng-Fu; Ou-Yang, Mang
2017-08-01
A hyperspectral imaging system is proposed for an early study of skin diagnosis. Stable, high-quality hyperspectral images are important for analysis. Therefore, a light guide sleeve (LGS) was designed to be embedded in the hyperspectral imaging system. It provides a uniform light source on the object plane at a fixed working distance. Furthermore, it shields the system from ambient light that would otherwise enter and increase noise. To produce a uniform light source, the LGS device was designed as a symmetrical double-layered structure. It has light-cut structures to adjust the distribution of rays between the two layers, and a Lambertian surface at the front end to promote output uniformity. In the design simulation, the uniformity of illuminance was about 91.7%; in measurements of the actual light guide sleeve, it was about 92.5%.
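The illuminance-uniformity figures quoted above (about 92%) can be computed as a simple min/max ratio over the illuminated plane. A minimal NumPy sketch, assuming the min-over-max definition of uniformity (the abstract does not state which metric was used) and a made-up 5x5 illuminance map:

```python
import numpy as np

def illuminance_uniformity(illuminance_map):
    """Uniformity as the ratio of minimum to maximum illuminance, in percent.

    This min/max definition is a common choice, but it is an assumption here;
    the paper does not specify its uniformity metric.
    """
    e = np.asarray(illuminance_map, dtype=float)
    return 100.0 * e.min() / e.max()

# Hypothetical 5x5 illuminance map (lux) with slight fall-off in one corner
field = np.full((5, 5), 100.0)
field[0, 0] = 92.0
print(round(illuminance_uniformity(field), 1))  # 92.0
```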
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS sensors have been widely applied in the field of night vision as a new type of solid-state image sensor. However, if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the highlight and the low-light regions of the scene. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge-feature information of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results against four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies. Three objective evaluation parameters, information entropy, average gradient, and standard deviation, are used for further analysis of the fusion results. Experiments in different low-illumination environments show that the proposed algorithm can rapidly achieve a wide dynamic range while keeping high entropy. The verified characteristics of the algorithm suggest further application prospects for the optimized version. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
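The pyramid fusion described above can be sketched in NumPy. This is an illustrative simplification, not the authors' Matlab implementation: a 2x2 average-pool stands in for Gaussian filtering, the top layer is selected by a global (rather than regional) average gradient, and the detail layers are selected per-coefficient by energy:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (crude stand-in for Gaussian blur + decimation)
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # nearest-neighbour expansion back to the given shape
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        nxt = downsample(cur)
        pyr.append(cur - upsample(nxt, cur.shape))  # detail (band-pass) layer
        cur = nxt
    pyr.append(cur)  # coarsest layer holds the remaining low-pass residual
    return pyr

def fuse(long_exp, short_exp, levels=3):
    pa = laplacian_pyramid(long_exp, levels)
    pb = laplacian_pyramid(short_exp, levels)
    fused = []
    for i, (a, b) in enumerate(zip(pa, pb)):
        if i == levels - 1:
            # top layer: keep the image with the larger average gradient
            ga = np.abs(np.gradient(a)).mean()
            gb = np.abs(np.gradient(b)).mean()
            fused.append(a if ga >= gb else b)
        else:
            # detail layers: keep the coefficient with larger energy
            fused.append(np.where(a ** 2 >= b ** 2, a, b))
    out = fused[-1]  # collapse the pyramid back to an image
    for lap in reversed(fused[:-1]):
        out = upsample(out, lap.shape) + lap
    return out

# Sanity check: fusing an image with itself reconstructs it exactly
img = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(fuse(img, img), img))  # True
```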
The effect of light intensity on image quality in endoscopic ear surgery.
McCallum, R; McColl, J; Iyer, A
2018-05-16
Endoscopic ear surgery (EES) is a rapidly developing field with many advantages. However, endoscopes can reach temperatures of over 110°C at the tip, raising safety concerns. Reducing the intensity of the light source reduces the temperatures produced. However, the quality of images at lower light intensities has not yet been studied. We set out to study the effect of light intensity on image quality in EES. Prospective study of patients undergoing EES from April to October 2016. Consecutive images of the same operative field at 10%, 30%, 50% and 100% light intensities were taken. Eight international experts were asked to each evaluate 100 anonymised, randomised images. District General Hospital. Twenty patients. Images were evaluated on a 5-point Likert scale (1 = significantly worse than average; 5 = significantly better than average) for detail of anatomy; colour contrast; overall quality; and suitability for operating. Mean scores for photographs at 10%, 30%, 50% and 100% light intensity were 3.22 (SD 0.93), 3.15 (SD 0.84), 3.08 (SD 0.88) and 3.10 (SD 0.86), respectively. In ANOVA models for the scores on each of the scales (anatomy, colour contrast, overall quality and suitability for operating), the effects of rater and patient were highly significant (P < .0005) but light intensity was non-significant (P = .34, .32, .21, .15, respectively). Images taken during surgery by our endoscope and operative camera show no loss of quality when taken at lower light intensities. We recommend that the surgeon consider using lower light intensities in endoscopic ear surgery. © 2018 John Wiley & Sons Ltd.
Near-infrared spectroscopic tissue imaging for medical applications
Demos, Stavros [Livermore, CA]; Staggs, Michael C. [Tracy, CA]
2006-03-21
Near-infrared imaging using elastic light scattering and tissue autofluorescence is explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and tissue autofluorescence in the near-infrared (NIR), coupled with image processing and inter-image operations to differentiate human tissue components.
Near-infrared spectroscopic tissue imaging for medical applications
Demos, Stavros [Livermore, CA]; Staggs, Michael C. [Tracy, CA]
2006-12-12
Near-infrared imaging using elastic light scattering and tissue autofluorescence is explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and tissue autofluorescence in the near-infrared (NIR), coupled with image processing and inter-image operations to differentiate human tissue components.
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Assessment of spatial information for hyperspectral imaging of lesion
NASA Astrophysics Data System (ADS)
Yang, Xue; Li, Gang; Lin, Ling
2016-10-01
Diseases such as breast tumors pose a great threat to women's health and lives, while traditional detection methods are complex, costly, and unsuitable for frequent self-examination; therefore, an inexpensive, convenient, and efficient method for tumor self-inspection is urgently needed, and lesion localization is an important step. This paper proposes a self-examination method for positioning a lesion. The method adopts transillumination to acquire hyperspectral images and to assess the spatial information of the lesion. Firstly, multi-wavelength sources are modulated with frequency division, which makes it easy to separate images of different wavelengths; meanwhile, the sources serve as fill light for each other to improve sensitivity in low-light-level imaging. Secondly, the signal-to-noise ratio of the demodulated transmitted images is improved by frame-accumulation technology. Next, the gray distributions of the transmitted images are analyzed. The gray-level difference is formed between the actual transmitted image and a fitted transmitted image of tissue without a lesion, in order to rule out individual differences. Due to the scattering effect, there are transition zones between tissue and lesion, and these zones change with wavelength, which helps to identify the structural details of the lesion. Finally, image segmentation is adopted to extract the lesion and the transition zones, and the spatial features of the lesion are confirmed according to the transition zones and the differences in transmitted light intensity distributions. An experiment using flat-shaped tissue as an example shows that the proposed method can extract the spatial information of a lesion.
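The frame-accumulation step can be illustrated with a short simulation: averaging N frames of uncorrelated noise improves SNR by roughly sqrt(N). The signal level, noise level, and frame count below are arbitrary values for illustration, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate_frames(frames):
    """Average N noisy frames; uncorrelated noise drops by about sqrt(N)."""
    return np.mean(frames, axis=0)

# Simulated transmitted image: constant signal plus Gaussian noise
signal = 50.0
n_frames, shape = 64, (32, 32)
frames = signal + rng.normal(0.0, 8.0, size=(n_frames,) + shape)

single_snr = signal / frames[0].std()
stacked = accumulate_frames(frames)
stacked_snr = signal / stacked.std()
# With 64 frames the SNR gain should approach sqrt(64) = 8
print(stacked_snr / single_snr)
```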
Autofluorescence imaging of macular pigment: influence and correction of ocular media opacities
NASA Astrophysics Data System (ADS)
Sharifzadeh, Mohsen; Obana, Akira; Gohto, Yuko; Seto, Takahiko; Gellermann, Werner
2014-09-01
The healthy adult human retina contains in its macular region a high concentration of blue-light absorbing carotenoid compounds, known as macular pigment (MP). Consisting of the carotenoids lutein, zeaxanthin, and meso-zeaxanthin, the MP is thought to shield the vulnerable tissue layers in the retina from light-induced damage through its function as an optical attenuator, and to protect the tissue cells within its immediate vicinity through its function as a potent antioxidant. Autofluorescence imaging (AFI) is emerging as a viable optical method for MP screening of large subject populations, for tracking of MP changes over time, and for monitoring MP uptake in response to dietary supplementation. To investigate the influence of ocular media opacities on AFI-based MP measurements, in particular the influence of lens cataracts, we conducted a clinical trial with a large subject population (93 subjects) measured before and after cataract surgery. General AFI image contrast, retinal blood vessel contrast, and presurgery lens opacity scores [Lens Opacities Classification System III (LOCS III)] were investigated as potential predictors for image degradation. These clinical results show that lens cataracts can severely degrade the achievable pixel contrasts in the AFI images, which results in nominal MP optical density levels that are artifactually reduced. While LOCS III scores and blood vessel contrast are found to be only weak predictors for this effect, a strong correlation exists between the reduction factor and the image contrast, which can be quantified via pixel intensity histogram parameters. By choosing the base width of the histogram, the presence or absence of ocular media opacities can be determined and, if needed, the nominal MP levels can be corrected with factors depending on the strength of the opacity.
Hyperspectral imaging with near-infrared-enabled mobile phones for tissue oximetry
NASA Astrophysics Data System (ADS)
Lin, Jonathan L.; Ghassemi, Pejhman; Chen, Yu; Pfefer, Joshua
2018-02-01
Hyperspectral reflectance imaging (HRI) is an emerging clinical tool for characterizing spatial and temporal variations in blood perfusion and oxygenation for applications such as burn assessment, wound healing, retinal exams, and intraoperative tissue viability assessment. Since clinical HRI-based oximeters often use near-infrared (NIR) light, NIR-enabled mobile phones may provide a useful platform for future point-of-care devices. Furthermore, quantitative NIR imaging on mobile phones may dramatically increase the availability and accessibility of medical diagnostics for low-resource settings. We have evaluated the potential for phone-based NIR oximetry imaging and elucidated factors affecting performance using devices from two different manufacturers, as well as a scientific CCD. A broadband light source and liquid crystal tunable filter were used for imaging at 10 nm bands from 650 to 1000 nm. Spectral sensitivity measurements indicated that mobile phones with standard NIR blocking filters had minimal response beyond 700 nm, whereas one modified phone showed sensitivity to 800 nm and another to 1000 nm. Red pixel channels showed the greatest sensitivity up to 800 nm, whereas all channels provided essentially equivalent sensitivity at longer wavelengths. Referencing of blood oxygenation levels was performed with a CO-oximeter. HRI measurements were performed using cuvettes filled with hemoglobin solutions of different oxygen saturation levels. Good agreement between absorbance spectra measured with the mobile phone and CCD cameras was seen for wavelengths below 900 nm. Saturation estimates showed root-mean-squared errors of 5.2% and 4.5% for the CCD and phone, respectively. Overall, this work provides strong evidence of the potential for mobile phones to provide quantitative spectral imaging in the NIR for applications such as oximetry, and generates practical insights into factors that impact performance as well as test methods for performance assessment.
Walther, Andreas; Rippe, Lars; Wang, Lihong V; Andersson-Engels, Stefan; Kröll, Stefan
2017-10-01
Despite the important medical implications, it is currently an open task to find optical non-invasive techniques that can image deep organs in humans. Addressing this, photo-acoustic tomography (PAT) has received a great deal of attention in the past decade, owing to favorable properties like high contrast and high spatial resolution. However, even with optimal components PAT cannot penetrate beyond a few centimeters, which still presents an important limitation of the technique. Here, we calculate the absorption contrast levels for PAT and for ultrasound optical tomography (UOT) and compare them to their relevant noise sources as a function of imaging depth. The results indicate that a new development in optical filters, based on rare-earth-ion crystals, can push the UOT technique significantly ahead of PAT. Such filters allow the contrast-to-noise ratio for UOT to be up to three orders of magnitude better than for PAT at depths of a few cm into the tissue. It also translates into a significant increase of the image depth of UOT compared to PAT, enabling deep organs to be imaged in humans in real time. Furthermore, such spectral holeburning filters are not sensitive to speckle decorrelation from the tissue and can operate at nearly any angle of incident light, allowing good light collection. We theoretically demonstrate the improved performance in the medically important case of non-invasive optical imaging of the oxygenation level of the frontal part of the human myocardial tissue. Our results indicate that further studies on UOT are of interest and that the technique may have large impact on future directions of biomedical optics.
Imaging with a small number of photons
Morris, Peter A.; Aspden, Reuben S.; Bell, Jessica E. C.; Boyd, Robert W.; Padgett, Miles J.
2015-01-01
Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel. PMID:25557090
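The idea of building an image from fewer than one detected photon per pixel can be illustrated with a toy Poisson simulation. This is not the paper's heralded ghost-imaging setup; the object, flux, and frame count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary "object": a bright square on a dark background
obj = np.zeros((16, 16))
obj[4:12, 4:12] = 1.0

mean_photons = 0.5   # fewer than one detected photon per pixel per frame
n_frames = 200

# Each frame is a Poisson realization of the per-pixel photon flux
frames = rng.poisson(obj * mean_photons, size=(n_frames, 16, 16))

# Accumulating frames and normalizing recovers the object's reflectivity
estimate = frames.mean(axis=0) / mean_photons

inside = estimate[4:12, 4:12].mean()   # approaches 1 inside the square
outside = estimate[:4, :].mean()       # stays near 0 in the background
print(inside, outside)
```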
An alternative cost-effective image processing based sensor for continuous turbidity monitoring
NASA Astrophysics Data System (ADS)
Chai, Matthew Min Enn; Ng, Sing Muk; Chua, Hong Siang
2017-03-01
Turbidity is the degree to which the optical clarity of water is reduced by impurities. High turbidity values in rivers and lakes promote the growth of pathogens, reduce dissolved-oxygen levels, and reduce light penetration. The conventional ways of making on-site turbidity measurements involve optical sensors similar to those used in commercial turbidimeters. However, these instruments require frequent maintenance due to biological fouling on the sensors. Thus, image processing was proposed as an alternative technique for continuous turbidity measurement, in order to reduce the frequency of maintenance. The camera is kept out of the water to avoid biofouling, while the other parts of the system submerged in water can be coated with an anti-fouling surface. The setup developed consists of a webcam, a light source, a microprocessor, and a motor used to control the depth of a reference object. The image-processing algorithm quantifies the relationship between the number of circles detected on the reference object and the depth of the reference object. By relating the quantified data to turbidity, the setup was able to detect turbidity levels from 20 NTU to 380 NTU with a measurement error of 15.7 percent. The repeatability and sensitivity of the turbidity measurement were found to be satisfactory.
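The mapping from an image-derived feature (here, circle count) to an NTU reading can be sketched as a calibration lookup. The calibration points below are hypothetical values, not the paper's data; they only encode the qualitative trend that higher turbidity obscures more circles:

```python
import numpy as np

# Hypothetical calibration: circles detected on the reference target
# at known turbidity levels (counts fall as turbidity rises)
calib_ntu = np.array([20, 100, 200, 300, 380], dtype=float)
calib_circles = np.array([48, 35, 22, 10, 4], dtype=float)

def turbidity_from_circles(n_circles):
    """Linearly interpolate an NTU estimate from a circle count."""
    # np.interp requires ascending x, so flip both calibration arrays
    return float(np.interp(n_circles, calib_circles[::-1], calib_ntu[::-1]))

print(turbidity_from_circles(35))  # 100.0 (exactly a calibration point)
```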
Passive ranging redundancy reduction in diurnal weather conditions
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Abbott, A. Lynn; Szu, Harold H.
2013-05-01
Ambiguity in binocular ranging (David Marr's paradox) may be resolved by using two eyes moving from side to side behind an optical bench while integrating multiple views. Moving the head from left to right with one eye closed can also help resolve foreground/background range uncertainty. That empirical experiment implies redundancy in image data, which may be reduced by adopting a 3-D camera imaging model to perform compressive sensing. Here, the compressive-sensing concept is examined from the perspective of redundancy reduction in images subject to diurnal and weather variations, for the purpose of resolving range uncertainty under all conditions: dawn or dusk, daytime at different light levels, or nighttime in different spectral bands. As an example, a scenario at a country-road intersection at dawn/dusk is discussed, where the location of a traffic sign needs to be resolved by passive ranging to answer whether it is located on the same side of the road or the opposite side, under the influence of temporal light/color-level variation. A spectral-band extrapolation via application of the Lagrange Constrained Neural Network (LCNN) learning algorithm is discussed to address lost-color restoration at dawn/dusk. A numerical simulation is illustrated along with a code example.
Imaging Lenticular Autofluorescence in Older Subjects
Charng, Jason; Tan, Rose; Luu, Chi D.; Sadigh, Sam; Stambolian, Dwight; Guymer, Robyn H.; Jacobson, Samuel G.; Cideciyan, Artur V.
2017-01-01
Purpose To evaluate whether a practical method of imaging lenticular autofluorescence (AF) can provide an individualized measure correlated with age-related lens yellowing in older subjects undergoing tests involving shorter wavelength lights. Methods Lenticular AF was imaged with 488-nm excitation using a confocal scanning laser ophthalmoscope (cSLO) routinely used for retinal AF imaging. There were 75 older subjects (ages 47–87) at two sites; a small cohort of younger subjects served as controls. At one site, the cSLO was equipped with an internal reference to allow quantitative AF measurements; at the other site, reduced-illuminance AF imaging (RAFI) was used. In a subset of subjects, lens density index was independently estimated from dark-adapted spectral sensitivities performed psychophysically. Results Lenticular AF intensity was significantly higher in the older eyes than the younger cohort when measured with the internal reference (59.2 ± 15.4 vs. 134.4 ± 31.7 gray levels; P < 0.05) as well as when recorded with RAFI without the internal reference (10.9 ± 1.5 vs. 26.1 ± 5.7 gray levels; P < 0.05). Lenticular AF was positively correlated with age; however, there could also be large differences between individuals of similar age. Lenticular AF intensity correlated well with lens density indices estimated from psychophysical measures. Conclusions Lenticular AF measured with a retinal cSLO can provide a practical and individualized measure of lens yellowing, and may be a good candidate to distinguish between preretinal and retinal deficits involving short-wavelength lights in older eyes. PMID:28973367
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. To obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge-level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured and calculated intensities and the constraint function of an infinite-light-source model. After determining the final illuminant direction of the input face image, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we clip the values at both ends of the face-image histogram, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination-normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective-surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect than existing techniques.
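The histogram clipping and stretching described in step (2) can be sketched in NumPy. The percentile cut-offs below are assumed values for illustration, not those used in the paper:

```python
import numpy as np

def clip_and_stretch(img, low_pct=1.0, high_pct=99.0):
    """Clip both histogram tails, then stretch to the display range [0, 255]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = np.clip(img.astype(float), lo, hi)
    return (out - lo) / (hi - lo) * 255.0

# A dim, low-contrast test image occupying only gray levels 40..80
rng = np.random.default_rng(2)
img = rng.uniform(40, 80, size=(64, 64))
stretched = clip_and_stretch(img)
print(stretched.min(), stretched.max())  # 0.0 255.0
```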
Modeling the National Ignition Facility neutron imaging system.
Wilson, D C; Grim, G P; Tregillis, I L; Wilke, M D; Patel, M V; Sepke, S M; Morgan, G L; Hatarik, R; Loomis, E N; Wilde, C H; Oertel, J A; Fatherley, V E; Clark, D D; Fittinghoff, D N; Bower, D E; Schmitt, M J; Marinak, M M; Munro, D H; Merrill, F E; Moran, M J; Wang, T-S F; Danly, C R; Hilko, R A; Batha, S H; Frank, M; Buckles, R
2010-10-01
Numerical modeling of the neutron imaging system for the National Ignition Facility (NIF), forward from calculated target neutron emission to a camera image, will guide both the reduction of data and the future development of the system. Located 28 m from target chamber center, the system can produce two images at different neutron energies by gating on neutron arrival time. The brighter image, using neutrons near 14 MeV, reflects the size and symmetry of the implosion "hot spot." A second image in scattered neutrons, 10-12 MeV, reflects the size and symmetry of colder, denser fuel, but with only ∼1%-7% of the neutrons. A misalignment of the pinhole assembly up to ±175 μm is covered by a set of 37 subapertures with different pointings. The model includes the variability of the pinhole point spread function across the field of view. Omega experiments provided absolute calibration, scintillator spatial broadening, and the level of residual light in the down-scattered image from the primary neutrons. Application of the model to light decay measurements of EJ399, BC422, BCF99-55, Xylene, DPAC-30, and Liquid A suggests that DPAC-30 and Liquid A would be preferred over the BCF99-55 scintillator chosen for the first NIF system, if they could be fabricated into detectors with sufficient resolution.
Investigation of the ripeness of oil palm fresh fruit bunches using bio-speckle imaging
NASA Astrophysics Data System (ADS)
Salambue, R.; Adnan, A.; Shiddiq, M.
2018-03-01
The ripeness of oil palm Fresh Fruit Bunches (FFB) determines the yield of the oil produced. Traditionally, FFB ripeness is determined in two ways: by the number of loose fruits and by color changes. One drawback of visual determination, however, is its subjective and qualitative nature. In this study, FFB ripeness was investigated using a laser-based image-processing technique, which is non-destructive, simple, and quantitative. The working principle is that an FFB is inserted into a light-tight box containing a laser diode and a CMOS camera; the FFB is illuminated, and an image is recorded. Image recording was performed on four FFB fractions, i.e. F0, F3, F4 and F5, on the front and rear surfaces at three sections. The recorded images are speckled granules with light-intensity variation (bio-speckle imaging). The feature extracted from the speckle image is the contrast value, obtained from the average gray-value intensity and the standard deviation. Based on the contrast values, the four FFB fractions can be grouped into three levels of ripeness, unripe (F0), ripe (F3) and overripe (F4 and F5), on the front surface of the base section of the FFB with 75% accuracy.
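The contrast feature can be sketched directly from its definition: standard deviation divided by mean of the gray values. The negative-exponential test pattern below is a standard model of fully developed speckle (theoretical contrast 1), not the paper's data:

```python
import numpy as np

def speckle_contrast(gray_img):
    """Contrast of a bio-speckle pattern: standard deviation / mean intensity."""
    g = np.asarray(gray_img, dtype=float)
    return g.std() / g.mean()

rng = np.random.default_rng(3)
# Fully developed speckle intensity follows a negative-exponential law,
# for which the theoretical contrast is 1
speckle = rng.exponential(scale=100.0, size=(128, 128))
print(round(speckle_contrast(speckle), 2))  # close to 1
```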
NASA Astrophysics Data System (ADS)
Malik, Zvi; Dishi, M.
1995-05-01
The subcellular localization of endogenous protoporphyrin (endo-PP) during photosensitization in B-16 melanoma cells was analyzed with a novel spectral imaging system, the SpectraCube 1000. The melanoma cells were incubated with 5-aminolevulinic acid (ALA), and the fluorescence of endo-PP was then recorded in individual living cells in three modes: conventional fluorescence imaging; multipixel, point-by-point fluorescence spectroscopy; and image processing, by applying a spectral-similarity mapping function and reconstructing new images derived from the spectral information. The fluorescence image of ALA-treated cells revealed vesicular distribution of endo-PP throughout the cytosol, with mitochondrial, lysosomal, and endoplasmic reticulum cisternal accumulation. Two main spectral fluorescence peaks were observed at 635 and 705 nm, with intensities that differed from one subcellular site to another. Photoirradiation of the cells induced point-specific subcellular changes in the fluorescence spectrum and demonstrated photoproduct formation. Spectral image reconstruction revealed the local distribution of a chosen spectrum in the photosensitized cells. By contrast, B-16 cells treated with exogenous protoporphyrin (exo-PP) showed a dominant fluorescence peak at 670 nm and a minor peak at 630 nm, with fluorescence localized at a perinuclear/Golgi region. Light exposure induced photobleaching and photoproduct spectral changes followed by relocalization. The new localization at subcellular compartments showed pH-dependent spectral shifts and photoproduct formation at the subcellular level.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented with image-processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by a factor of three both horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
Neuronal connectome of a sensory-motor circuit for visual navigation
Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár
2014-01-01
Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral-imaging-based light field display method using a holographic diffuser is proposed, achieving enhanced viewing resolution over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics and interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved, independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with a low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
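The three pixel-wise fusion rules named in this abstract (gray-scale weighted averaging, maximum selection, minimum selection) are easy to prototype in software before committing to an RTL description. A minimal NumPy sketch, assuming two registered 8-bit grayscale frames (this is an illustrative model, not the authors' VHDL implementation):

```python
import numpy as np

def fuse(vis, ir, mode="weighted", w=0.5):
    """Pixel-wise fusion of two registered 8-bit grayscale images."""
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)
    if mode == "weighted":      # gray-scale weighted averaging
        out = w * vis + (1.0 - w) * ir
    elif mode == "max":         # maximum selection
        out = np.maximum(vis, ir)
    elif mode == "min":         # minimum selection
        out = np.minimum(vis, ir)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.clip(out, 0, 255).astype(np.uint8)

vis = np.array([[100, 200]], dtype=np.uint8)   # visible-light channel
ir = np.array([[50, 250]], dtype=np.uint8)     # infrared channel
print(fuse(vis, ir, "weighted"))  # [[ 75 225]]
print(fuse(vis, ir, "max"))       # [[100 250]]
```

In hardware, the weighted average maps naturally to fixed-point multiply-accumulate logic, while max/min selection reduce to per-pixel comparators.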
NASA Astrophysics Data System (ADS)
Hirayama, Heijiro; Nakamura, Sohichiro
2015-07-01
We have developed ultraviolet (UV)- and visible-light one-shot spectral domain (SD) optical coherence tomography (OCT) that enables in situ imaging of human skin with an arbitrary wavelength in the UV-visible-light region (370-800 nm). We alleviated the computational burden for each color OCT image by physically dispersing the irradiating light with a color filter. The system consists of SD-OCT with multicylindrical lenses; thus, mechanical scanning of the mirror or stage is unnecessary to obtain an OCT image. Therefore, only a few dozen milliseconds are necessary to obtain single-image data. We acquired OCT images of one subject's skin in vivo and of a skin excision ex vivo for red (R, 650±20 nm), green (G, 550±20 nm), blue (B, 450±20 nm), and UV (397±5 nm) light. In the visible-light spectrum, R light penetrated the skin and was reflected at a lower depth than G or B light. On the skin excision, we demonstrated that UV light reached the dermal layer. We anticipate that basic knowledge about the spectral properties of human skin in the depth direction can be acquired with this system.
Correlation Plenoptic Imaging.
D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano
2016-06-03
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.
Aziz, Mehak K; Ni, Aiguo; Esserman, Denise A; Chavala, Sai H
2014-07-01
To study spatiotemporal in vivo changes in retinal morphology and quantify thickness of retinal layers in a mouse model of light-induced retinal degeneration using spectral domain optical coherence tomography (SD-OCT). BALB/c mice were exposed to 5000 lux of constant light for 3 h. SD-OCT images were taken 3 h, 24 h, 3 days, 1 week and 1 month after light exposure and were compared with histology at the same time points. SD-OCT images were also taken at 0, 1 and 2 h after light exposure in order to analyse retinal changes at the earliest time points. The thickness of retinal layers was measured using the Bioptigen software InVivoVue Diver. SD-OCT demonstrated progressive outer retinal thinning. Three hours after light exposure, the outer nuclear layer converted from hyporeflective to hyper-reflective. At 24 h, the outer retinal bands and outer nuclear layer demonstrated similar levels of hyper-reflectivity. Significant variations in outer retinal thickness, vitreous opacities and retinal detachments occurred within days of injury. Thinning of the retina was observed at 1 month after injury. It was also determined that outer nuclear layer changes precede photoreceptor segment structure disintegration and that the greatest change in segment structure occurs between 1 and 2 h after light exposure. Longitudinal SD-OCT reveals intraretinal changes that cannot be observed by histopathology at early time points in the light injury model. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Computational model of lightness perception in high dynamic range imaging
NASA Astrophysics Data System (ADS)
Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter
2006-02-01
The anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for the automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
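The anchoring step described here can be illustrated with a toy sketch (a deliberate simplification, not the authors' tone mapping operator): given a luminance map already segmented into frameworks, lightness within each framework is the log ratio of luminance to that framework's anchor, here crudely approximated by the framework maximum:

```python
import numpy as np

def anchored_lightness(luminance, frameworks):
    """Toy anchoring model: within each framework, lightness is the
    log10 ratio of luminance to the anchor (the luminance perceived
    as white, crudely taken here as the framework maximum)."""
    out = np.zeros_like(luminance, dtype=float)
    for label in np.unique(frameworks):
        mask = frameworks == label
        anchor = luminance[mask].max()   # anchor: highest luminance
        out[mask] = np.log10(luminance[mask] / anchor)
    return out

# two frameworks with very different illumination levels
lum = np.array([1.0, 10.0, 100.0, 1000.0])
fw = np.array([0, 0, 1, 1])
print(anchored_lightness(lum, fw))  # [-1.  0. -1.  0.]
```

Note how the brightest patch in each framework comes out at lightness 0 ("white") regardless of its absolute luminance, which is the essence of per-framework anchoring; the published model additionally weights and combines framework estimates to obtain global lightness.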
Fast imaging of live organisms with sculpted light sheets
NASA Astrophysics Data System (ADS)
Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.
2015-04-01
Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible, and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently and accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrate the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. The technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.
The use of near-infrared photography to image fired bullets and cartridge cases.
Stein, Darrell; Yu, Jorn Chi Chung
2013-09-01
An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmark comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. With visual comparisons, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more details and could not effectively eliminate reflections and glare associated with visible light photography. Near-IR photography showed little advantage in manual examination of fired evidence when compared with visible light (regular) photography. © 2013 American Academy of Forensic Sciences.
Multi-channel infrared thermometer
Ulrickson, M.A.
A device for measuring the two-dimensional temperature profile of a surface comprises imaging optics for generating an image of the light radiating from the surface; an infrared detector array having a plurality of detectors; and optical means positioned between the imaging optics and the detector array for sampling, transmitting, and distributing the image over the detector surfaces. The optical means may be a light pipe array having one light pipe for each detector in the detector array.
White-Light Optical Information Processing and Holography.
1982-05-03
However, the deblurring spatial filter that we used was a narrow spectral band centered at 5154 Å green light, to compensate for the scaling. Keywords: White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. With this white-light optical processing technique, we have shown that the incoherent-source technique provides better image quality and very low coherent artifact noise.
Extended depth of field imaging for high speed object analysis
NASA Technical Reports Server (NTRS)
Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)
2011-01-01
A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
Brominated Luciferins Are Versatile Bioluminescent Probes
Steinhardt, Rachel C.; Rathbun, Colin M.; Krull, Brandon T.; ...
2016-12-08
Here, we report a set of brominated luciferins for bioluminescence imaging. These regioisomeric scaffolds were accessed by using a common synthetic route. All analogues produced light with firefly luciferase, although varying levels of emission were observed. Differences in photon output were analyzed by computation and photophysical measurements. The brightest brominated luciferin was further evaluated in cell and animal models. At low doses, the analogue outperformed the native substrate in cells. The remaining luciferins, although weak emitters with firefly luciferase, were inherently capable of light production and thus potential substrates for orthogonal mutant enzymes.
NASA Astrophysics Data System (ADS)
Karimova, L. N.; Berezin, A. N.; Shevchik, S. A.; Kharnas, S. S.; Kusmin, S. G.; Loschenov, V. B.
2005-08-01
In this study, a new method of fluorescence diagnostics (FD) and photodynamic therapy (PDT) monitoring of acne disease is presented. The method is based on simultaneous diagnostics in natural and fluorescent light. PDT was based on the use of the 5-ALA (5-aminolevulinic acid) preparation and 600-730 nm radiation. If the examined skin site possessed a high endogenous porphyrin fluorescence level, PDT was carried out without 5-ALA. For FD and treatment control, point spectroscopy and fluorescence imaging of the affected skin were used.
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
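The photospace-weighting of objective attributes mentioned above amounts to taking an expectation of quality over real capture conditions. A small illustrative sketch with made-up numbers (the grid resolution, frequencies, and quality scores are all hypothetical, chosen only to show the computation):

```python
import numpy as np

# hypothetical photospace grid: rows = illumination bins (low, bright),
# columns = subject-distance bins (near, far)
photospace = np.array([[0.30, 0.10],
                       [0.25, 0.35]])
photospace /= photospace.sum()   # normalize to frequencies of use

# objective quality score measured for one camera in each condition (0-100)
quality = np.array([[35.0, 25.0],
                    [80.0, 70.0]])

# photospace-weighted quality: expected quality over actual usage
weighted_q = float((photospace * quality).sum())
print(round(weighted_q, 1))  # 57.5
```

A camera that scores well in a lab (bright, controlled light) can still have a low photospace-weighted score if its users mostly shoot in the low-light cells of the grid, which is the abstract's central point.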
Scanning light-sheet microscopy in the whole mouse brain with HiLo background rejection.
Mertz, Jerome; Kim, Jinhyun
2010-01-01
It is well known that light-sheet illumination can enable optically sectioned wide-field imaging of macroscopic samples. However, the optical sectioning capacity of a light-sheet macroscope is undermined by sample-induced scattering or aberrations that broaden the thickness of the sheet illumination. We present a technique to enhance the optical sectioning capacity of a scanning light-sheet microscope by out-of-focus background rejection. The technique, called HiLo microscopy, makes use of two images sequentially acquired with uniform and structured sheet illumination. An optically sectioned image is then synthesized by fusing high and low spatial frequency information from both images. The benefits of combining light-sheet macroscopy and HiLo background rejection are demonstrated in optically cleared whole mouse brain samples, using both green fluorescent protein (GFP)-fluorescence and dark-field scattered light contrast.
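The fusion step of HiLo can be sketched roughly as follows, assuming two pre-acquired registered images (uniform and structured illumination). The sectioning estimate and the separable box-blur low-pass used here are crude stand-ins for the published algorithm, which uses local contrast demodulation and tuned Gaussian filters:

```python
import numpy as np

def lowpass(img, k=5):
    """Separable box blur as a simple stand-in for a Gaussian low-pass."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def hilo(uniform, structured, k=5, eta=1.0):
    """Fuse low spatial frequencies from a sectioned estimate (derived
    from the structured-illumination image) with high spatial
    frequencies from the uniform-illumination image."""
    # crude optically sectioned estimate: local modulation contrast
    # of the structured image, then low-passed
    lo = lowpass(np.abs(structured - lowpass(structured, k)), k)
    hi = uniform - lowpass(uniform, k)   # high-pass of uniform image
    return eta * lo + hi                 # eta balances the two bands

u = np.full((16, 16), 10.0)   # toy uniform-illumination frame
s = np.full((16, 16), 10.0)   # toy structured-illumination frame
out = hilo(u, s)
```

For a featureless constant sample the interior of the fused image is zero, since there is neither modulation contrast nor high-frequency content; on real data the `lo` term carries the in-focus background-rejected structure and the `hi` term restores fine detail.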
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
Cathodoluminescence for the 21st century: Learning more from light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coenen, T.; Haegel, N. M.
Cathodoluminescence (CL) is the emission of light from a material in response to excitation by incident electrons. The technique has had significant impact in the characterization of semiconductors, minerals, ceramics, and many nanostructured materials. Since 2010, there have been a number of innovative developments that have revolutionized and expanded the information that can be gained from CL and broadened the areas of application. While the primary historical application of CL was for spatial mapping of luminescence variations (e.g., imaging dark line defects in semiconductor lasers or providing high resolution imaging of compositional variations in geological materials), new ways to collect and analyze the emitted light have expanded the science impact of CL, particularly at the intersection of materials science and nanotechnology. Current developments include (1) angular and polarized CL, (2) advances in time resolved CL, (3) far-field and near-field transport imaging that enable drift and diffusion information to be obtained through real space imaging, (4) increasing use of statistical analyses for the study of grain boundaries and interfaces, (5) 3D CL including tomography and combined work utilizing dual beam systems with CL, and (6) combined STEM/CL measurements that are reaching new levels of resolution and advancing single photon spectroscopy. This focused review will first summarize the fundamentals and then briefly describe the state-of-the-art in conventional CL imaging and spectroscopy. We also review these recent novel experimental approaches that enable added insight and information, providing a range of examples from nanophotonics, photovoltaics, plasmonics, and studies of individual defects and grain boundaries.