Accurate and cost-effective MTF measurement system for lens modules of digital cameras
NASA Astrophysics Data System (ADS)
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to conventional (photographic-film) cameras. For example, diffraction arising from the miniaturization of the optical modules tends to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for the digital camera. Once the MTF of the sensor array is known, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) modulates the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, comparing the measurement accuracy against various established methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
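As a minimal illustration of the final processing step (not the authors' code), the MTF at each SLM-generated spatial frequency can be estimated from the Michelson contrast of the captured sinusoidal images, normalized to the lowest-frequency contrast; all names below are illustrative assumptions:

```python
import numpy as np

def michelson_contrast(image):
    """(max - min) / (max + min) of a sinusoidal test image."""
    lo, hi = np.percentile(image, [1, 99])   # robust extrema against noise
    return (hi - lo) / (hi + lo)

def mtf_from_sine_targets(images):
    """MTF as output contrast per frequency, normalized to the first
    (lowest, near-DC) frequency; images must be ordered by frequency."""
    contrast = np.array([michelson_contrast(im) for im in images])
    return contrast / contrast[0]

# usage: images[k] is the camera frame recorded while the SLM displays
# a sinusoidal pattern at the k-th (ascending) spatial frequency
```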
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system, a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, so that DMD pixel-level modulation always keeps the camera pixels at a reasonable exposure. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the light intensity and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
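A minimal sketch of the radiance-recovery step implied by per-pixel coded exposure, assuming the per-pixel attenuation applied by the DMD is known and pixel values are normalized; the names and the saturation threshold are illustrative:

```python
import numpy as np

def recover_radiance(raw, exposure_map, saturation=0.95):
    """Divide each (normalized) pixel value by the known DMD attenuation
    applied to it; saturated pixels are masked for re-exposure."""
    radiance = raw / np.clip(exposure_map, 1e-6, None)
    radiance[raw >= saturation] = np.nan   # unreliable: needs lower exposure
    return radiance
```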
Optical correlator method and apparatus for particle image velocimetry processing
NASA Technical Reports Server (NTRS)
Farrell, Patrick V. (Inventor)
1991-01-01
Young's fringes are produced from a double-exposure image of particles in a flowing fluid by passing laser light through the film and projecting the light onto a screen. A video camera receives the image from the screen and controls a spatial light modulator. The spatial modulator has a two-dimensional array of cells whose transmissiveness is controlled in relation to the brightness of the corresponding pixel of the video camera image of the screen. A collimated beam of laser light is passed through the spatial light modulator to produce a diffraction pattern which is focused onto another video camera, with the output of the camera being digitized and provided to a microcomputer. The diffraction pattern formed when the laser light is passed through the spatial light modulator and is focused to a point corresponds to the two-dimensional Fourier transform of the Young's fringe pattern projected onto the screen. The data obtained fro… This invention was made with U.S. Government support awarded by the Department of the Army (DOD) and NASA grant number(s): DOD #DAAL03-86-K0174 and NASA #NAG3-718. The U.S. Government has certain rights in this invention.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation, in real time, of images received by a stereo camera system. The optical image of each camera is projected on a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compositing technology is often used in film production. Conventionally, images are composited with image-processing algorithms: useful objects, details, backgrounds, and other content are first extracted from the source images and then combined into one image. This approach requires a powerful processor, and because the processing is complex, the composited image is obtained only after some delay. In this paper, we introduce a new method of real-time image compositing that produces the composite at the same time the scene is shot. The system consists of two camera lenses, a spatial light modulator (SLM) array, and an image sensor. The SLM could be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), and so on. First, one camera lens (the first imaging lens) images the object onto the SLM panel. Second, an image is output to the SLM panel, so that the image of the object and the image displayed by the SLM are spatially composited on the panel. Third, the other camera lens (the second imaging lens) images the composited image onto the image sensor. After these three steps, the image sensor captures the composited image. Because the SLM can output images continuously, the compositing is also continuous and is completed in real time. To place a real object into a virtual background, the virtual background scene is output on the SLM while the real object is imaged by the first lens; likewise, to place a virtual object into a real background, the virtual object is output on the SLM while the real background is imaged by the first lens. In either case, the image sensor captures the composited images in real time. Most SLMs modulate only light intensity, so with a single panel without color filters only black-and-white images can be composited; to obtain color composites, a configuration like a three-panel SLM projector is needed. The paper gives the framework of the system's optics. In all experiments, the SLM was a liquid crystal on silicon (LCoS) panel. At the end of the paper, original and composited pictures are shown. Although the system has a few shortcomings, we can conclude that compositing images with this system incurs no computational delay; it is a truly real-time image compositing system.
Parallel phase-sensitive three-dimensional imaging camera
Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.
2007-09-25
An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera, utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene, and processes the electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each taking as inputs the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
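For reference, a sketch of the conventional four-phase homodyne range estimate that such an architecture computes (the patent's parallel modulators acquire the phase samples simultaneously rather than sequentially; variable names are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_phases(i0, i90, i180, i270, f_mod):
    """Phase and amplitude of the returned modulation envelope from four
    phase-delayed correlation samples, then range from the phase delay."""
    phase = np.arctan2(i270 - i90, i0 - i180)            # radians
    amplitude = 0.5 * np.hypot(i270 - i90, i0 - i180)
    distance = np.mod(phase, 2 * np.pi) * C / (4 * np.pi * f_mod)
    return distance, amplitude
```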
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
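A rough sketch of one way such texture-adaptive sample allocation could work (gradient energy stands in for the paper's wavelet-transform detail energy; block size and sampling budget are assumptions):

```python
import numpy as np

def samples_per_block(gray, block=16, budget=0.25):
    """Score each block by local gradient energy (a crude stand-in for
    wavelet detail energy) and allocate the SLM sampling budget
    proportionally, so textured regions get denser sampling."""
    gy, gx = np.gradient(gray.astype(float))
    energy = gx**2 + gy**2
    h, w = gray.shape
    hb, wb = h // block, w // block
    score = energy[:hb*block, :wb*block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    n_total = budget * h * w
    return np.maximum(1, score / score.sum() * n_total).astype(int)
```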
Astronaut Charles M. Duke Jr. in shadow of Lunar Module behind ultraviolet camera
1972-04-22
AS16-114-18439 (22 April 1972) --- Astronaut Charles M. Duke Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, commander, during the mission's second extravehicular activity (EVA). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the Large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Astronaut Charles M. Duke, Jr., in shadow of Lunar Module behind ultraviolet camera
NASA Technical Reports Server (NTRS)
1972-01-01
Astronaut Charles M. Duke, Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, mission commander, during the mission's second extravehicular activity (EVA-2). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the Large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) 'Orion' to explore the Descartes highlands landing site on the Moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) 'Casper' in lunar orbit.
Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huen, T.
1987-07-01
A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross timing; the latter is achieved with exposure-modulation marking onto the time tick marks. The purpose of using two time scales is discussed. The design is based on a microcomputer, resulting in a compact and easy-to-use instrument. The light source is a small red light emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided-down 10 MHz system frequency. The light is guided by two small 100 micron diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere on the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our laboratory. The microcomputer control section is also being used to provide optical fiducials to mechanical rotor cameras.
Image Intensifier Modules For Use With Commercially Available Solid State Cameras
NASA Astrophysics Data System (ADS)
Murphy, Howard; Tyler, Al; Lake, Donald W.
1989-04-01
A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low cost, high resolution, high sensitivity camera with applications in search and rescue, identification friend or foe (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual band multispectral imaging or high dynamic range imaging, increasing the flexibility in different operational settings.
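A minimal sketch of the homodyne (lock-in) demodulation idea applied to a recorded frame stack, assuming the frame rate resolves the modulation frequency (the actual instrument demodulates optically via the SLM; names are illustrative):

```python
import numpy as np

def lockin_demodulate(frames, f_mod, fps):
    """Correlate each pixel's time series (stack shape (t, y, x)) with
    quadrature references at the source modulation frequency; unmodulated
    background light averages out."""
    t = np.arange(frames.shape[0]) / fps
    ref_cos = np.cos(2 * np.pi * f_mod * t)[:, None, None]
    ref_sin = np.sin(2 * np.pi * f_mod * t)[:, None, None]
    i_c = (frames * ref_cos).mean(axis=0)
    i_s = (frames * ref_sin).mean(axis=0)
    return 2 * np.hypot(i_c, i_s), np.arctan2(i_s, i_c)   # amplitude, phase
```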
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens
NASA Astrophysics Data System (ADS)
Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon
2014-11-01
The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements, where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras: the PEM modulates at tens of kHz, and to capture the polarization information carried on the PEM's modulation frequency the camera needs to be at least ten times faster, whereas the typical frame rates of common cameras are only in the tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility of the high frequency of the PEM with the relatively slow frame rate of a camera. We trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens: human skin tissue containing collagen, and a slice of mouse brain containing bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope in both research communities and industrial settings are being tested.
The application of laser triangulation method on the blind guidance
NASA Astrophysics Data System (ADS)
Wu, Jih-Huah; Wang, Jinn-Der; Fang, Wei; Shan, Yi-Chia; Ma, Shih-Hsin; Kao, Hai-Ko; Jiang, Joe-Air; Lee, Yun-Parn
2011-08-01
A new apparatus for blind guidance is proposed in this paper. The optical triangulation method is used to realize the system. The main components comprise a notebook computer, a camera and two laser modules. One laser module emits a light line beam on the vertical axis; the other emits a light line beam on a tilted horizontal axis. The track of the light line beam on the ground or on the object is captured by the camera, and the image is sent to the notebook computer for calculation. The system can calculate the object width and the distance between the object and the blind user from the positions of the light lines in the image. Based on the experiments, the distance between the test object and the user can be measured with a standard deviation of less than 3% within the range of 60 to 150 cm, and the test object width with a standard deviation of less than 1% within the same range. To save power, the laser modules are switched on and off with a trigger pulse, and to reduce the computational load the two laser modules are switched on alternately. In addition, a band-pass filter is used to reject all light except the specific laser wavelength, which increases the signal-to-noise ratio.
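A sketch of the underlying triangulation geometry, under the simplifying assumption of a pinhole camera on the optical axis with the laser mounted a known baseline above it and tilted down toward the axis; all parameters are illustrative:

```python
import numpy as np

def distance_from_line_row(pixel_row, cy, f_pix, baseline_m, tilt_rad):
    """Intersect the camera ray (elevation recovered from the imaged row
    of the laser line) with the known tilted laser ray to get range:
    z * tan(ray) = baseline - z * tan(tilt)  =>  solve for z."""
    ray = np.arctan2(pixel_row - cy, f_pix)          # camera-ray elevation
    return baseline_m / (np.tan(tilt_rad) + np.tan(ray))
```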
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacement fields can be measured using cameras; with high-speed cameras, full-field spectral information up to a few kHz can be measured. The trouble is that high-speed cameras capable of recording high-resolution fields of view at high frame rates are very expensive (from tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLRs, mirrorless cameras and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking modulates the intensity changes of the filmed scene, and the camera-image acquisition integrates over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
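The principle can be checked numerically (an illustration, not the authors' code): a long exposure under a source blinking as 1 + cos(2πft) integrates to the displacement's Fourier cosine coefficient at f, because the unmodulated term averages out over whole periods:

```python
import numpy as np

f = 120.0                                       # blinking frequency, Hz (assumed)
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)   # one 1 s exposure
disp = 2e-3 * np.sin(2 * np.pi * f * t + 0.7)   # vibrating structure, 2 mm
light = 1.0 + np.cos(2 * np.pi * f * t)         # harmonically blinking lights
frame = np.mean(disp * light)                   # what the slow camera records
coeff = np.mean(disp * np.cos(2 * np.pi * f * t))   # Fourier coefficient at f
print(frame, coeff)                             # both ~6.4e-4: they agree
```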
LCD-based digital eyeglass for modulating spatial-angular information.
Bian, Zichao; Liao, Jun; Guo, Kaikai; Heng, Xin; Zheng, Guoan
2015-05-04
Using a programmable aperture to modulate the spatial-angular information of a light field is well known in computational photography and microscopy. Inspired by this concept, we report a digital eyeglass design that adaptively modulates the light field entering human eyes. The main hardware includes a transparent liquid crystal display (LCD) and a mini-camera. The device analyzes the spatial-angular information of the camera image in real time and subsequently sends a command to form a certain pattern on the LCD. We show that the eyeglass prototype can adaptively reduce light transmission from bright sources by ~80% while retaining transparency to other, dimmer objects. One application of the reported device is to reduce discomforting glare caused by vehicle headlamps. To this end, we report the preliminary result of using the device in a road test. The reported device may also find applications in military operations (sniper scopes), laser countermeasures, STEM education, and enhancing visual contrast for visually impaired patients and elderly people with low vision.
Electronic recording of holograms with applications to holographic displays
NASA Technical Reports Server (NTRS)
Claspy, P. C.; Merat, F. L.
1979-01-01
The paper describes an electronic heterodyne recording technique which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference waves. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase-modulated wavefront rendering a three-dimensional image.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 um, with the dispersed light being imaged in each channel by a f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window, and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
Modulated CMOS camera for fluorescence lifetime microscopy.
Chen, Hongtao; Holst, Gerhard; Gratton, Enrico
2015-12-01
Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in constructing such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large frames and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
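For orientation, a sketch of the phasor computation applied to such a stack of modulated images, assuming uniformly spaced phase steps over one modulation period; this mirrors the standard FD-FLIM phasor relations rather than SimFCS internals:

```python
import numpy as np

def phasor(frames, phases):
    """Per-pixel phasor coordinates (g, s) from images recorded at known,
    uniformly spaced phase steps of the modulation (stack shape (n, y, x))."""
    ph = phases[:, None, None]
    dc = frames.mean(axis=0)
    g = 2 * (frames * np.cos(ph)).mean(axis=0) / dc
    s = 2 * (frames * np.sin(ph)).mean(axis=0) / dc
    return g, s

def tau_phase(g, s, f_mod):
    """Single-exponential lifetime from the phasor angle: tan(phi)/omega."""
    return (s / g) / (2 * np.pi * f_mod)
```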
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual beam interferometer device is disclosed that enables moving an optics module in a direction, which changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary metal-oxide-semiconductor (CMOS) camera. Experimental bit-error-rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally reveal that the rolling-shutter-effect (RSE) based VLC system cannot work over long transmission distances, whereas the under-sampled-modulation (USM) based VLC system is a good choice.
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous for its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here, based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a setup composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image-processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, the scanning required by previous three-dimensional imaging systems prohibited their real-time operation. This is easily and economically solved with a GDD array, which enables information on distance and magnitude to be acquired from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation or measuring the time of flight (TOF).
Large holographic 3D display for real-time computer-generated holography
NASA Astrophysics Data System (ADS)
Häussler, R.; Leister, N.; Stolle, H.
2017-06-01
SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to significant reduction of pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays were built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display and that modulates amplitude and phase of light. Further components are based on holographic optical elements for light collimation and focusing which are exposed in photopolymer films. Camera photographs show that only the depth region on which the focus of the camera lens is set is in focus while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display is a solution to overcome the accommodation-convergence conflict that is inherent to stereoscopic 3D displays. The main components, progress and results of the holographic display with 300 mm x 200 mm active area are described. Furthermore, photographs of holographically reconstructed 3D scenes are shown.
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method applying a shaped-function signal to increase the dynamic range of a light emitting diode (LED) multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity camera band is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination is taken as an example. Experiments prove that both the gray-scale resolution and the accuracy of the information in the images acquired by the proposed method are significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
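A minimal sketch of the least-squares extraction step, under the stated model that each frame mixes the shaped-function active light (known per-frame amplitude a_k) with the constant auxiliary light; names are illustrative:

```python
import numpy as np

def separate_images(frames, a):
    """Solve frame_k = a_k * active + 1 * auxiliary per pixel by linear
    least squares; `a` holds the known shaped-function amplitudes and
    `frames` has shape (n, h, w)."""
    n, h, w = frames.shape
    A = np.column_stack([a, np.ones_like(a)])        # (n, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, frames.reshape(n, -1), rcond=None)
    active, auxiliary = (c.reshape(h, w) for c in coef)
    return active, auxiliary
```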
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
Electronic heterodyne recording of interference patterns
NASA Technical Reports Server (NTRS)
Merat, F. L.; Claspy, P. C.
1979-01-01
An electronic heterodyne technique is being investigated for video (i.e., television rate and format) recording of interference patterns. In the heterodyne technique electro-optic modulation is used to introduce a sinusoidal phase shift between the beams of an interferometer. For phase modulation frequencies between 0.1 and 15 MHz an image dissector camera may be used to scan the resulting temporally modulated interference pattern. Heterodyne detection of the camera output is used to selectively record the interference pattern. An advantage of such synchronous recording is that it permits recording of low-contrast fringes in high ambient light conditions. The application of this technique to the recording of holograms is discussed.
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen
2015-09-21
Non-intrusive fast 3d measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique to add the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired with a measurement rate of 0.5 kHz with a single high-speed camera.
Project Physics Handbook 4, Light and Electromagnetism.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Seven experiments and 40 activities are presented in this handbook. The experiments are related to Young's experiment, electric forces, forces on currents, electron-beam tubes, and wave modulation and communication. The activities are primarily concerned with aspects of scattered and polarized light, colors, image formation, lenses, cameras,…
NASA Astrophysics Data System (ADS)
Whyte, Refael; Streeter, Lee; Cree, Michael J.; Dorrington, Adrian A.
2015-11-01
Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous-wave light source and measure the returning modulation envelope's phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return is of spatially lower frequency than the illumination pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas for the full-field measurement it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time-of-flight range camera technique.
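The spatial part of the separation follows familiar three-phase pattern demodulation; a sketch under the assumption that the projected pattern is (1 + cos)/2, so the low-spatial-frequency global return sees its spatial average of one half. The paper's full nine-frame solution additionally interleaves the ToF modulation phases:

```python
import numpy as np

def direct_global(i0, i120, i240):
    """Per-pixel offset O = D/2 + G/2 and AC amplitude A = D/2 from three
    pattern phase shifts give the direct (D) and global (G) returns in
    closed form."""
    mean = (i0 + i120 + i240) / 3.0
    amp = np.sqrt(2.0 / 3.0) * np.sqrt((i0 - mean)**2 + (i120 - mean)**2
                                       + (i240 - mean)**2)
    direct = 2.0 * amp
    global_ = 2.0 * mean - direct
    return direct, global_
```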
Scannerless laser range imaging using loss modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandusky, John V
2011-08-09
A scannerless 3-D imaging apparatus is disclosed which utilizes an amplitude modulated cw light source to illuminate a field of view containing a target of interest. Backscattered light from the target is passed through one or more loss modulators which are modulated at the same frequency as the light source, but with a phase delay .delta. which can be fixed or variable. The backscattered light is demodulated by the loss modulator and detected with a CCD, CMOS or focal plane array (FPA) detector to construct a 3-D image of the target. The scannerless 3-D imaging apparatus, which can operate in the eye-safe wavelength region 1.4-1.7 .mu.m and which can be constructed as a flash LADAR, has applications for vehicle collision avoidance, autonomous rendezvous and docking, robotic vision, industrial inspection and measurement, 3-D cameras, and facial recognition.
A multi-modal stereo microscope based on a spatial light modulator.
Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J
2013-07-15
Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.
NASA Astrophysics Data System (ADS)
Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel
2018-02-01
In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary DMD positioned in an image plane of the detection optical path acts as a dynamic array of reflective confocal pinholes, imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.
Ultrafast Imaging using Spectral Resonance Modulation
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2016-04-01
CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but many dynamic processes in nature unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed of a CCD camera is limited by the electronic readout rate of the sensors. One potential way to improve the imaging speed is compressive sensing (CS), a technique that allows a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon-array-based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential for an ultrafast compressive single-pixel camera.
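A toy single-pixel reconstruction for context (plain least squares with known modulation masks; a real compressive pipeline would add a sparsity prior so that far fewer measurements suffice):

```python
import numpy as np

def single_pixel_reconstruct(measurements, masks):
    """Recover an image from bucket-detector readings y_k = <mask_k, x>,
    where masks has shape (n, h, w) and measurements has shape (n,)."""
    n, h, w = masks.shape
    A = masks.reshape(n, -1)
    x, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return x.reshape(h, w)
```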
Shaul, Oren; Fanrazi-Kahana, Michal; Meitav, Omri; Pinhasi, Gad A; Abookasis, David
2017-11-10
Heat stress (HS) is a medical emergency defined by abnormally elevated body temperature that causes biochemical, physiological, and hematological changes. The goal of the present research was to detect variations in optical properties (absorption, reduced scattering, and refractive index coefficients) of mouse brain tissue during HS by using near-infrared (NIR) spatial light modulation. NIR spatial patterns with different spatial phases were used to differentiate the effects of tissue scattering from those of absorption. Decoupling optical scattering from absorption enabled the quantification of a tissue's chemical constituents (related to light absorption) and structural properties (related to light scattering). Technically, structured light patterns at low and high spatial frequencies of six wavelengths ranging between 690 and 970 nm were projected onto the mouse scalp surface while diffuse reflected light was recorded by a CCD camera positioned perpendicular to the mouse scalp. Concurrently to pattern projection, brain temperature was measured with a thermal camera positioned slightly off angle from the mouse head while core body temperature was monitored by thermocouple probe. Data analysis demonstrated variations from baseline measurements in a battery of intrinsic brain properties following HS.
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near the saturated regions because of glare. This work develops a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed sub-region by sub-region. Under DSP control, the light intensity is reduced to a minimum in the glare regions, while in the other regions it is regulated by negative feedback based on PID control, so that more details of the object are imaged on the CCD and glare protection is achieved. In experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
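A sketch of the per-region negative-feedback loop described above (an illustrative incremental PID on the LCoS transmission; gains and target are assumptions, not values from the paper):

```python
import numpy as np

class RegionPID:
    """Drive one region's mean CCD level toward a mid-scale target by
    adjusting the LCoS transmission for that region each frame."""
    def __init__(self, kp=0.4, ki=0.1, kd=0.05, target=0.5):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, mean_level, transmission):
        err = self.target - mean_level          # levels normalized to [0, 1]
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        out = transmission + self.kp*err + self.ki*self.integral + self.kd*deriv
        return float(np.clip(out, 0.0, 1.0))    # LCoS transmission in [0, 1]
```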
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the proposal, realization and verification of a newly developed means of measuring the noiseless, locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method uses the camera's automatic still-capture regime and a static, two-dimensional, spatially continuous light-reflecting random target with white-noise properties. The theoretical basis of this random-target method is also presented, exploiting the proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and the other results of the performed measurements demonstrate sufficient repeatability and the acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
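A compact sketch of the core random-target relation, MTF ≈ sqrt(PSD_out / PSD_in), using averaged row-wise 1-D spectra as a stand-in for the paper's full 2-D processing; the smoothing width is an assumption:

```python
import numpy as np

def mtf_random_target(captured, target, smooth=9):
    """Estimate MTF as the square root of the ratio of output to input
    power spectral densities of a white-noise random target."""
    psd_out = (np.abs(np.fft.rfft(captured, axis=1))**2).mean(axis=0)
    psd_in = (np.abs(np.fft.rfft(target, axis=1))**2).mean(axis=0)
    mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
    mtf = np.convolve(mtf, np.ones(smooth) / smooth, mode='same')
    return mtf / mtf[smooth]     # normalize near DC, clear of edge effects
```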
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.
Camera Development for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Moncada, Roberto Jose
2017-01-01
With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.
The GCT camera for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium
2017-12-01
The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. In conjunction with a high-speed digital camera, this should allow a high-speed optical encryption system to be built. Results of modeling of a digital information optical encryption system with spatially incoherent illumination are presented. Input information is displayed on the first SLM and the encryption element on the second SLM. The factors taken into account are the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate, and high cryptographic strength.
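To make the modeled encryption chain concrete, here is a minimal numpy sketch assuming the spatially incoherent system acts as a linear-in-intensity convolution of the input frame with the key's intensity point spread function, followed by additive camera noise and 8-bit sampling; the PSF, noise level, and quantization depth are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def encrypt_incoherent(input_img, key_psf, noise_sigma=0.01, rng=None):
    """Simulate spatially incoherent optical encryption as a convolution of
    the input intensity image with the key's intensity PSF (hypothetical
    model; the paper's exact system parameters are not reproduced here)."""
    rng = np.random.default_rng() if rng is None else rng
    # Incoherent systems are linear in intensity: output = input (*) |h|^2
    shape = [2 * n for n in input_img.shape]       # zero-pad to avoid wrap-around
    F = np.fft.fft2(input_img, s=shape)
    H = np.fft.fft2(key_psf, s=shape)
    encrypted = np.real(np.fft.ifft2(F * H))
    # Camera noise and 8-bit sampling, per the abstract's listed factors
    encrypted += rng.normal(0.0, noise_sigma * encrypted.max(), encrypted.shape)
    encrypted = np.clip(encrypted / encrypted.max(), 0.0, 1.0)
    return np.round(encrypted * 255) / 255.0
```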
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only obtain a gray-scale image of an object, whereas multicolor imaging can obtain the color information needed to distinguish sample structures that have the same shapes but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, etc. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency-division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then transformed back with an inverse discrete Fourier transform. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence dynamic video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, the method effectively filters out ambient light and other light signals that are not affected by the modulation process.
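As a concrete illustration of the per-pixel demodulation step, the following sketch recovers each color's modulation amplitude from one pixel's time series; the carrier frequencies, sampling rate, and single-bin read-out are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def demodulate_pixel(signal, fs, carriers):
    """Recover each color's contribution from one camera pixel's time
    series sampled at fs Hz. Each excitation light is amplitude-modulated
    at its own carrier frequency (values here are illustrative)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    amplitudes = []
    for fc in carriers:
        k = np.argmin(np.abs(freqs - fc))      # nearest DFT bin to carrier
        # amplitude of a cosine at fc in a real signal is 2|X[k]|/n
        amplitudes.append(2.0 * np.abs(spectrum[k]) / n)
    return amplitudes
```

Applying this to every pixel of the recorded stack yields one monochrome image per excitation wavelength, which together form the multicolor image.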
N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao
2017-05-31
Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase retrieval enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus are determined by the number of pixels controlled by the spatial light modulator.
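Once the transmission matrix is in hand, the focusing step reduces to phase conjugation of one of its rows; the sketch below assumes a complex matrix T (camera pixels × SLM pixels) has already been recovered by the SDP algorithm, and the function names are illustrative.

```python
import numpy as np

def focusing_wavefront(T, target_row):
    """Return the phase-only SLM pattern that focuses light onto one output
    pixel: the conjugate of that row's phases, so all SLM modes arrive at
    the target in phase (sketch; T is assumed already measured)."""
    phases = -np.angle(T[target_row, :])     # phase-conjugate each input mode
    return np.exp(1j * phases)

def predicted_output(T, slm_field):
    """Predicted intensity at every output pixel for a given SLM field."""
    return np.abs(T @ slm_field) ** 2
```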
Singh, M Suheshkumar; Yalavarthy, Phaneendra K; Vasu, R M; Rajan, K
2010-07-01
To assess the effect of ultrasound modulation of near-infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. A unique method to estimate the phase of the modulated NIR light using only time-averaged intensity measurements from a charge-coupled-device camera is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to estimation of the optical scattering coefficient. A Monte-Carlo-model-based numerical estimation of the phase under ultrasound modulation is performed to verify the experimental results. The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed effective scattering coefficient enhancement in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue, mimicking a tumor necrotic core, compared to normal tissue. The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by the NIR light, increasing the measured phase and causing the enhancement of the effective scattering coefficient. The ultrasound modulation of NIR light could thus provide better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient in the ultrasound focal region is validated using both experimental measurements and Monte-Carlo simulations.
NASA Astrophysics Data System (ADS)
Rohrbach, Scott O.; Irvin, Ryan G.; Seals, Lenward T.; Skelton, Dennis L.
2016-09-01
This paper describes an integrated stray light model of each Science Instrument (SI) in the Integrated Science Instrument Module (ISIM) of the James Webb Space Telescope (JWST) and of the Optical Telescope Element Simulator (OSIM), the light source used to characterize the performance of ISIM in cryogenic-vacuum tests at the Goddard Space Flight Center (GSFC). We present three cases where this stray light model was integral to resolving questions that arose during the testing campaign: 1) ghosting and coherent diffraction from hardware surfaces in the Near Infrared Imager and Slitless Spectrograph (NIRISS) GR700XD grism mode, 2) ghost spots in the Near Infrared Camera (NIRCam) GRISM modes, and 3) scattering from knife edges of the NIRCam focal plane array masks.
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, respectively devoted to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectroradiometer.
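A minimal sketch of how the two modules could combine at run time, assuming a linear model in which the RAW triplet is normalized by the exposure t/N² and mapped through the 3×3 characterization matrix; the function name, the exposure-normalization form, and the absolute scale factor k are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def raw_to_xyz(rgb, M, t, N, k):
    """Map a camera RAW RGB triplet to absolute XYZ (cd/m^2). M is the 3x3
    colorimetric characterization matrix; k is the absolute-luminance scale
    from the luminance-estimation module. Normalizing by the exposure t/N^2
    is what lets the user change shutter time t or f-number N to exploit
    the sensor's dynamic range (illustrative form)."""
    rgb = np.asarray(rgb, dtype=float)
    exposure = t / (N ** 2)          # relative amount of light captured
    return k * (M @ rgb) / exposure
```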
Development of infrared scene projectors for testing fire-fighter cameras
NASA Astrophysics Data System (ADS)
Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.
2008-04-01
We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600, aluminum-coated mirrors on a 17 micrometer pitch, and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4 watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of being able to more easily fill the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection more efficiently utilizes the available light, which will become important in emerging multispectral and hyperspectral applications.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Study of plant phototropic responses to different LEDs illumination in microgravity
NASA Astrophysics Data System (ADS)
Zyablova, Natalya; Berkovich, Yuliy A.; Skripnikov, Alexander; Nikitin, Vladimir
2012-07-01
The purpose of the experiment planned for the Russian BION-M #1 (2012) biosatellite is research on Physcomitrella patens (Hedw.) B.S.G. phototropic responses to different light stimuli in microgravity. The moss was chosen as a small-size higher plant. The experimental design involves five lightproof culture flasks with moss gametophores fixed inside a cylindrical container (diameter 120 mm; height 240 mm). The plants in each flask are illuminated laterally by one of the following LEDs: white, blue (475 nm), red (625 nm), far red (730 nm), or infrared (950 nm). The gametophore growth and bending are captured periodically by means of five analogue video cameras and a recorder. The programmable command module controls the power supply of each camera and each light source, the commutation of the cameras, and the functioning of the video recorder. Every 20 minutes the recorder is sequentially connected to one of the cameras. This results in a clip containing 5 sets of frames in a row. After landing, time-lapse films are automatically created. As a result we will have five time-lapse films covering transformations in each of the five culture flasks. On-ground experiments demonstrated that white light induced stronger gametophore phototropic bending compared to red and blue stimuli. The comparison of time-lapse recordings in the experiments will provide useful information to optimize lighting assemblies for space plant growth facilities.
Design of an Optical System for Phase Retrieval based on a Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Falldorf, Claas; Agour, Mostafa; von Kopylow, Christoph; Bergmann, Ralf B.
2010-04-01
We present an optical configuration for phase retrieval from a sequence of intensity measurements. The setup is based on a 4f-configuration with a phase-modulating spatial light modulator (SLM) located in the Fourier domain. The SLM is used to modulate the incoming light with the transfer function of propagation, so that a sequence of propagated representations of the subjected wave field can be captured across a common sensor plane. The main advantage of this technique is the greatly reduced measurement time, since no mechanical adjustment of the camera sensor is required throughout the measurement process. The treatment focuses on the analysis of the wave field in the sensor domain. From the discussion, a set of parameters is derived in order to minimize disturbing effects arising from the discrete nature of the SLM. Finally, the considerable potential of this approach is demonstrated by means of experimental investigations with regard to wave field sensing.
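For reference, the angular-spectrum transfer function of propagation that such an SLM would display can be computed as below; the grid size, pixel pitch, and evanescent-wave cut-off handling are assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def propagation_tf(shape, pitch, wavelength, z):
    """Angular-spectrum transfer function of free-space propagation over a
    distance z, suitable for display on a phase-only SLM in the Fourier
    plane of a 4f setup (sketch; SLM pitch and geometry are assumed)."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * z * kz)
    H[arg < 0] = 0.0            # suppress evanescent components
    return H
```

Each choice of z yields one propagated intensity pattern at the fixed sensor, which is exactly the measurement sequence the phase retrieval uses.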
System for fuel rod removal from a reactor module
Matchett, R.L.; Fodor, G.; Kikta, T.J.; Bacvinskas, W.S.; Roof, D.R.; Nilsen, R.J.; Wilczynski, R.
1988-07-28
A robotic system for remote underwater withdrawal of the fuel rods from fuel modules of a light water breeder reactor includes a collet/grapple assembly for gripping and removing fuel rods in each module, which is positioned by use of a winch and a radial support means attached to a vertical support tube which is mounted over the fuel module. A programmable logic controller, in conjunction with a microcomputer, provides control of the accurate positioning and pulling force of the rod grapple assembly. Closed circuit television cameras are provided which aid in operator interface with the robotic system. 7 figs.
System for fuel rod removal from a reactor module
Matchett, Richard L.; Roof, David R.; Kikta, Thomas J.; Wilczynski, Rosemarie; Nilsen, Roy J.; Bacvinskas, William S.; Fodor, George
1990-01-01
A robotic system for remote underwater withdrawal of the fuel rods from fuel modules of a light water breeder reactor includes a collet/grapple assembly for gripping and removing fuel rods in each module, which is positioned by use of a winch and a radial support means attached to a vertical support tube which is mounted over the fuel module. A programmable logic controller, in conjunction with a microcomputer, provides control of the accurate positioning and pulling force of the rod grapple assembly. Closed circuit television cameras are provided which aid in operator interface with the robotic system.
NASA Astrophysics Data System (ADS)
Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.
2013-05-01
The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
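Of the reconstruction algorithms listed, the Center-of-Gravity estimator is simple enough to state in a few lines; this sketch assumes baseline-subtracted sensor amplitudes and known sensor positions, and omits the thresholds and gain corrections a real analysis would apply.

```python
import numpy as np

def center_of_gravity(signals, positions):
    """Center-of-gravity event reconstruction from PMT/SiPM amplitudes:
    position = amplitude-weighted mean of the sensor positions, and
    energy = sum of amplitudes (one of the algorithms ANTS offers)."""
    signals = np.asarray(signals, dtype=float)
    positions = np.asarray(positions, dtype=float)   # shape (n_sensors, 2)
    energy = signals.sum()
    xy = (signals[:, None] * positions).sum(axis=0) / energy
    return xy, energy
```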
NASA Astrophysics Data System (ADS)
Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador
2012-01-01
A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality, a trade-off governed by the amount of coding used. By using spatial light modulators, a flexible coding is achieved which can be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.
Program summary:
Program title: DynWFC (Dynamic WaveFront Coding)
Catalogue identifier: AEKC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 483
No. of bytes in distributed program, including test data, etc.: 2 437 713
Distribution format: tar.gz
Programming language: LabVIEW 8.5 with NI Vision, and MinGW C Compiler
Computer: Tested on PC Intel Pentium
Operating system: Tested on Windows XP
Classification: 18
Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility, given the trade-off between depth-of-field extension and image quality. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.
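As a rough illustration of an adjustable coding element of the kind such a system could display (the paper does not specify its mask family here), the classic cubic phase mask with strength parameter alpha can be generated as follows.

```python
import numpy as np

def cubic_phase_mask(n, alpha):
    """Cubic phase mask phi(x, y) = alpha * (x^3 + y^3) over a normalized
    pupil, the canonical wavefront-coding element; alpha sets the coding
    strength traded between depth-of-field extension and image quality
    (sketch; not necessarily the program's actual mask family)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phase = alpha * (X ** 3 + Y ** 3)
    return np.exp(1j * phase)    # complex pupil function for the SLM
```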
NASA Astrophysics Data System (ADS)
Strömberg, Tomas; Saager, Rolf B.; Kennedy, Gordon T.; Fredriksson, Ingemar; Salerud, Göran; Durkin, Anthony J.; Larsson, Marcus
2018-02-01
Spatial frequency domain imaging (SFDI) utilizes a digital light processing (DLP) projector for illuminating turbid media with sinusoidal patterns. The tissue absorption (μa) and reduced scattering coefficient (μs′) are calculated by analyzing the modulation transfer function for at least two spatial frequencies. We evaluated different illumination strategies with red, green and blue light-emitting diodes (LEDs) in the DLP, while imaging with a filter-mosaic camera, XiSpec, with 16 different multi-wavelength-sensitive pixels in the 470-630 nm wavelength range. Data were compared to SFDI by a multispectral camera setup (MSI) consisting of four cameras with bandpass filters centered at 475, 560, 580 and 650 nm. A pointwise system for comprehensive microcirculation analysis (EPOS) was used for comparison. A 5-min arterial occlusion and release protocol on the forearm of a Caucasian male with fair skin was analyzed by fitting the absorption spectra of the chromophores HbO2, Hb and melanin to the estimated μa. The tissue fractions of red blood cells (fRBC) and melanin (fmel) and the Hb oxygenation (SO2) were calculated at baseline, end of occlusion, early after release and late after release. EPOS results showed a decrease in SO2 during the occlusion and hyperemia during release (SO2 = 40%, 5%, 80% and 51%). The fRBC showed an increase during occlusion and release phases. The best MSI resemblance to the EPOS was for green LED illumination (SO2 = 53%, 9%, 82%, 65%). Several illumination and analysis strategies using the XiSpec gave un-physiological results (e.g. negative SO2). XiSpec with green LED illumination gave the expected change in fRBC, while the dynamics in SO2 were less than those for EPOS. These results may be explained by the calculation of modulation using an illumination and detector setup with a broad spectral transmission bandwidth, with considerable variation in μa of the included chromophores. Approaches for either reducing the effective bandwidth of the XiSpec filters or including their characteristics in a light transport model for SFDI modulation are proposed.
NASA Technical Reports Server (NTRS)
Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.
2011-01-01
We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site
Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří
2017-01-01
We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75°, and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466
UAV Photogrammetric Solution Using a Raspberry Pi Camera Module and Smart Devices: Test and Results
NASA Astrophysics Data System (ADS)
Piras, M.; Grasso, N.; Jabbar, A. Abdul
2017-08-01
Nowadays, smart technologies are an important part of our activities and life, both in indoor and outdoor environments. Several smart devices are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi supports an internal camera called the Raspberry Pi Camera Module, in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and embedding. This paper describes research in which a Raspberry Pi with the Camera Module was installed onto a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. First, the system was tested to verify the performance of the RPi camera in terms of frames per second/resolution and its power requirements. Moreover, a GNSS receiver (Ublox M8T) was installed and connected to the Raspberry platform in order to collect real-time positions and raw data, for data processing and to define the time reference. The IMU was also tested to see the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope and magnetometer. A comparison of the achieved accuracy on some check points of the point clouds obtained by the camera is reported as well, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the dataset acquired and the results obtained are analysed.
NASA Astrophysics Data System (ADS)
Koudelka, Petr; Hanulak, Patrik; Jaros, Jakub; Papes, Martin; Latal, Jan; Siska, Petr; Vasinek, Vladimir
2015-07-01
This paper discusses the implementation of a light-emitting-diode-based visible light communication system for optical vehicle-to-vehicle (V2V) communications in road safety applications. The widespread use of LEDs as light sources has reached into automotive fields; for example, LEDs are used for taillights, daytime running lights, brake lights, headlights, and traffic signals. Future optical vehicle-to-vehicle (V2V) communications will be based on an optical wireless communication technology that uses an LED transmitter and a camera receiver (OCI; optical communication image sensor). Utilizing optical V2V communication systems in the automotive industry naturally brings many problems. Among them is the necessity of integrating circuits that allow LED modulation into current electronic LED light control concepts; these circuits are quite complicated, especially in luxury cars. Another problem is the correct design of the modulation circuits, so that final vehicle lighting using optical vehicle-to-vehicle (V2V) communication meets the standard requirements on photometric quantities and beam homogeneity. The authors of this article investigated the optical vehicle-to-vehicle (V2V) communication possibilities of a headlight (Jaguar) and a taillight (Skoda) in terms of implementing modulation circuits (M-PSK, M-QAM) into the lamp concepts and of final fulfilment of the mandatory standards on photometric quantities and beam homogeneity.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation factor of 200 on the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor-provided image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA-mode using up to 4096-bit-length Walsh-design CAOS pixel codes with a maximum 10 kHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
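A toy model of the CDMA-mode acquisition and correlation decoding, assuming ideal on/off micromirror modulation with Walsh (Hadamard-row) codes and a noise-free point detector; the pixel count and code length are illustrative, and scipy's hadamard helper stands in for the camera's code generator.

```python
import numpy as np
from scipy.linalg import hadamard

def caos_cdma_demo(pixel_irradiances, n_code_bits=64):
    """Toy CDMA-mode acquisition: each CAOS pixel is time-modulated by its
    own Walsh (Hadamard-row) code, a single point detector records the sum,
    and correlation decoding recovers every pixel's irradiance."""
    p = np.asarray(pixel_irradiances, dtype=float)
    assert len(p) < n_code_bits, "need more code bits than pixels"
    H = hadamard(n_code_bits)[1:len(p) + 1]   # skip the all-ones row
    codes = (H + 1) // 2                      # micromirrors give 0/1 on-off
    detector = codes.T @ p                    # point-detector time series
    # Bipolar Walsh rows are zero-mean and mutually orthogonal, so
    # correlating the detector signal with each row isolates one pixel.
    return (H @ detector) * (2.0 / n_code_bits)
```

Because the codes are orthogonal, the correlation step rejects the contributions of all other pixels exactly in this noise-free model.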
Differential high-speed digital micromirror device based fluorescence speckle confocal microscopy.
Jiang, Shihong; Walker, John
2010-01-20
We report a differential fluorescence speckle confocal microscope that acquires an image in a fraction of a second by exploiting the very high frame rate of modern digital micromirror devices (DMDs). The DMD projects a sequence of predefined binary speckle patterns to the sample and modulates the intensity of the returning fluorescent light simultaneously. The fluorescent light reflecting from the DMD's "on" and "off" pixels is modulated by correlated speckle and anticorrelated speckle, respectively, to form two images on two CCD cameras in parallel. The sum of the two images recovers a widefield image, but their difference gives a near-confocal image in real time. Experimental results for both low and high numerical apertures are shown.
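The final combination step is simple enough to show directly; this sketch assumes the two CCD images are already registered and gain-matched, which in practice requires calibration.

```python
import numpy as np

def differential_confocal(img_corr, img_anticorr):
    """Combine the two CCD images formed via the DMD's 'on' (correlated
    speckle) and 'off' (anticorrelated speckle) paths: their sum recovers
    the widefield image, their difference the near-confocal image."""
    widefield = img_corr + img_anticorr
    confocal = np.clip(img_corr - img_anticorr, 0, None)  # keep non-negative
    return widefield, confocal
```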
A study on a portable fluorescence imaging system
NASA Astrophysics Data System (ADS)
Chang, Han-Chao; Wu, Wen-Hong; Chang, Chun-Li; Huang, Kuo-Cheng; Chang, Chung-Hsing; Chiu, Shang-Chen
2011-09-01
The fluorescent reaction is that an organism or dye, excited by UV light (200-405 nm), emits a specific frequency of light; the light is usually a visible or near infrared light (405-900 nm). During the UV light irradiation, the photosensitive agent will be induced to start the photochemical reaction. In addition, the fluorescence image can be used for fluorescence diagnosis and then photodynamic therapy can be given to dental diseases and skin cancer, which has become a useful tool to provide scientific evidence in many biomedical researches. However, most of the methods on acquiring fluorescence biology traces are still stay in primitive stage, catching by naked eyes and researcher's subjective judgment. This article presents a portable camera to obtain the fluorescence image and to make up a deficit from observer competence and subjective judgment. Furthermore, the portable camera offers the 375nm UV-LED exciting light source for user to record fluorescence image and makes the recorded image become persuasive scientific evidence. In addition, when the raising the rate between signal and noise, the signal processing module will not only amplify the fluorescence signal up to 70 %, but also decrease the noise significantly from environmental light on bill and nude mouse testing.
Novel, full 3D scintillation dosimetry using a static plenoptic camera.
Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis
2014-08-01
Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages limiting their use for quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow for the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm³ EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical consideration of the dose behavior as a function of depth in the phantom. The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle³ was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidence. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT delivery, respectively. Using plenoptic camera technology, the authors were able to perform millimeter resolution, water-equivalent dosimetry of an IMRT and VMAT plan over a whole 3D volume. Since no moving parts are required in the dosimeter, the incident dose distribution can be acquired as a function of time, thus enabling the validation of static and dynamic radiation delivery with photons, electrons, and heavier ions.
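The reconstruction stage can be sketched generically as an iterative back-projection solver for the linear system relating the emission volume to the camera data; this SIRT-style update is a stand-in under that assumption and does not model the paper's beam's-eye-view or depth-dose constraints.

```python
import numpy as np

def sirt_reconstruct(A, y, n_iters=50):
    """Generic iterative back-projection (SIRT-style) solver for y = A x,
    where y stacks the light collected by the camera pixels and x is the
    flattened 3D emission volume. A is a stand-in for the plenoptic
    system matrix (hypothetical; the paper's constraints are omitted)."""
    x = np.zeros(A.shape[1])
    row_sums = np.maximum(A.sum(axis=1), 1e-12)  # forward normalization
    col_sums = np.maximum(A.sum(axis=0), 1e-12)  # back-projection normalization
    for _ in range(n_iters):
        residual = (y - A @ x) / row_sums        # normalized data mismatch
        x += (A.T @ residual) / col_sums         # back-project the residual
        x = np.maximum(x, 0.0)                   # emitted light is non-negative
    return x
```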
Novel, full 3D scintillation dosimetry using a static plenoptic camera
Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis
2014-01-01
Purpose: Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages limiting their use for quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow for the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Methods: A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm³ EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical consideration of the dose behavior as a function of depth in the phantom. Results: The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle³ was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidence. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT delivery, respectively. Conclusions: Using plenoptic camera technology, the authors were able to perform millimeter resolution, water-equivalent dosimetry of an IMRT and VMAT plan over a whole 3D volume. Since no moving parts are required in the dosimeter, the incident dose distribution can be acquired as a function of time, thus enabling the validation of static and dynamic radiation delivery with photons, electrons, and heavier ions. PMID:25086549
Pham, Quang Duc; Hayasaki, Yoshio
2015-01-01
We demonstrate an optical frequency comb profilometer with a single-pixel camera to measure the position and profile of an object's surface over a range extending far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was therefore increased to 0.87×10⁵. It was found from the experiments and computer simulations that the improvement derived from the higher modulation contrast of digital micromirror devices. The frame rate was also increased to 20 Hz.
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
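In the same spirit (though not the USGS scripts themselves), a minimal time-lapse loop on a Raspberry Pi might look like the following, assuming the legacy picamera API and a local storage path; the interval, resolution, and path are illustrative, and the GPS time-sync step is omitted.

```python
#!/usr/bin/env python
"""Minimal time-lapse sketch in the spirit of the system described: capture
a still every INTERVAL_S seconds, naming each file with a UTC timestamp.
Assumes the legacy `picamera` API on a Raspberry Pi (hypothetical setup)."""
import time
from datetime import datetime, timezone
from picamera import PiCamera

INTERVAL_S = 300                              # five minutes between frames

camera = PiCamera(resolution=(2592, 1944))    # 5-megapixel still resolution
camera.start_preview()
time.sleep(2)                                 # let gain/white balance settle
try:
    while True:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
        camera.capture(f"/data/images/{stamp}.jpg")
        time.sleep(INTERVAL_S)
finally:
    camera.close()
```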
Reflective correctors for the Hubble Space Telescope axial instruments
NASA Technical Reports Server (NTRS)
Bottema, Murk
1993-01-01
Reflective correctors to compensate the spherical aberration in the Hubble Space Telescope are placed in front of three of the axial scientific instruments (a camera and two spectrographs) during the first scheduled refurbishment mission. The five correctors required are deployed from a new module that replaces the fourth axial instrument. Each corrector consists of a field mirror and an aspherical, aberration-correcting reimaging mirror. In the camera the angular resolution capability is restored, albeit in reduced fields, and in the spectrographs the potential for observations in crowded areas is regained along with effective light collection at the slits.
Structured light stereo catadioptric scanner based on a spherical mirror
NASA Astrophysics Data System (ADS)
Barone, S.; Neri, P.; Paoli, A.; Razionale, A. V.
2018-08-01
The present paper describes the development and characterization of a structured light stereo catadioptric scanner for the omnidirectional reconstruction of internal surfaces. The proposed approach integrates two digital cameras, a multimedia projector and a spherical mirror, which is used to project the structured light patterns generated by the light emitter and, at the same time, to reflect into the cameras the modulated fringe patterns diffused from the target surface. The adopted optical setup defines a non-central catadioptric system, thus relaxing any geometrical constraint on the relative placement of the optical devices. An analytical solution for the reflection on a spherical surface is proposed with the aim of modelling forward and backward projection tasks for a non-central catadioptric setup. The feasibility of the proposed active catadioptric scanner has been verified by reconstructing various target surfaces. Results demonstrated a great influence of the target surface distance from the mirror's centre on the measurement accuracy. The adopted optical configuration allows the definition of a metrological 3D scanner for surfaces disposed within 120 mm from the mirror centre.
Two-dimensional angular transmission characterization of CPV modules.
Herrero, R; Domínguez, C; Askins, S; Antón, I; Sala, G
2010-11-08
This paper proposes a fast method to characterize the two-dimensional angular transmission function of a concentrator photovoltaic (CPV) system. The so-called inverse method, which has been used in the past for the characterization of small optical components, has been adapted to large-area CPV modules. In the inverse method, the receiver cell is forward biased to produce a Lambertian light emission, which reveals the reverse optical path of the optics. Using a large-area collimator mirror, the light beam exiting the optics is projected on a Lambertian screen to create a spatially resolved image of the angular transmission function. An image is then obtained using a CCD camera. To validate this method, the angular transmission functions of a real CPV module have been measured by both direct illumination (flash CPV simulator and sunlight) and the inverse method, and the comparison shows good agreement.
Ultrasound-modulated optical tomography with intense acoustic bursts.
Zemp, Roger J; Kim, Chulhong; Wang, Lihong V
2007-04-01
Ultrasound-modulated optical tomography (UOT) detects ultrasonically modulated light to spatially localize multiply scattered photons in turbid media with the ultimate goal of imaging the optical properties in living subjects. A principal challenge of the technique is weak modulated signal strength. We discuss ways to push the limits of signal enhancement with intense acoustic bursts while conforming to optical and ultrasonic safety standards. A CCD-based speckle-contrast detection scheme is used to detect acoustically modulated light by measuring changes in speckle statistics between ultrasound-on and ultrasound-off states. The CCD image capture is synchronized with the ultrasound burst pulse sequence. Transient acoustic radiation force, a consequence of bursts, is seen to produce slight signal enhancement over pure ultrasonic-modulation mechanisms for bursts and CCD exposure times of the order of milliseconds. However, acoustic radiation-force-induced shear waves are launched away from the acoustic sample volume, which degrade UOT spatial resolution. By time gating the CCD camera to capture modulated light before radiation force has an opportunity to accumulate significant tissue displacement, we reduce the effects of shear-wave image degradation, while enabling very high signal-to-noise ratios. Additionally, we maintain high-resolution images representative of optical and not mechanical contrast. Signal-to-noise levels are sufficiently high so as to enable acquisition of 2D images of phantoms with one acoustic burst per pixel.
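The speckle-contrast statistic at the heart of the detection scheme is the local ratio of standard deviation to mean; a short sketch, with the window size as an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window; in
    UOT, comparing K between ultrasound-on and ultrasound-off exposures
    reveals the acoustically modulated light (window size illustrative)."""
    img = img.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)   # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```

Comparing the K maps from ultrasound-on and ultrasound-off CCD exposures then localizes the acoustically tagged photons to the focal zone.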
A cylindrical SPECT camera with de-centralized readout scheme
NASA Astrophysics Data System (ADS)
Habte, F.; Stenström, P.; Rillbert, A.; Bousselham, A.; Bohm, C.; Larsson, S. A.
2001-09-01
An optimized brain single photon emission computed tomograph (SPECT) camera is being designed at Stockholm University and Karolinska Hospital. The design goal is to achieve high sensitivity, high-count rate and high spatial resolution. The sensitivity is achieved by using a cylindrical crystal, which gives a closed geometry with large solid angles. A de-centralized readout scheme where only a local environment around the light excitation is readout supports high-count rates. The high resolution is achieved by using an optimized crystal configuration. A 12 mm crystal plus 12 mm light guide combination gave an intrinsic spatial resolution better than 3.5 mm (140 keV) in a prototype system. Simulations show that a modified configuration can improve this value. A cylindrical configuration with a rotating collimator significantly simplifies the mechanical design of the gantry. The data acquisition and control system uses early digitization and subsequent digital signal processing to extract timing and amplitude information, and monitors the position of the collimator. The readout system consists of 12 or more modules each based on programmable logic and a digital signal processor. The modules send data to a PC file server-reconstruction engine via a Firewire (IEEE-1394) network.
Effects of red light camera enforcement on fatal crashes in large U.S. cities.
Hu, Wen; McCartt, Anne T; Teoh, Eric R
2011-08-01
To estimate the effects of red light camera enforcement on per capita fatal crash rates at intersections with signal lights. From the 99 large U.S. cities with more than 200,000 residents in 2008, 14 cities were identified with red light camera enforcement programs for all of 2004-2008 but not at any time during 1992-1996, and 48 cities were identified without camera programs during either period. Analyses compared the citywide per capita rate of fatal red light running crashes and the citywide per capita rate of all fatal crashes at signalized intersections during the two study periods, and the rate changes then were compared for cities with and without camera programs. Poisson regression was used to model crash rates as a function of red light camera enforcement, land area, and population density. The average annual rate of fatal red light running crashes declined for both study groups, but the decline was larger for cities with red light camera enforcement programs than for cities without camera programs (35% vs. 14%). The average annual rate of all fatal crashes at signalized intersections decreased by 14% for cities with camera programs and increased slightly (2%) for cities without cameras. After controlling for population density and land area, the rate of fatal red light running crashes during 2004-2008 for cities with camera programs was an estimated 24% lower than what would have been expected without cameras. The rate of all fatal crashes at signalized intersections during 2004-2008 for cities with camera programs was an estimated 17% lower than what would have been expected without cameras. Red light camera enforcement programs were associated with a statistically significant reduction in the citywide rate of fatal red light running crashes and a smaller but still significant reduction in the rate of all fatal crashes at signalized intersections. The study adds to the large body of evidence that red light camera enforcement can prevent the most serious crashes. Communities seeking to reduce crashes at intersections should consider this evidence.
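A hedged sketch of the model class described (statsmodels syntax; the data frame and column names are hypothetical stand-ins for the study's data): a Poisson GLM for fatal crash counts with log population as the exposure offset, so that coefficients act on per capita rates.

```python
import numpy as np
import statsmodels.api as sm

def fit_crash_rate_model(df):
    """Fit a Poisson regression of fatal crash counts on a camera-program
    indicator, land area, and population density, with log(population) as
    the exposure offset. `df` is a pandas DataFrame with those columns
    (names are illustrative, not from the paper)."""
    X = sm.add_constant(df[["has_camera_program", "land_area", "pop_density"]])
    model = sm.GLM(df["fatal_crashes"], X,
                   family=sm.families.Poisson(),
                   offset=np.log(df["population"]))
    result = model.fit()
    # exp(coef) on the camera indicator is the rate ratio; e.g. a value of
    # 0.76 would correspond to the reported 24% lower crash rate.
    return result
```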
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distress and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.
MPPT Algorithm Development for Laser Powered Surveillance Camera Power Supply Unit
NASA Astrophysics Data System (ADS)
Zhang, Yungui; Dushantha Chaminda, P. R.; Zhao, Kun; Cheng, Lin; Jiang, Yi; Peng, Kai
2018-03-01
Photovoltaic (PV) cells and modules, which are made of semiconducting materials, convert light energy into electricity. Operation of a PV cell involves three basic steps: absorbed light generates electron-hole pairs or excitons; the opposite charge carriers are separated; and the carriers are extracted to an external circuit, irrespective of the light source (sunlight or laser light). PV cells exhibit the photovoltaic effect, and their electrical characteristics, such as current, voltage and resistance, vary when exposed to light, so the power output depends on the incident laser light. This paper considers the direct conversion of laser light to electricity with PV cells, together with the concepts of band-gap energy, series resistance, conversion efficiency and maximum power point tracking (MPPT) methods [1].
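Among the MPPT families the abstract points to, perturb-and-observe is the simplest to state; this generic sketch uses hypothetical read_power/set_voltage hardware callbacks and an assumed fixed step size, and is not the paper's specific design.

```python
def perturb_and_observe(read_power, set_voltage, v0, step=0.05, iters=100):
    """Classic perturb-and-observe MPPT: nudge the PV operating voltage and
    keep moving in whichever direction increases the output power.
    `read_power` and `set_voltage` are hypothetical hardware callbacks."""
    v, direction = v0, 1.0
    set_voltage(v)
    p_prev = read_power()
    for _ in range(iters):
        v += direction * step           # perturb the operating point
        set_voltage(v)
        p = read_power()
        if p < p_prev:                  # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v                            # voltage near the maximum power point
```

In steady state the operating point oscillates around the maximum power point with an amplitude set by the step size, which is the standard trade-off of this method.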
Perfect Lighting for Facial Photography in Aesthetic Surgery: Ring Light.
Dölen, Utku Can; Çınar, Selçuk
2016-04-01
Photography is indispensable for plastic surgery. On-camera flashes can result in bleached-out detail and colour, which is why most plastic surgery clinics prefer studio lighting similar to professional photographers'. In this article, we want to share a simple alternative to studio lighting that does not need extra space: the ring light. We took five different photographs of the same person with five different camera and lighting settings: smartphone and ring light; point-and-shoot camera and on-camera flash; point-and-shoot camera and studio lighting; digital single-lens reflex (DSLR) camera and studio lighting; DSLR and ring light. Those photographs were then assessed objectively with an online survey of five questions answered by three distinct populations: plastic surgeons (n: 28), professional portrait photographers (n: 24) and patients (n: 22) who had had facial aesthetic procedures. Compared to the on-camera flash, studio lighting better showed the wrinkles of the subject. The ring light facilitated the perception of the wrinkles by providing homogeneous soft light in a circular shape rather than bursting flashes. The combination of a DSLR camera and ring light gave the oldest-looking subject according to 64% of responders. The DSLR camera and studio lighting demonstrated the youngest-looking subject according to 70% of the responders. The majority of the responders (78%) chose the combination of DSLR camera and ring light as exhibiting the wrinkles the most. We suggest using a ring light to obtain well-lit photographs without loss of detail, with any type of camera. However, smartphones must be avoided if standard pictures are desired.
Advanced illumination control algorithm for medical endoscopy applications
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.
2015-05-01
CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules to the world market for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and the evaluation board provided with it, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient and small-size endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over changing illumination conditions spanning several orders of magnitude within fractions of a second to guarantee a smooth viewing experience. The algorithm is centered on the pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in robotic or minimally invasive endoscopic surgery.
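While the actual controller is implemented in VHDL, its ROI-based correction step can be sketched in a few lines; the target level, gain, and clamping limits below are illustrative assumptions, not the paper's values.

```python
def adjust_led_power(roi_pixels, power, target=0.5, full_scale=255,
                     gain=0.8, p_min=0.01, p_max=1.0):
    """One step of a hypothetical ROI-based illumination controller:
    compare the ROI's mean saturation level with a target and scale the
    LED drive accordingly (the paper's VHDL details are not reproduced)."""
    level = sum(roi_pixels) / (len(roi_pixels) * full_scale)  # 0..1 saturation
    error = target - level
    power *= (1.0 + gain * error)           # multiplicative correction step
    return min(max(power, p_min), p_max)    # clamp to the LED driver's range
```

Run once per frame, a loop of this form converges within a handful of frames for illumination changes of several orders of magnitude, consistent with the sub-second correction speeds reported.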
Robust and efficient modulation transfer function measurement with CMOS color sensors
NASA Astrophysics Data System (ADS)
Farsani, Raziyeh A.; Sure, Thomas; Apel, Uwe
2017-06-01
The increasing challenges for the industry to improve camera performance through control and testing of the alignment process are discussed in this paper. The major difficulties, such as special CFAs that have white/clear pixels instead of a Bayer pattern and non-homogeneous backlight illumination of the targets used for such tests, are outlined, and strategies for handling them are presented. The proposed algorithms are applied to synthetically generated edges, as well as to experimental images taken from ADAS cameras under standard illumination conditions, to validate the approach. In addition, to consider the influence of the chromatic aberration of the lens and the CFA's influence on the total system MTF, the on-axis focus behavior of the camera module is presented for each pixel class separately. It is shown that the repeatability of the system MTF measurement results is improved as a result of more accurate and robust edge-angle detection, elimination of systematic errors, an improved lateral shift of the pixels, and analytical modeling of the edge transition. Results also show the necessity of separate contrast measurements in the different pixel classes to ensure a precise focus position.
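The MTF computation underlying such measurements typically follows the slanted-edge recipe: an oversampled edge spread function is differentiated into a line spread function, windowed, and Fourier transformed. A generic sketch follows; the paper's per-pixel-class binning, edge-angle estimation, and analytical edge model are omitted, and the oversampling factor is an assumed parameter.

```python
import numpy as np

def mtf_from_esf(esf, oversample=4):
    """Slanted-edge-style MTF estimate: differentiate a supersampled edge
    spread function (ESF) into the line spread function (LSF), window it,
    and take the normalized FFT magnitude (generic sketch)."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(len(lsf))            # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                          # normalize DC response to 1
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf
```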
Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...
2017-08-17
Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is up to an order of magnitude lower than the static sensitivity, which we attribute to the lower dynamic response of the phosphor.
NASA Astrophysics Data System (ADS)
Shaul, Oren; Fanrazi-Kahana, Michal; Meitav, Omri; Pinhasi, Gad A.; Abookasis, David
2018-03-01
Optical properties of biological tissues are valuable diagnostic parameters which can provide necessary information regarding tissue state during disease pathogenesis and therapy. However, different sources of interference, such as temperature changes, may modify these properties, introducing confounding factors and artifacts into the data, consequently skewing their interpretation and misinforming clinical decision-making. In the current study, we apply spatial light modulation, a type of diffuse reflectance hyperspectral imaging technique, to monitor the variation in optical properties of highly scattering turbid media in the presence of varying levels of the following sources of interference: scattering concentration, temperature, and pressure. Spatial near-infrared (NIR) light modulation is a wide-field, non-contact emerging optical imaging platform capable of separating the effects of tissue scattering from those of absorption, thereby accurately estimating both parameters. With this technique, periodic NIR illumination patterns at alternately low and high spatial frequencies, at six discrete wavelengths between 690 and 970 nm, were sequentially projected upon the medium while a CCD camera collected the diffusely reflected light. Model-based data analysis is then performed off-line to recover the medium's optical properties. We conducted a series of experiments demonstrating the changes in absorption and reduced scattering coefficients of commercially available fresh milk and chicken breast tissue under different interference conditions. In addition, the refractive index was studied under increased pressure. This work demonstrates the utility of NIR spatial light modulation for detecting the effect of varying sources of interference on the optical properties of biological samples.
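The demodulation step is not spelled out in the abstract; a common choice in spatial-frequency-domain imaging is three phase-shifted projections per spatial frequency, as in this hedged sketch (the three-phase scheme is an assumption here, not a statement of the authors' exact pipeline).

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Three-phase demodulation commonly used in spatial-frequency-domain
    imaging (assumed here; the abstract does not state the phase scheme).
    i1, i2, i3: frames with the pattern shifted by 0, 2*pi/3, 4*pi/3."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 +
                                        (i2 - i3) ** 2 +
                                        (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# The AC/DC reflectance maps, calibrated against a phantom with known
# optical properties, feed a lookup table or diffusion model that
# separates absorption from reduced scattering.
```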
1972-04-16
The sixth manned lunar landing mission, the Apollo 16 (SA-511), carrying three astronauts: Mission Commander John W. Young, Command Module pilot Thomas K. Mattingly II, and Lunar Module pilot Charles M. Duke, lifted off on April 16, 1972. The Apollo 16 continued the broad-scale geological, geochemical, and geophysical mapping of the Moon's crust, begun by the Apollo 15, from lunar orbit. This mission marked the first use of the Moon as an astronomical observatory by using the ultraviolet camera/spectrograph, which photographed ultraviolet light emitted by Earth and other celestial objects. The Lunar Roving Vehicle was also used. The mission ended on April 27, 1972.
NASA Astrophysics Data System (ADS)
Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael
2006-06-01
The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
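As a toy illustration of the master/slave message exchange described above, here is a minimal Python sketch. All class and message names are invented for illustration; the real CCS defines its own interfaces and transport.

```python
# Toy illustration of the master/slave message exchange; all class and
# message names are invented, not the actual CCS interfaces.
class ControlModule:
    """Stands in for a long-lived subsystem 'server' process."""
    def __init__(self, name):
        self.name = name

    def handle(self, message):
        # A real module would command hardware and report real status.
        return {"module": self.name, "cmd": message, "status": "ok"}

class MasterControlModule:
    def __init__(self):
        self.modules = {}

    def register(self, module):
        self.modules[module.name] = module

    def broadcast(self, message):
        # Coordination happens purely through message exchange.
        return [m.handle(message) for m in self.modules.values()]

mcm = MasterControlModule()
for name in ("shutter", "cryostat", "data-acquisition"):
    mcm.register(ControlModule(name))
print(mcm.broadcast("report_status"))
```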
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
NASA Astrophysics Data System (ADS)
Liu, Yan; Ma, Cheng; Shen, Yuecheng; Wang, Lihong V.
2017-02-01
Optical phase conjugation based wavefront shaping techniques are being actively developed to focus light through or inside scattering media such as biological tissue, and they promise to revolutionize optical imaging, manipulation, and therapy. The speed of digital optical phase conjugation (DOPC) has been limited by the low speeds of cameras and spatial light modulators (SLMs), preventing DOPC from being applied to thick living tissue. Recently, a fast DOPC system was developed based on a single-shot wavefront measurement method, a field programmable gate array (FPGA) for data processing, and a digital micromirror device (DMD) for fast modulation. However, this system has the following limitations. First, the reported single-shot wavefront measurement method does not work when our goal is to focus light inside, instead of through, scattering media. Second, the DMD performed binary amplitude modulation, which resulted in a lower focusing contrast compared with that of phase modulations. Third, the optical fluence threshold causing DMDs to malfunction under pulsed laser illumination is lower than that of liquid crystal based SLMs, and the system alignment is significantly complicated by the oblique reflection angle of the DMD. Here, we developed a simple but high-speed DOPC system using a ferroelectric liquid crystal based SLM (512 × 512 pixels), and focused light through three diffusers within 4.7 ms. Using focused-ultrasound-guided DOPC along with a double exposure scheme, we focused light inside a scattering medium containing two diffusers within 7.7 ms, thus achieving the fastest digital time-reversed ultrasonically encoded (TRUE) optical focusing to date.
The true Blazhko behaviour of DM Cyg
NASA Astrophysics Data System (ADS)
Hurta, Zs.
2009-03-01
We present preliminary results of our work on DM Cyg, an RRab star with a steadily increasing pulsation period. The Blazhko modulation of the light curve of DM Cyg had not been unambiguously confirmed before. A reanalysis of the original data (Sódor & Jurcsik 2005) could not confirm the 26 d periodicity found by Lysova & Firmanyuk (1980) in the timings of maximum brightness from visual observations. Neither the scarce photoelectric observations (Fitch 1966, Sturch 1966, Hipparcos 1997) nor the CCD data of the NSVS survey (Woźniak 2004) suggested a notable light curve modulation. In order to get a definite answer as to whether the light curve of DM Cyg is stable or shows some kind of modulation, it was observed in the course of the Konkoly Blazhko Survey in the 2007 and 2008 seasons. Using the automated 60 cm telescope of the Konkoly Observatory, Svábhegy, Budapest, equipped with a Wright 750 x 1100 CCD camera and BVI_C filters, we obtained more than 3000 data points on about 80 nights in each band. Archive photoelectric and photographic observations obtained with the 60 cm telescope and a 16 cm astrograph of the Konkoly Observatory in 1978 and between 1934 and 1958 were also analyzed. The photoelectric and photographic photometry provided 75 B, V and 1031 photographic data points from 4 and 40 nights, respectively. The CCD observations revealed that the light curve of DM Cyg is in fact modulated, but with very small amplitude. The maximum brightness variation hardly exceeds 0.05 mag in the V band, while no definite phase modulation of the light curve and/or maximum timings is evident. The amplitudes of the modulation frequencies that form equidistant triplets around the pulsation frequency and its harmonics are below 15 mmag. There is some indication of light curve modulation in the Konkoly photographic data as well. Our data confirm that DM Cyg shows Blazhko modulation, but with a significantly different period and character (amplitude/phase modulation) than found by Lysova & Firmanyuk (1980). A detailed analysis of our observations of DM Cyg with its true Blazhko period will be submitted to MNRAS in early 2009.
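For readers who want to see how such modulation side-lobes are searched for in practice, here is an illustrative period-analysis sketch using astropy's Lomb-Scargle implementation. The synthetic data, frequencies, and amplitudes are placeholders, not the DM Cyg measurements.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 80.0, 3000))            # ~80 nights, 3000 points
f0, f_m = 2.4, 1.0 / 10.6                            # pulsation, modulation (1/d)
y = (0.40 * np.sin(2 * np.pi * f0 * t)               # main pulsation (mag)
     + 0.010 * np.sin(2 * np.pi * (f0 + f_m) * t)    # 10 mmag side-lobe
     + rng.normal(0.0, 0.005, t.size))               # photometric noise
freq, power = LombScargle(t, y).autopower(maximum_frequency=10.0)
print("dominant frequency (1/d):", freq[np.argmax(power)])
# After prewhitening with f0 and its harmonics, residual peaks at
# f0 +/- f_m reveal the equidistant Blazhko triplet.
```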
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)
1990-01-01
System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is affected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in reconstructions.
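A minimal sketch of the 'four bucket' phase-to-depth step follows, using one common phase convention; the compressed-sensing reconstruction and second-order correlation stages of the paper are omitted, and the 20 MHz modulation frequency is only an example.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def depth_four_bucket(c0, c90, c180, c270, f_mod=20e6):
    """'Four bucket' phase estimation under one common convention;
    the paper's compressed-sensing reconstruction and second-order
    correlation transform are omitted here."""
    phase = np.arctan2(c270 - c90, c0 - c180) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# A 20 MHz modulation gives an unambiguous range of C / (2 * f_mod) = 7.5 m.
print(depth_four_bucket(0.5, 0.0, 0.5, 1.0))  # quarter-cycle phase -> 1.875 m
```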
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN. This, however, takes longer to process and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, three-component (3C) velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3D structure of the turbulent boundary layer. [Figure captions: (1) Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera, allowing an image to be computationally refocused after acquisition. (2) Instantaneous 3D velocity field of a turbulent boundary layer determined from light-field data captured by a plenoptic camera.]
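The cross-correlation step that turns a pair of reconstructed particle fields into a displacement (and hence velocity) vector can be sketched as follows. This 2D FFT-based version is a simplification of the volumetric (3D) processing described above.

```python
import numpy as np

def displacement(window_a, window_b):
    """FFT-based cross-correlation of two interrogation windows; one such
    call yields one velocity vector once divided by the pulse separation.
    Shown in 2D for brevity; the paper's processing is volumetric (3D)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak index to a signed shift (handles FFT wrap-around).
    return np.array([p if p <= n // 2 else p - n
                     for p, n in zip(peak, corr.shape)])

# Synthetic check: shift a random particle pattern by (3, -2) pixels.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
print(displacement(a, b))  # -> [ 3 -2]
```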
Chen, Brian R; Poon, Emily; Alam, Murad
2018-01-01
Lighting is an important component of consistent, high-quality dermatologic photography, and different types of lighting solutions are available. The objective was to evaluate currently available lighting equipment and methods suitable for procedural dermatology. Overhead lighting, built-in camera flashes, external flash units, studio strobes, and light-emitting diode (LED) light panels were evaluated with regard to their utility for dermatologic surgeons. A set of ideal lighting characteristics was used to examine the capabilities and limitations of each type of lighting solution. Recommendations regarding lighting solutions and optimal usage configurations were made in terms of the clinical environment and the purpose of the image. Overhead lighting may be a convenient option for general documentation. An on-camera lighting solution using a built-in camera flash or a camera-mounted external flash unit provides portability and consistent lighting with minimal training. An off-camera lighting solution with studio strobes, external flash units, or LED light panels provides versatility and even lighting with minimal shadows and glare. The selection of an optimal lighting solution is contingent on practical considerations and the purpose of the image.
1972-04-27
The Apollo 16 Command Module splashed down in the Pacific Ocean on April 27, 1972 after an 11-day moon exploration mission. The sixth manned lunar landing mission, the Apollo 16 (SA-511), carrying three astronauts: Mission Commander John W. Young, Command Module pilot Thomas K. Mattingly II, and Lunar Module pilot Charles M. Duke, lifted off on April 16, 1972. The Apollo 16 continued the broad-scale geological, geochemical, and geophysical mapping of the Moon’s crust, begun by the Apollo 15, from lunar orbit. This mission marked the first use of the Moon as an astronomical observatory by using the ultraviolet camera/spectrograph which photographed ultraviolet light emitted by Earth and other celestial objects. The Lunar Roving Vehicle, developed by the Marshall Space Flight Center, was also used.
Mach-zehnder based optical marker/comb generator for streak camera calibration
Miller, Edward Kirk
2015-03-03
This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher-frequency optical signal which is output through a fiber-coupled link to the streak camera.
[A Method for Selecting Self-Adaptive Chromaticity of the Projected Markers].
Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe
2015-04-01
The authors designed a self-adaptive projection system composed of a color camera, a projector and a PC. In detail, a digital micro-mirror device (DMD) serving as a spatial light modulator for the projector was introduced into the optical path to modulate the illuminant spectrum based on red, green and blue light emitting diodes (LEDs). However, the color visibility of active markers is also affected by the screen, which has an unknown reflective spectrum. Here the active markers are a projected spot array, and the chromaticity feature of the markers is sometimes submerged in a spectrally similar screen. In order to enhance the color visibility of the active markers relative to the screen, a method for selecting self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with 3 channels limits the accuracy of device characterization. To achieve interconversion between a device-independent color space and a device-dependent color space, a high-dimensional linear model of the reflective spectrum was built. Prior training samples provide additional constraints to yield a high-dimensional linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color space was selected via the maximization principle of Euclidean distance. The RGB setting values were then easily estimated via the inverse transform. Finally, we implemented a typical experiment to show the performance of the proposed approach. A 24-patch Munsell Color Checker was used as the projection screen. The color difference in chromaticity coordinates between the active marker and the color patch was used to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser light projector is presented and discussed to highlight the advantage of the proposed method.
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is widely used in the screening and diagnosis of retinal disease; it is a simple and widely used piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which makes patients experience vertigo and blurred vision. Nonmydriatic operation is the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range is from −20 m⁻¹ to +20 m⁻¹.
1972-04-16
The sixth manned lunar landing mission, the Apollo 16 (SA-511), carrying three astronauts: Mission Commander John W. Young, Command Module pilot Thomas K. Mattingly II, and Lunar Module pilot Charles M. Duke, lifted off on April 16, 1972. The Apollo 16 mission continued the broad-scale geological, geochemical, and geophysical mapping of the Moon’s crust, begun by the Apollo 15, from lunar orbit. This mission marked the first use of the Moon as an astronomical observatory by using the ultraviolet camera/spectrograph which photographed ultraviolet light emitted by Earth and other celestial objects. The Lunar Roving Vehicle, developed by the Marshall Space Flight Center, was also used. The mission ended on April 27, 1972.
Polarimetric Imaging using Two Photoelastic Modulators
NASA Technical Reports Server (NTRS)
Wang, Yu; Cunningham, Thomas; Diner, David; Davis, Edgar; Sun, Chao; Hancock, Bruce; Gutt, Gary; Zan, Jason; Raouf, Nasrat
2009-01-01
A method of polarimetric imaging, now undergoing development, involves the use of two photoelastic modulators in series, driven at equal amplitude but at different frequencies. The net effect on a beam of light is to cause (1) the direction of its polarization to rotate at the average of the two excitation frequencies and (2) the amplitude of its polarization to be modulated at the beat frequency (the difference between the two excitation frequencies). The resulting modulated optical beam is made to pass through a polarizing filter and is detected at the beat frequency, which can be chosen to equal the frame rate of an electronic camera or the rate of sampling the outputs of photodetectors in an array. The method was conceived to satisfy a need to perform highly accurate polarimetric imaging, without cross-talk between polarization channels, at frame rates of the order of tens of hertz. The use of electro-optical modulators is necessitated by a need to obtain accuracy greater than that attainable with static polarizing filters over separate fixed detectors. For imaging, photoelastic modulators are preferable to such other electro-optical modulators as Kerr cells and Pockels cells in that photoelastic modulators operate at lower voltages, have greater angular acceptances, and are easier to use. Prior to the conception of the present method, polarimetric imaging at frame rates of tens of hertz using photoelastic modulators was not possible because the resonance frequencies of photoelastic modulators usually lie in the range from about 20 to about 100 kHz.
Høye, Gudrun; Fridman, Andrei
2013-05-06
Current high-resolution push-broom hyperspectral cameras introduce keystone errors into the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to that of traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that it will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images to extensively examine the difference between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Dynamic 3D measurement of modulated radiotherapy: a scintillator-based approach
NASA Astrophysics Data System (ADS)
Archambault, Louis; Rilling, Madison; Roy-Pomerleau, Xavier; Thibault, Simon
2017-05-01
With the rise of high-conformity dynamic radiotherapy, such as volumetric modulated arc therapy and robotic radiosurgery, the temporal dimension of dose measurement is becoming increasingly important. It must be possible to tell both ‘where’ and ‘when’ a discrepancy occurs between the plan and its delivery. A 3D scintillation-based dosimetry system could be ideal for such a thorough, end-to-end verification; however, the challenge lies in retrieving the volumetric information of the light-emitting volume. This paper discusses the motivation, from an optics point of view, of using the images acquired with a plenoptic camera, or light field imager, of an irradiated plastic scintillator volume to reconstruct the delivered 3D dose distribution. Current work focuses on the optimization of the optical design as well as the data processing that is involved in the ongoing development of a clinically viable, second generation dosimetry system.
Vision-based surface defect inspection for thick steel plates
NASA Astrophysics Data System (ADS)
Yun, Jong Pil; Kim, Dongseob; Kim, KyuHwan; Lee, Sang Jun; Park, Chang Hyun; Kim, Sang Woo
2017-05-01
There are several types of steel products, such as wire rods, cold-rolled coils, hot-rolled coils, thick plates, and electrical sheets. Surface stains on cold-rolled coils are considered defects; however, surface stains on thick plates are not. A conventional optical structure is composed of a camera and a lighting module. A defect inspection system is proposed that uses a dual lighting structure to distinguish uneven defects from color changes caused by surface noise. In addition, an image processing algorithm that can be used to detect defects is presented in this paper. The algorithm consists of a Gabor filter that detects the switching pattern and a binarization method that extracts the shape of the defect. The optics module and detection algorithm, optimized using a simulator, were installed at a real plant, and experimental results on thick steel plate images obtained from the steel production line show the effectiveness of the proposed method.
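A hedged sketch of the Gabor-filter-plus-binarization stage is given below using OpenCV. The kernel parameters are placeholders, since the paper tunes its optics and algorithm with a simulator rather than publishing fixed values.

```python
import cv2
import numpy as np

def detect_defects(gray):
    """Sketch of the Gabor-filter-plus-binarization stage; kernel
    parameters are placeholders (the paper tunes them in a simulator)."""
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5, psi=0.0)
    response = cv2.filter2D(gray, cv2.CV_32F, kernel)
    response = cv2.normalize(response, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu binarization extracts the defect shape from the response map.
    _, mask = cv2.threshold(response, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```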
Development of blood vessel searching system for HMS
NASA Astrophysics Data System (ADS)
Kandani, Hirofumi; Uenoya, Toshiyuki; Uetsuji, Yasutomo; Nakamachi, Eiji
2008-08-01
In this study, we develop a new 3D miniature blood vessel searching system using near-infrared LED light and a CMOS camera module with an image processing unit, for a health monitoring system (HMS) and a drug delivery system (DDS), which require very high performance for automatic micro blood volume extraction and automatic blood examination. Our objective is to fabricate a highly reliable micro detection system by utilizing image capturing, image processing, and micro blood extraction devices. For the searching system to determine the 3D blood vessel location, we employ the stereo method, a common photogrammetric method that uses the optical path principle to recover 3D location from the disparity between two cameras. The principle of blood vessel visualization derives from hemoglobin's absorption of near-infrared LED light. To get a high-quality blood vessel image, we adopted an LED with a peak wavelength of 940 nm. The LED is set on the dorsal side of the finger and irradiates the human finger. The blood vessel image is captured by a CMOS camera module set below the palmar side of the finger. The 2D blood vessel location can be detected from the luminance distribution along a one-pixel line. To examine the accuracy of our detection system, we carried out experiments using finger phantoms with blood vessel diameters of 0.5, 0.75, and 1.0 mm, at depths of 0.5-2.0 mm from the phantom's surface. The depths estimated by our detection system show good agreement with the given depths, confirming the viability of the system.
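The stereo relation underlying the vessel-depth estimate can be written in a few lines; rectified cameras are assumed, and the numbers in the example are illustrative only, not the paper's calibration values.

```python
def vessel_depth_mm(disparity_px, focal_px, baseline_mm):
    """Depth from stereo disparity, the relation behind the stereo method
    described above: Z = f * B / d (rectified cameras assumed)."""
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers only: f = 600 px, baseline B = 10 mm, and a
# measured disparity of 300 px place the vessel at Z = 20 mm.
print(vessel_depth_mm(300, 600, 10))  # -> 20.0
```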
Energy-efficient lighting system for television
Cawthorne, Duane C.
1987-07-21
A light control system for a television camera comprises an artificial light control system which is cooperative with an iris control system. This artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only sufficient artificial illumination necessary to provide a sufficient video signal when the camera iris is substantially open.
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments, we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Any camera can therefore be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high-definition device which must provide low-light illumination of the human retina, high resolution in the retina, and reflection-free images. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by an LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
NASA Astrophysics Data System (ADS)
Manjavacas, Elena; Apai, Dániel; Zhou, Yifan; Karalidi, Theodora; Lew, Ben W. P.; Schneider, Glenn; Cowan, Nicolas; Metchev, Stan; Miles-Páez, Paulo A.; Burgasser, Adam J.; Radigan, Jacqueline; Bedin, Luigi R.; Lowrance, Patrick J.; Marley, Mark S.
2018-01-01
Observations of rotational modulations of brown dwarfs and giant exoplanets allow the characterization of condensate cloud properties. As of now, rotational spectral modulations have only been seen in three L-type brown dwarfs. We report here the discovery of rotational spectral modulations in LP261-75B, an L6-type intermediate-surface-gravity companion to an M4.5 star. As part of the Cloud Atlas Treasury program, we acquired time-resolved Wide Field Camera 3 grism spectroscopy (1.1-1.69 μm) of LP261-75B. We find gray spectral variations, with the relative amplitude displaying only a weak wavelength dependence and no evidence for lower-amplitude modulations in the 1.4 μm water band than in the adjacent continuum. The likely rotational modulation period is 4.78 ± 0.95 hr, although the rotational phase is not well sampled. The minimum relative amplitude in the white light curve measured over the whole wavelength range is 2.41% ± 0.14%. We report an unusual light curve, which seems to have three peaks approximately evenly distributed in rotational phase. The spectral modulations suggest that the upper-atmosphere cloud properties of LP261-75B are similar to those of two other mid-L dwarfs of typical infrared colors, but differ from those of the extremely red L dwarf WISE0047.
NASA Astrophysics Data System (ADS)
Turko, Nir A.; Isbach, Michael; Ketelhut, Steffi; Greve, Burkhard; Schnekenburger, Jürgen; Shaked, Natan T.; Kemper, Björn
2017-02-01
We explored photothermal quantitative phase imaging (PTQPI) of living cells with functionalized nanoparticles (NPs), utilizing a cost-efficient setup based on a cell culture microscope. The excitation light was modulated by a mechanical chopper wheel at low frequencies. Quantitative phase imaging (QPI) was performed with Michelson interferometer-based off-axis digital holographic microscopy and a standard industrial camera. We present results from PTQPI observations of breast cancer cells that were incubated with functionalized gold NPs binding to the epidermal growth factor receptor. Moreover, QPI was used to quantify the impact of the NPs and the low-frequency light excitation on cell morphology and viability.
Laser-speckle-visibility acoustic spectroscopy in soft turbid media.
Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard
2014-01-01
We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, which is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter such as biological tissues, pastes, or concentrated emulsions.
Development of electronic cinema projectors
NASA Astrophysics Data System (ADS)
Glenn, William E.
2001-03-01
All of the components for the electronic cinema are now commercially available. Sony has a high-definition, progressively scanned, 24 frame per second electronic cinema camera. Its output can be recorded digitally on tape or on hard drives in RAID recorders. Much of the post-production processing is now done digitally, by scanning film, processing it digitally, and recording it back onto film for release. Fiber links and satellites can transmit cinema program material to theaters in real time. RAID or tape recorders can play programs for viewing at a much lower cost than storage on film. Two companies now have electronic cinema projectors on the market. Of all the components, the electronic cinema projector is the most challenging. Achieving the resolution, light output, contrast ratio, and color rendition all at the same time, without visible artifacts, is a difficult task. Film itself is, of course, a form of light-valve. However, electronically modulated light uses techniques other than changes in density to control the light. The optical techniques that have been the basis for many electronic light-valves have been under development for over 100 years. Many of these techniques are based on optical diffraction to modulate the light. This paper traces the history of these techniques and shows how they may be extended to produce electronic cinema projectors in the future.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Speckle-learning-based object recognition through scattering media.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-12-28
We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
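The classification step can be sketched with scikit-learn as below. The random arrays stand in for captured speckle intensity images and carry no real signal; with real face/non-face speckle data, the same pipeline applies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch of the classification step: raw speckle intensity images are
# flattened into feature vectors and fed to a linear-kernel SVM.
# X: (n_images, h*w) speckle intensities; y: 1 = face, 0 = non-face.
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))        # stand-in for captured speckle data
y = rng.integers(0, 2, 200)           # stand-in labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```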
A portable fluorescence microscopic imaging system for cholecystectomy
NASA Astrophysics Data System (ADS)
Ye, Jian; Yang, Chaoyu; Gan, Qi; Ma, Rong; Zhang, Zeshu; Chang, Shufang; Shao, Pengfei; Zhang, Shiwu; Liu, Chenhai; Xu, Ronald
2016-03-01
In this paper we propose a portable fluorescence microscopic imaging system to prevent iatrogenic biliary injuries during cholecystectomy caused by misidentification of the cystic structures. The system consists of a light source module, a CMOS camera, a Raspberry Pi computer and a 5-inch HDMI LCD. Specifically, the light source module is composed of 690 nm and 850 nm LEDs, allowing the CMOS camera to simultaneously acquire both fluorescence and background images. The system is controlled by the Raspberry Pi using Python programming with the OpenCV library under Linux. We chose indocyanine green (ICG) as the fluorescent contrast agent and tested the fluorescence intensities of aqueous ICG solutions at different concentration levels with our fluorescence microscopic system, compared against the commercial Xenogen IVIS system. The spatial resolution of the proposed fluorescence microscopic imaging system was measured with a 1951 USAF resolution target, and the dynamic response was evaluated quantitatively with an automatic displacement platform. Finally, we verified the technical feasibility of the proposed system in mouse models, performing both correct and incorrect gallbladder resection. Our experiments showed that the proposed system can provide clear visualization of the confluence between the cystic duct and the common bile duct or common hepatic duct, suggesting that this is a potential method for guiding cholecystectomy. The proposed portable system costs only $300 in total, which may promote its use in resource-limited settings.
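One plausible OpenCV display step for such a system, blending a pseudo-colored fluorescence frame onto the background frame, is sketched below. Frame acquisition and LED synchronization are omitted, and the threshold, colormap, and blending weight are assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def overlay(background_gray, fluorescence_gray, alpha=0.5, threshold=40):
    """Blend a pseudo-colored fluorescence frame onto the background frame.
    Threshold, colormap, and blending weight are assumptions."""
    base = cv2.cvtColor(background_gray, cv2.COLOR_GRAY2BGR)
    color = cv2.applyColorMap(fluorescence_gray, cv2.COLORMAP_JET)
    blended = cv2.addWeighted(base, 1.0 - alpha, color, alpha, 0)
    # Show fluorescence only where it exceeds the display threshold.
    mask = (fluorescence_gray > threshold)[..., None]
    return np.where(mask, blended, base)

bg = np.full((120, 160), 90, dtype=np.uint8)   # stand-in background frame
fl = np.zeros((120, 160), dtype=np.uint8)
fl[40:80, 60:100] = 200                        # stand-in ICG signal
out = overlay(bg, fl)
```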
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns were tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images, so that panoramas reflect the scene luminance more faithfully. This compensates for the limitation of stitching methods that produce realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
Martial, Franck P.; Hartell, Nicholas A.
2012-01-01
Confocal microscopy is routinely used for high-resolution fluorescence imaging of biological specimens. Most standard confocal systems scan a laser across a specimen and collect emitted light passing through a single pinhole to produce an optical section of the sample. Sequential scanning on a point-by-point basis limits the speed of image acquisition and even the fastest commercial instruments struggle to resolve the temporal dynamics of rapid cellular events such as calcium signals. Various approaches have been introduced that increase the speed of confocal imaging. Nipkow disk microscopes, for example, use arrays of pinholes or slits on a spinning disk to achieve parallel scanning which significantly increases the speed of acquisition. Here we report the development of a microscope module that utilises a digital micromirror device as a spatial light modulator to provide programmable confocal optical sectioning with a single camera, at high spatial and axial resolution at speeds limited by the frame rate of the camera. The digital micromirror acts as a solid state Nipkow disk but with the added ability to change the pinhole size and separation and to control the light intensity on a mirror-by-mirror basis. The use of an arrangement of concave and convex mirrors in the emission pathway instead of lenses overcomes the astigmatism inherent with DMD devices, increases light collection efficiency and ensures image collection is achromatic so that images are perfectly aligned at different wavelengths. Combined with non-laser light sources, this allows low cost, high-speed, multi-wavelength image acquisition without the need for complex wavelength-dependent image alignment. The micromirror can also be used for programmable illumination allowing spatially defined photoactivation of fluorescent proteins. We demonstrate the use of this system for high-speed calcium imaging using both a single wavelength calcium indicator and a genetically encoded, ratiometric, calcium sensor. PMID:22937130
NASA Astrophysics Data System (ADS)
Gribble, Adam; Alali, Sanaz; Vitkin, Alex
2016-03-01
Polarized light has many applications in biomedical imaging. The interaction of a biological sample with polarized light reveals information about its composition, both structural and functional. For example, the polarimetry-derived metric of linear retardance (birefringence) is dependent on tissue structural organization (anisotropy) and can be used to diagnose myocardial infarct; circular birefringence (optical rotation) can measure glucose concentrations. The most comprehensive type of polarimetry analysis is to measure the Mueller matrix, a polarization transfer function that completely describes how a sample interacts with polarized light. To derive this 4x4 matrix it is necessary to observe how a tissue interacts with different polarizations. A well-suited approach for tissue polarimetry is to use photoelastic modulators (PEMs), which dynamically modulate the polarization of light. Previously, we have demonstrated a rapid time-gated Stokes imaging system that is capable of characterizing the state of polarized light (the Stokes vector) over a large field, after interacting with any turbid media. This was accomplished by synchronizing CCD camera acquisition times relative to two PEMs using a field-programmable gate array (FPGA). Here, we extend this technology to four PEMs, yielding a polarimetry system that is capable of rapidly measuring the complete sample Mueller matrix over a large field of view, with no moving parts and no beam steering. We describe the calibration procedure and evaluate the accuracy of the measurements. Results are shown for tissue-mimicking phantoms, as well as initial biological samples.
Cloud Atlas: Rotational Modulations in the L/T Transition Brown Dwarf Companion HN Peg B
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Metchev, Stanimir; Lew, Ben W. P.; Schneider, Glenn; Marley, Mark S.; Karalidi, Theodora; Manjavacas, Elena; Bedin, Luigi R.; Cowan, Nicolas B.; Miles-Páez, Paulo A.; Lowrance, Patrick J.; Radigan, Jacqueline; Burgasser, Adam J.
2018-03-01
Time-resolved observations of brown dwarfs' rotational modulations provide powerful insights into the properties of condensate clouds in ultra-cool atmospheres. Multi-wavelength light curves reveal cloud vertical structures, condensate particle sizes, and cloud morphology, which directly constrain condensate cloud and atmospheric circulation models. We report results from Hubble Space Telescope/Wide Field Camera 3 near-infrared G141 observations of HN Peg B, taken in six consecutive orbits; HN Peg B is an L/T transition brown dwarf companion to a G0V type star. The best-fit sine wave to the 1.1-1.7 μm broadband light curve has an amplitude of 1.206% ± 0.025% and a period of 15.4 ± 0.5 hr. The modulation amplitude has no detectable wavelength dependence except in the 1.4 μm water absorption band, indicating that the characteristic condensate particle sizes are large (>1 μm). We detect significantly (4.4σ) lower modulation amplitude in the 1.4 μm water absorption band and find that HN Peg B's spectral modulation resembles those of early T type brown dwarfs. We also describe a new empirical interpolation method to remove spectral contamination from the bright host star. This method may be applied in other high-contrast time-resolved observations with WFC3.
Note: Simple hysteresis parameter inspector for camera module with liquid lens
NASA Astrophysics Data System (ADS)
Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung
2010-05-01
A method for inspecting the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than merely that of a single lens. Because variation in focal length influences image quality, we propose using the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection takes only 20 s to complete. Compared with other instruments, this inspection method is thus more suitable for integration into mass production lines for online quality assurance.
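The abstract does not state which sharpness metric is used; a common stand-in is the variance of the Laplacian, which falls off as the image defocuses. The sketch below assumes hypothetical `set_lens_voltage` and `capture_frame` interfaces and simply records the metric on the rising and falling voltage branches to trace the hysteresis loop.

```python
import cv2
import numpy as np

def sharpness(gray_image: np.ndarray) -> float:
    """Variance of the Laplacian: larger means sharper (better focus)."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

def hysteresis_curve(voltages, set_lens_voltage, capture_frame):
    """Sweep the liquid-lens drive voltage up, then down, recording sharpness;
    the gap between the two branches traces the hysteresis loop."""
    up, down = [], []
    for v in voltages:             # increasing branch
        set_lens_voltage(v)
        up.append(sharpness(capture_frame()))
    for v in reversed(voltages):   # decreasing branch
        set_lens_voltage(v)
        down.append(sharpness(capture_frame()))
    return up, down[::-1]
```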
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
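For context, frequency-domain FLIM estimates a lifetime from the phase delay and demodulation of the emission relative to the modulated excitation. The sketch below shows the textbook single-exponential estimators; the 40 MHz frequency and roughly 4 ns lifetime are assumed example values, not measurements from the paper.

```python
import numpy as np

def lifetimes(f_mod_hz, phase_rad, mod_ratio):
    """Standard single-exponential frequency-domain FLIM estimators:
    tau_phi from the phase shift, tau_m from the demodulation ratio m
    (emission AC/DC divided by excitation AC/DC)."""
    w = 2 * np.pi * f_mod_hz
    tau_phi = np.tan(phase_rad) / w
    tau_m = np.sqrt(1.0 / mod_ratio**2 - 1.0) / w
    return tau_phi, tau_m

# A ~4 ns fluorescein-like lifetime at 40 MHz modulation gives roughly a
# 45-degree phase shift and a demodulation ratio near 0.7:
w = 2 * np.pi * 40e6
print(lifetimes(40e6, np.arctan(w * 4e-9), 1 / np.sqrt(1 + (w * 4e-9)**2)))
# both estimators return ~4e-9 s
```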
SU-F-T-463: Light-Field Based Dynalog Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atwal, P; Ramaseshan, R
2016-06-15
Purpose: To independently verify leaf positions in so-called dynalog files for a Varian iX linac with a Millennium 120 MLC. This verification provides a measure of confidence that the files can be used directly as part of a more extensive intensity modulated radiation therapy / volumetric modulated arc therapy QA program. Methods: Initial testing used white paper placed at the collimator plane and a standard hand-held digital camera to image the light and shadow of a static MLC field through the paper. Known markings on the paper allow for image calibration. Noise reduction was attempted by removing 'inherent noise' from an open-field light image through the paper, but the method was found to be inconsequential. This is likely because the environment could not be controlled to the precision required for the reproducible characterization of the quantum noise needed to meaningfully account for it. A multi-scale iterative edge detection algorithm was used for localizing the leaf ends. These were compared with the planned locations from the treatment console. Results: With a very basic setup, the image of the central bank A leaves 15–45, which are arguably the most important for beam modulation, differed from the planned location by [0.38±0.28] mm. Similarly, bank B leaves 15–45 showed a difference of [0.42±0.28] mm. Conclusion: It should be possible to determine leaf position accurately with not much more than a modern hand-held camera and some software. This means we can have a periodic and independent verification of the dynalog file information. This is indicated by the precision already achieved using a basic setup and analysis methodology. Currently, work is being done to reduce imaging and setup errors, which will bring the leaf position error down further, and allow meaningful analysis over the full range of leaves.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image with an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact, digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to choose the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The imaging system design of three-line LMCCD mapping camera
NASA Astrophysics Data System (ADS)
Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da
2011-08-01
In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Secondly, several pivotal designs of the imaging system are introduced, such as the design of the focal plane module, video signal processing, the imaging system controller, and synchronous photography of the forward, nadir and backward cameras and of the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography of the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30 degrees, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is controlled under 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision of satellite photogrammetry.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
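The near-field requirement quoted above follows from the Fraunhofer distance 2D²/λ. A quick check, assuming a visible wavelength of 550 nm and a 3.6 m AEOS aperture (both assumptions, since the text gives only the resulting ranges), reproduces the stated figures:

```python
def near_field_limit_m(aperture_m: float, wavelength_m: float = 550e-9) -> float:
    """Fraunhofer distance 2*D^2/lambda: targets closer than this are in the
    collecting optic's near field, as required for light-field ranging."""
    return 2.0 * aperture_m**2 / wavelength_m

for D in (1.0, 3.6):   # 1-m telescope; assumed 3.6-m AEOS aperture
    print(f"D = {D} m -> near field out to {near_field_limit_m(D) / 1e3:,.0f} km")
# D = 1.0 m -> ~3,636 km; D = 3.6 m -> ~47,127 km, matching the text's
# "about 3,500 km" and "over 46,000 km" to within the rounding used there.
```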
TOPDAQ Acquisition Utility Beta version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, Mario; Barret, Keith
2010-01-07
This TOPDAQ Acquisition Utility uses 5 digital cameras mounted on a vertical pole, maintained in a vertical position using sensors and actuators, to take photographs of an RP-2 or RP-3 module, one camera for each row (4) and one in the center for driving, when the module is at 0 degrees, or facing the eastern horizon. These photographs and other data collected at the same time the pictures are taken are analyzed by the TOPAAP Analysis Utility. The TOPCAT implemented by the TOPDAQ Acquisition Utility and TOPAAP Analysis Utility programs optimizes the alignment of each RP in a module on a parabolic trough solar collector array (SCA) to maximize the amount of solar energy intercepted by the solar receiver. The camera fixture and related hardware are mounted on a pickup truck and driven between rows in a parabolic trough solar power plant. An ultrasonic distance meter is used to maintain the correct distance between the cameras and the RP module. Along with the two leveling actuators, a third actuator is used to maintain a proper relative vertical position between the cameras and the RP module. The TOPDAQ Acquisition Utility facilitates file management by keeping track of which RP module data is being taken and also controls the exposure levels for each camera to maintain a high contrast ratio in the photograph even as the available daylight changes throughout the day. The theoretical TOPCAT hardware and software support the current industry standard RP-2 and RP-3 module geometries.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture the wide circumference. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that luminosity in the environment can be captured over the full 360 degrees of circumference in one image. We apply the light field method, one technique of Image-Based Rendering (IBR), to generate the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that many view directions are collected in the light field. Our method thus allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera, and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
NASA Astrophysics Data System (ADS)
Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O.
2017-05-01
This article presents a coupled system consisting of a single-frequency GPS receiver and a light photogrammetric quality camera embedded in an Unmanned Aerial Vehicle (UAV). The aim is to produce high quality data that can be used in metrology applications. The issue of Integrated Sensor Orientation (ISO) of camera poses using only GPS measurements is presented and discussed. The accuracy reached by our system based on sensors developed at the French Mapping Agency (IGN) Opto-Electronics, Instrumentation and Metrology Laboratory (LOEMI) is qualified. These sensors are specially designed for close-range aerial image acquisition with a UAV. Lever-arm calibration and time synchronization are explained and performed to reach maximum accuracy. All processing steps are detailed from data acquisition to quality control of final products. We show that an accuracy of a few centimeters can be reached with this system which uses low-cost UAV and GPS module coupled with the IGN-LOEMI home-made camera.
Robust free-space optical communication for indoor information environment
NASA Astrophysics Data System (ADS)
Nakada, Toyohisa; Itoh, Hideo; Kunifuji, Susumu; Nakashima, Hideyuki
2003-10-01
The purpose of our study is to establish robust communication, while keeping security and privacy, between a handheld communicator and the surrounding information environment. From the viewpoint of low power consumption, we have been developing a reflectivity-modulating communication module composed of a liquid crystal light modulator and a corner-reflecting mirror sheet. We installed a corner-reflecting sheet instead of a light-scattering sheet in a handheld videogame machine with a reflection-type liquid crystal display screen. An infrared (IR) LED illuminator attached next to the IR camera of a base station illuminates the whole room, and the terminals send their data to the base station by switching the reflected IR beam ON and OFF. The intensity of the reflected light differs with the position and the direction of the terminal, and sometimes the intensity of the OFF signal under one condition is brighter than that of the ON signal under another. To improve the communication quality, machine learning techniques are a possible solution. In this paper, we compare various machine learning techniques for the purpose of free-space optical communication, and propose a new algorithm that improves the robustness of the data link. Evaluation using an actual free-space communication system is also described.
Timing Calibration in PET Using a Time Alignment Probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, William W.; Thompson, Christopher J.
2006-05-05
We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods--using the Time Alignment Probe (which measures the time difference between the probe and each detector module) and using the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement--of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and conventional methods is equivalent.
MS Kavandi with camera in Service Module
2001-07-16
STS104-E-5125 (16 July 2001) --- Astronaut Janet L. Kavandi, STS-104 mission specialist, uses a camera as she floats through the Zvezda service module aboard the International Space Station (ISS). The five STS-104 crew members were visiting the orbital outpost to perform various tasks. The image was recorded with a digital still camera.
Low-intensity calibration source for optical imaging systems
NASA Astrophysics Data System (ADS)
Holdsworth, David W.
2017-03-01
Laboratory optical imaging systems for fluorescence and bioluminescence imaging have become widely available for research applications. These systems use an ultra-sensitive CCD camera to produce quantitative measurements of very low light intensity, detecting signals from small-animal models labeled with optical fluorophores or luminescent emitters. Commercially available systems typically provide quantitative measurements of light output, in units of radiance (photons s⁻¹ cm⁻² sr⁻¹) or intensity (photons s⁻¹ cm⁻²). One limitation to current systems is that there is often no provision for routine quality assurance and performance evaluation. We describe such a quality assurance system, based on an LED-illuminated thin-film transistor (TFT) liquid-crystal display module. The light intensity is controlled by pulse-width modulation of the backlight, producing radiance values ranging from 1.8 × 10⁶ photons s⁻¹ cm⁻² sr⁻¹ to 4.2 × 10¹³ photons s⁻¹ cm⁻² sr⁻¹. The lowest light intensity values are produced by very short backlight pulses (i.e. approximately 10 μs), repeated every 300 s. This very low duty cycle is appropriate for laboratory optical imaging systems, which typically operate with long-duration exposures (up to 5 minutes). The low-intensity light source provides a stable, traceable radiance standard that can be used for routine quality assurance of laboratory optical imaging systems.
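Under an idealized linear model, the time-averaged radiance of a pulse-width-modulated backlight is just the full-on radiance scaled by the duty cycle. The sketch below uses the figures quoted in the abstract and lands within a factor of about 1.3 of the stated 1.8 × 10⁶ floor, the residual reflecting effects the linear model ignores:

```python
def pwm_radiance(l_max, pulse_s, period_s):
    """Time-averaged radiance of a PWM backlight: peak radiance scaled by
    the duty cycle pulse/period (idealized linear model)."""
    return l_max * pulse_s / period_s

L_MAX = 4.2e13  # photons s^-1 cm^-2 sr^-1, full-on backlight (from the text)

# 10-microsecond pulse repeated every 300 s: duty cycle ~3.3e-8.
print(pwm_radiance(L_MAX, 10e-6, 300.0))  # ~1.4e6, same order as the 1.8e6 floor
```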
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched-capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, high voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
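Given a 4D light field L[u, v, s, t], images focused at other depths can be synthesized by shifting each angular sample in proportion to its pupil coordinate and averaging. This is the standard shift-and-add method, shown here as a rough sketch with integer shifts; it is not code from the paper:

```python
import numpy as np

def refocus(lightfield: np.ndarray, slope: float) -> np.ndarray:
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t].

    Each angular sample (u, v) is translated in (s, t) proportionally to its
    pupil coordinate and the chosen focal slope, then averaged; slope = 0
    reproduces the nominal focal plane."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```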
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005022 (20 Nov. 2013) --- At a window in the International Space Station's Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth's albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station's power supply. The light reflection phenomenon is measured in units called albedo.
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005014 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005023 (20 Nov. 2013) --- At a window in the International Space Station's Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth's albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station's power supply. The light reflection phenomenon is measured in units called albedo.
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005031 (20 Nov. 2013) --- At a window in the International Space Station's Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth's albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station's power supply. The light reflection phenomenon is measured in units called albedo.
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005016 (20 Nov. 2013) --- At a window in the International Space Station's Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth's albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station's power supply. The light reflection phenomenon is measured in units called albedo.
Kotov during Albedo Experiment in the SM
2013-11-18
ISS038-E-005019 (20 Nov. 2013) --- At a window in the International Space Station's Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth's albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station's power supply. The light reflection phenomenon is measured in units called albedo.
Portable, stand-off spectral imaging camera for detection of effluents and residues
NASA Astrophysics Data System (ADS)
Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason
2015-06-01
A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
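In Hadamard multiplexing, each detector reading is a ±1-weighted sum of scene elements, and the scene is recovered by applying the Hadamard matrix again, since it is self-inverse up to a scale factor. A toy version follows, ignoring the 0/1 constraint of a real micromirror device (practical systems use 0/1 S-matrix masks instead):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n).astype(float)   # +/-1 mask patterns, one row per measurement
x = np.random.rand(n)           # unknown spectral column (illustrative)

y = H @ x                       # multiplexed detector readings
x_rec = (H @ y) / n             # Sylvester H is symmetric with H @ H = n * I

assert np.allclose(x_rec, x)
```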
Variable field-of-view visible and near-infrared polarization compound-eye endoscope.
Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J
2012-01-01
A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging, as well as extremely deep focus, is presented. It is based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. A metal-wire-grid polarizer thin film, applicable to both visible and near-infrared light, is attached to the lenses in TOMBO and to the light sources. Control of the field of view, polarization, and wavelength of the illumination realizes several observation modes, such as three-dimensional shape measurement, wide field-of-view imaging, and close-up observation of superficial tissues and structures beneath the skin.
Characteristics of an Imaging Polarimeter for the Powell Observatory
NASA Astrophysics Data System (ADS)
Hall, Shannon W.; Henson, Gary D.
2010-07-01
A dual-beam imaging polarimeter has been built for use on the 14-inch Schmidt-Cassegrain telescope at the ETSU Harry D. Powell Observatory. The polarimeter includes a rotating half-wave plate and a Wollaston prism to separate light into two orthogonal linearly polarized rays. A thermoelectrically cooled CCD camera is used to detect the modulated polarized light. We present here measurements of the polarization of polarimetric standard stars. By measuring unpolarized and polarized standard stars we are able to establish the instrumental polarization and the efficiency of the instrument. The polarimeter will initially be used as a dedicated instrument in an ongoing project to monitor the eclipsing binary star, Epsilon Aurigae.
1972-04-27
The Apollo 16 Command Module splashed down in the Pacific Ocean on April 27, 1972 after an 11-day moon exploration mission. The 3-man crew is shown here aboard the rescue ship, USS Horton. From left to right are: Mission Commander John W. Young, Lunar Module pilot Charles M. Duke, and Command Module pilot Thomas K. Mattingly II. The sixth manned lunar landing mission, the Apollo 16 (SA-511) lifted off on April 16, 1972. The Apollo 16 mission continued the broad-scale geological, geochemical, and geophysical mapping of the Moon’s crust, begun by the Apollo 15, from lunar orbit. This mission marked the first use of the Moon as an astronomical observatory by using the ultraviolet camera/spectrograph which photographed ultraviolet light emitted by Earth and other celestial objects. The Lunar Roving Vehicle, developed by the Marshall Space Flight Center, was also used.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-01-01
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690
Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas
ERIC Educational Resources Information Center
Hayden, Lance Alan
2009-01-01
This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…
A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.
2002-10-01
We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules, resulting in a 1024-pixel camera covering an area of 9.6 cm × 9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm × 3 mm × 5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of 99mTc. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm and located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using an all-purpose and high-sensitivity parallel hexagonal holes collimator, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
SeeStar: an open-source, low-cost imaging system for subsea observations
NASA Astrophysics Data System (ADS)
Cazenave, F.; Kecy, C. D.; Haddock, S.
2016-02-01
Scientists and engineers at the Monterey Bay Aquarium Research Institute (MBARI) have collaborated to develop SeeStar, a modular, lightweight, self-contained, low-cost subsea imaging system for short- to long-term monitoring of marine ecosystems. SeeStar is composed of separate camera, battery, and LED lighting modules. Two versions of the system exist: one rated to 300 meters depth, the other rated to 1500 meters. Users can download plans and instructions from an online repository and build the system using low-cost off-the-shelf components. The system utilizes an easily programmable Arduino-based controller and the widely distributed GoPro camera. The system can be deployed in a variety of scenarios, taking still images and video, and can be operated either autonomously or tethered on a range of platforms, including ROVs, AUVs, landers, piers, and moorings. Several SeeStar systems have been built and used for scientific studies and engineering tests. The long-term goal of this project is to have a widely distributed marine imaging network across thousands of locations, to develop baselines of biological information.
Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy
Brooker, Gary; Siegel, Nisan; Wang, Victor; Rosen, Joseph
2011-01-01
Fresnel Incoherent Correlation Holography (FINCH) enables holograms and 3D images to be created from incoherent light with just a camera and spatial light modulator (SLM). We previously described its application to microscopic incoherent fluorescence wherein one complex hologram contains all the 3D information in the microscope field, obviating the need for scanning or serial sectioning. We now report experiments which have led to the optimal optical, electro-optic, and computational conditions necessary to produce holograms which yield high quality 3D images from fluorescent microscopic specimens. An important improvement from our previous FINCH configurations capitalizes on the polarization sensitivity of the SLM so that the same SLM pixels which create the spherical wave simulating the microscope tube lens, also pass the plane waves from the infinity corrected microscope objective, so that interference between the two wave types at the camera creates a hologram. This advance dramatically improves the resolution of the FINCH system. Results from imaging a fluorescent USAF pattern and a pollen grain slide reveal resolution which approaches the Rayleigh limit by this simple method for 3D fluorescent microscopic imaging. PMID:21445140
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Paulins, Paulis
2017-09-01
An experimental setup is presented that allows the modeling of conditions in optical devices and in the eye at various degrees of scattering, such as cataract pathology in human eyes. The scattering in cells of polymer-dispersed liquid crystals (PDLCs) and ‘Smart Glass’ windows is used in the modeling experiments. Both applications are used as optical obstacles placed at different positions in the optical information flow pathway, either directly on the stimuli demonstration computer screen or mounted directly after the image-formation lens of a digital camera. The degree of scattering is changed continuously by applying an AC voltage of 30-80 V to the PDLC cell. The setup uses a camera with 14-bit depth and a 24 mm focal length lens. Light-emitting diodes and diode-pumped solid-state lasers emitting radiation at different wavelengths are used as portable small-divergence light sources in the experiments. Image formation, the optical system point spread function, modulation transfer functions, and system resolution limits are determined for such sample optical systems in student optics and optometry experimental exercises.
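One way the modulation transfer function can be obtained in such an exercise is as the normalized Fourier magnitude of the measured point spread function. The toy sketch below, with Gaussian PSFs standing in for low and high PDLC scattering, shows the MTF falling faster as scattering broadens the PSF:

```python
import numpy as np

def mtf_from_psf(psf_1d: np.ndarray) -> np.ndarray:
    """MTF as the normalized magnitude of the PSF's Fourier transform."""
    otf = np.fft.rfft(psf_1d / psf_1d.sum())
    return np.abs(otf) / np.abs(otf[0])

x = np.linspace(-1, 1, 513)
for sigma in (0.02, 0.08):   # narrow vs. scatter-broadened PSF (toy values)
    psf = np.exp(-x**2 / (2 * sigma**2))
    print(sigma, mtf_from_psf(psf)[:5].round(3))  # wider PSF -> faster MTF falloff
```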
Blind guidance system based on laser triangulation
NASA Astrophysics Data System (ADS)
Wu, Jih-Huah; Wang, Jinner-Der; Fang, Wei; Lee, Yun-Parn; Shan, Yi-Chia; Kao, Hai-Ko; Ma, Shih-Hsin; Jiang, Joe-Air
2012-05-01
We propose a new guidance system for the blind, based on an optical triangulation method. The main components of the proposed system comprise a notebook computer, a camera, and two laser modules. The track image of the light beam on the ground or on the object is captured by the camera, and the image is then sent to the notebook computer for further processing and analysis. Using a developed signal-processing algorithm, our system can determine the object width and the distance between the object and the blind person through calculation of the light line positions on the image. A series of feasibility tests of the developed blind guidance system were conducted. The experimental results show that the distance between the test object and the blind person can be measured with a standard deviation of less than 8.5% within the range of 40 to 130 cm, while the test object width can be measured with a standard deviation of less than 4.5% within the same range. The designed system thus shows clear application potential for blind guidance.
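The paper's exact geometry is not reproduced in the abstract; a textbook laser-triangulation relation, assuming the beam is parallel to the optical axis at a known baseline, gives the flavor of the distance computation:

```python
def range_from_offset(baseline_m: float, focal_len_px: float,
                      spot_offset_px: float) -> float:
    """Textbook laser triangulation: a beam parallel to the optical axis at
    baseline b images at pixel offset d, so range z = f * b / d."""
    return focal_len_px * baseline_m / spot_offset_px

# e.g. 10 cm baseline, 800 px focal length, spot imaged 80 px off-center:
print(range_from_offset(0.10, 800, 80))  # 1.0 m
```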
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
Remote interferometry by digital holography for shape control
NASA Astrophysics Data System (ADS)
Baumbach, Torsten; Osten, Wolfgang; Falldorf, Claas; Jueptner, Werner P. O.
2002-06-01
Modern production requires more and more effective methods for the inspection and quality control at the production place. Outsourcing and globalization result in possible large distances between co-operating partners. This may cause serious problems with respect to the just-in-time exchange of information and the response to possible violations of quality standards. Consequently new challenges arise for optical measurement techniques especially in the field of industrial shape control. A possible solution for these problems can be delivered by a technique that stores optically the full 3D information of the objects to be compared and where the data can be transported over large distances. In this paper we describe the progress in implementing a new technique for the direct comparison of the shape and deformation of two objects with different microstructure where it is not necessary that both samples are located at the same place. This is done by creating a coherent mask for the illumination of the sample object. The coherent mask is created by Digital Holography to enable the instant access to the complete optical information of the master object at any wanted place. The transmission of the digital master holograms to this place can be done via digital telecommunication networks. The comparison can be done in a digital or analogue way. Both methods result in a disappearance of the object shape and the appearance of the shape or deformation difference between the two objects only. The analogue reconstruction of the holograms with a liquid crystal spatial light modulator can be done by using the light modulator as an intensity modulator or as a phase modulator. The reconstruction technique and the space bandwidth of the light modulator will influence the quality of the result. Therefore the paper describes the progress in applying modern spatial light modulators and digital cameras for the effective storage and optical reconstruction of coherent masks.
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, W.C.
1996-04-30
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, William C.
1996-01-01
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors.
Reductions in injury crashes associated with red light camera enforcement in oxnard, california.
Retting, Richard A; Kyrychenko, Sergey Y
2002-11-01
This study estimated the impact of red light camera enforcement on motor vehicle crashes in one of the first US communities to employ such cameras: Oxnard, California. Crash data were analyzed for Oxnard and for 3 comparison cities. Changes in crash frequencies were compared for Oxnard and control cities and for signalized and nonsignalized intersections by means of a generalized linear regression model. Overall, crashes at signalized intersections throughout Oxnard were reduced by 7% and injury crashes were reduced by 29%. Right-angle crashes, those most associated with red light violations, were reduced by 32%; right-angle crashes involving injuries were reduced by 68%. Because red light cameras can be a permanent component of the transportation infrastructure, crash reductions attributed to camera enforcement should be sustainable.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flame is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system, in which a telephoto camera is equipped as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera, providing enough information for recognition when the resolution of the traffic sign in the wide angle image is too low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle and telephoto cameras. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented, demonstrating that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
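The abstract does not specify the transformation itself; a simple illustration of the lighting-invariance idea is normalized-rgb chromaticity, which divides out overall intensity so that uniform brightness changes cancel (a stand-in, not the authors' transform):

```python
import numpy as np

def chromaticity(rgb: np.ndarray) -> np.ndarray:
    """Normalized-rgb chromaticity: dividing each channel by the total
    intensity removes (approximately) uniform brightness changes."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9   # avoid divide-by-zero
    return rgb / s

# A pixel and the same pixel under 3x brighter illumination map together:
print(chromaticity(np.array([60, 20, 20])))    # [0.6 0.2 0.2]
print(chromaticity(np.array([180, 60, 60])))   # [0.6 0.2 0.2]
```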
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
A high-precision acquisition, tracking, and pointing (ATP) system is one of the key techniques of laser communication. The spot-detecting camera detects the direction of the beacon in the laser communication link, so that the ATP system can obtain the position information of the communication terminal. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system requires high-precision target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. However, the intensity of the beacon changes greatly during communication, owing to distance, atmospheric scintillation, weather, and so on. The output signal of the detector is insufficient when the camera underexposes the beacon at low light intensity; conversely, it saturates when the camera overexposes the beacon at high light intensity. The accuracy of the centroid algorithm then degrades, and the positioning accuracy of the camera is reduced markedly. To maintain accuracy, space-based cameras should regulate their exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which utilizes the adaptive-exposure algorithm to adjust exposure time. Test results from an imaging experiment system verify the design and prove that it restrains the loss of positioning accuracy under changing light intensity, so the camera maintains stable, high positioning accuracy during communication.
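A minimal sketch of the two ingredients named above: an intensity-weighted centroid for sub-pixel spot location, and one possible proportional exposure-update rule (the paper's actual control law is not given in the abstract):

```python
import numpy as np

def spot_centroid(img: np.ndarray):
    """Intensity-weighted centroid of the beacon spot (sub-pixel)."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def next_exposure(exp_s: float, peak: float, full_scale: float,
                  target: float = 0.6) -> float:
    """Toy proportional rule: steer the brightest pixel toward a target
    fraction of full scale to avoid both under- and over-exposure."""
    return exp_s * (target * full_scale) / max(peak, 1.0)
```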
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Schneider, Glenn H.; Marley, Mark S.; Showman, Adam P.
2016-02-01
Rotational modulations of brown dwarfs have recently provided powerful constraints on the properties of ultra-cool atmospheres, including longitudinal and vertical cloud structures and cloud evolution. Furthermore, periodic light curves directly probe the rotational periods of ultra-cool objects. We present here, for the first time, time-resolved high-precision photometric measurements of a planetary-mass companion, 2M1207b. We observed the binary system with Hubble Space Telescope/Wide Field Camera 3 in two bands and with two spacecraft roll angles. Using point-spread function-based photometry, we reach a nearly photon-noise limited accuracy for both the primary and the secondary. While the primary is consistent with a flat light curve, the secondary shows modulations that are clearly detected in the combined light curve as well as in different subsets of the data. The amplitudes are 1.36% in the F125W and 0.78% in the F160W filters, respectively. By fitting sine waves to the light curves, we find a consistent period of 10.7 (+1.2, −0.6) hr and similar phases in both bands. The J- and H-band amplitude ratio of 2M1207b is very similar to a field brown dwarf that has identical spectral type but different J-H color. Importantly, our study also measures, for the first time, the rotation period for a directly imaged extra-solar planetary-mass companion.
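Period and amplitude estimates of this kind come from least-squares sine fits; a minimal sketch with synthetic data using 2M1207b-like values (all numbers illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, period, phase, offset):
    return amp * np.sin(2 * np.pi * t / period + phase) + offset

# Synthetic normalized light curve: ~1.36% amplitude, 10.7 hr period.
t = np.linspace(0, 8, 120)                        # hours of coverage
flux = sine(t, 0.0136, 10.7, 0.3, 1.0) + np.random.normal(0, 1e-3, t.size)

popt, pcov = curve_fit(sine, t, flux, p0=[0.01, 10, 0, 1])
amp, period = popt[0], popt[1]
print(f"amplitude {100 * abs(amp):.2f}%, period {period:.1f} hr")
```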
Iglehart, Brian
2018-05-01
Laboratory automation improves test reproducibility, which is vital to patient care in clinical laboratories. Many small and specialty laboratories are excluded from the benefits of automation due to low sample number, cost, space, and/or lack of automation expertise. The Minimum Viable Option (MVO) automation platform was developed to address these hurdles and fulfill an unmet need. Consumer 3D printing enabled rapid iterative prototyping to allow for a variety of instrumentation and assay setups and procedures. Three MVO versions have been produced. MVOv1.1 successfully performed part of a clinical assay, and results were comparable to those of commercial automation. Raspberry Pi 3 Model B (RPI3) single-board computers with Sense Hardware Attached on Top (HAT) and Raspberry Pi Camera Module V2 hardware were remotely accessed and evaluated for their suitability to qualify the latest MVOv1.2 platform. Sense HAT temperature, barometric pressure, and relative humidity sensors were stable in climate-controlled environments and are useful in identifying appropriate laboratory spaces for automation placement. The RPI3 with camera plus a digital dial indicator was used to log axis-travel experiments. The RPI3 with camera and Sense HAT as a light source showed promise when used for photometric dispensing tests. Individual well standard curves were necessary for well-to-well light and path-length compensations.
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
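The abstract names OpenCV but not the specific detector; the stock HOG-plus-linear-SVM people detector is one representative choice, sketched here with a hypothetical frame path:

```python
import cv2

# OpenCV's built-in HOG + linear-SVM people detector; the SAPIENT module's
# exact OpenCV pipeline is not specified, so this is representative only.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("ptz_frame.jpg")   # hypothetical PTZ capture
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```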
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Image quality testing of assembled IR camera modules
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik
2013-10-01
Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements for the imaging performance of objectives and for proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, the suitability for fully automated measurements in mass production.
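As a sketch of one common way an MTF curve is computed from measured data (not necessarily the implementation used in this test equipment), the following derives the MTF from a measured edge spread function by differentiating to the line spread function and taking the normalized FFT magnitude.

```python
# Minimal sketch: MTF from a measured edge profile. ESF -> LSF by differentiation,
# then normalized FFT magnitude. dx is the sample spacing (pixels or mm).
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    lsf = np.gradient(esf, dx)               # LSF = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)         # window to suppress edge artifacts
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                            # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=dx)  # cycles per pixel (or per mm)
    return freqs, mtf
```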
1972-04-18
This view of the back side of the Moon was captured by the Apollo 16 mission crew. Apollo 16 (SA-511), the sixth manned lunar landing mission, carrying three astronauts: Mission Commander John W. Young, Command Module pilot Thomas K. Mattingly II, and Lunar Module pilot Charles M. Duke, lifted off on April 16, 1972. Apollo 16 continued the broad-scale geological, geochemical, and geophysical mapping of the Moon's crust, begun by Apollo 15, from lunar orbit. This mission marked the first use of the Moon as an astronomical observatory, employing the ultraviolet camera/spectrograph, which photographed ultraviolet light emitted by Earth and other celestial objects. The Lunar Roving Vehicle, developed by the Marshall Space Flight Center, was also used. The mission ended on April 27, 1972.
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses is provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
Safety Evaluation of Red Light Running Camera Intersections in Illinois
DOT National Transportation Integrated Search
2017-04-01
As a part of this research, the safety performance of red light running (RLR) camera systems was evaluated for a sample of 41 intersections and 60 RLR camera approaches located on state routes under IDOT's jurisdiction in the Chicago suburbs. Compr...
Pinhole Cameras: For Science, Art, and Fun!
ERIC Educational Resources Information Center
Button, Clare
2007-01-01
A pinhole camera is a camera without a lens. A tiny hole replaces the lens, and light is allowed to come in for a short amount of time by means of a hand-operated shutter. The pinhole allows only a very narrow beam of light to enter, which reduces confusion due to scattered light on the film. This results in an image that is focused, reversed, and…
Biological applications of an LCoS-based programmable array microscope (PAM)
NASA Astrophysics Data System (ADS)
Hagen, Guy M.; Caarls, Wouter; Thomas, Martin; Hill, Andrew; Lidke, Keith A.; Rieger, Bernd; Fritsch, Cornelia; van Geest, Bert; Jovin, Thomas M.; Arndt-Jovin, Donna J.
2007-02-01
We report on a new generation, commercial prototype of a programmable array optical sectioning fluorescence microscope (PAM) for rapid, light-efficient 3D imaging of living specimens. The stand-alone module, including light source(s) and detector(s), features an innovative optical design and a ferroelectric liquid-crystal-on-silicon (LCoS) spatial light modulator (SLM) instead of the DMD used in the original PAM design. The LCoS PAM (developed in collaboration with Cairn Research, Ltd.) can be attached to a port of a(ny) unmodified fluorescence microscope. The prototype system currently operated at the Max Planck Institute incorporates a 6-position high-intensity LED illuminator, modulated laser and lamp light sources, and an Andor iXon emCCD camera. The module is mounted on an Olympus IX71 inverted microscope with 60-150X objectives and Prior Scientific x, y, and z high-resolution scanning stages. Recent enhancements include: (i) point- and line-wise spectral resolution and (ii) lifetime imaging (FLIM) in the frequency domain. Multiphoton operation and other nonlinear techniques should be feasible. The capabilities of the PAM are illustrated by several examples demonstrating single-molecule as well as lifetime imaging in live cells, and the unique capability to perform photoconversion with arbitrary patterns and high spatial resolution. Using quantum-dot-coupled ligands we show real-time binding and subsequent trafficking of individual ligand-growth factor receptor complexes on and in live cells with a temporal resolution and sensitivity exceeding those of conventional CLSM systems. The combined use of a blue laser and parallel LED or visible laser sources permits photoactivation and rapid kinetic analysis of cellular processes probed by photoswitchable visible fluorescent proteins such as DRONPA.
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system with the capability to track and provide real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Position System (GPS)-denied environments.
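The laser marker's 3D position can in principle be recovered from any two calibrated cameras of the array; a minimal sketch using OpenCV triangulation follows, with the 3x4 projection matrices P1 and P2 assumed to come from the paper's outdoor calibration step (the authors' actual multi-camera solver may differ).

```python
# Hedged sketch: recover a laser marker's 3D position from two calibrated views.
# P1, P2 are hypothetical 3x4 camera projection matrices from prior calibration.
import cv2
import numpy as np

def triangulate_marker(P1, P2, pt1, pt2):
    """pt1, pt2: (x, y) pixel coordinates of the marker in each view."""
    a = np.array(pt1, dtype=float).reshape(2, 1)
    b = np.array(pt2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, a, b)  # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()          # Euclidean (X, Y, Z)
```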
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to the non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
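A minimal sketch of this kind of classifier, assuming scikit-learn; the feature values and labels below are purely illustrative, since the authors' dataset and exact feature engineering are not given here.

```python
# Illustrative SVM pass/fail predictor for OIS modules from measured features:
# [gyro noise spectral density, hall linearity, actuator cross-axis movement].
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_train = np.array([[0.011, 0.98, 0.02],   # hypothetical module measurements
                    [0.031, 0.90, 0.08],
                    [0.014, 0.97, 0.03]])
y_train = np.array([1, 0, 1])              # 1 = pass, 0 = fail (illustrative)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict([[0.013, 0.96, 0.04]]))  # predict quality of a new module
```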
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, allowing multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry as well as surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a similar hardware setup to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large curvature models, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.
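Intensity-based PSP processing typically maps the wind-off/wind-on intensity ratio to pressure through a Stern-Volmer calibration, I_ref/I = A + B(P/P_ref); a minimal sketch follows, with the calibration constants A and B assumed known (the paper does not give its exact calibration form).

```python
# Sketch of intensity-ratio PSP processing: per-pixel wind-off/wind-on ratio
# mapped to surface pressure via a Stern-Volmer calibration I_ref/I = A + B*(P/P_ref).
import numpy as np

def psp_pressure(wind_off, wind_on, A, B, p_ref):
    """wind_off, wind_on: co-registered intensity images; A, B from calibration."""
    ratio = wind_off / wind_on          # per-pixel intensity ratio
    return p_ref * (ratio - A) / B      # per-pixel surface pressure map
```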
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization [1]. Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light of 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially-reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve the current commercial cameras. As a result, we can conclude that low-cost range imagers can increase swimming pool safety, by inserting a post-processing filter and the use of another light source.
Skupsch, C; Chaves, H; Brücker, C
2011-08-01
The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single CCD cameras. The laser provides light pulses with an energy in the range of 25 mJ and a pulse duration of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different light fiber lengths enable a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed. The initial phase of the formation of a jet structure is recorded.
1972-06-06
S72-40820 (21 April 1972) --- A color enhancement of a photograph taken in ultraviolet light showing the spectrum of the upper atmosphere of Earth and the geocorona. The bright horizontal line is far-ultraviolet emission (1216 angstrom) of hydrogen extending 10 degrees (40,000 miles) to either side of Earth. The knobby vertical line shows several ultraviolet emissions from Earth's sunlit atmosphere, each "lump" being produced by one type of gas (oxygen, nitrogen, helium, etc.). The spectral dispersion is about 10 angstrom per millimeter on this enlargement. The UV camera/spectrograph was operated on the lunar surface by astronaut John W. Young, commander of the Apollo 16 lunar landing mission. It was designed and built at the Naval Research Laboratory, Washington, D.C. While astronauts Young and Charles M. Duke Jr., lunar module pilot, descended in the Lunar Module (LM) "Orion" to explore the Descartes highlands region of the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Phase-stepping fiber-optic projected fringe system for surface topography measurements
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R. (Inventor); Beheim, Glenn (Inventor)
1992-01-01
A projected fringe interferometer for measuring the topography of an object is presented. The interferometer periodically steps the phase angle θ between a pair of light beams emanating from a common source. The steps are π/2 radians (90°) apart, and at each step a video image of the fringes is recorded and stored. Photodetectors measure either the phase angle θ of the beams or 2θ; either measurement can be used to control one of the light beams so that the 90° step is accurately maintained. A camera, a computer, a phase controller, and a phase modulator establish closed-loop control of θ. Measuring the phase map of a flat surface establishes a calibration reference.
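A system that steps the phase in π/2 increments and records a fringe image at each step supports the standard four-step phase-recovery formula; the sketch below shows that formula, not the patent's own algorithm.

```python
# Standard four-step phase-shifting recovery: with fringe images I1..I4 taken at
# phase steps 0, pi/2, pi, 3*pi/2, the wrapped phase follows from an arctangent.
import numpy as np

def phase_from_four_steps(I1, I2, I3, I4):
    """I1..I4: co-registered fringe images (arrays). Returns wrapped phase."""
    return np.arctan2(I4 - I2, I1 - I3)  # wrapped phase in [-pi, pi]

# Example: np.unwrap(phase_from_four_steps(I1, I2, I3, I4)[row]) gives a
# continuous phase profile along one image row, proportional to surface height.
```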
Laser speckle visibility acoustic spectroscopy in soft turbid media
NASA Astrophysics Data System (ADS)
Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard
2014-03-01
We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, and the speckle visibility [2] is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam [3]. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter, such as biological tissues, pastes or concentrated emulsions.
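A minimal sketch of the windowed speckle-contrast computation implied here: local contrast K = σ/mean over small blocks of a camera frame, which the acoustic wave modulates along the surface. The window size is an illustrative assumption.

```python
# Sketch: per-window speckle contrast K = std/mean of a single camera frame.
import numpy as np

def speckle_contrast(frame, win=8):
    h, w = frame.shape
    h, w = h - h % win, w - w % win          # crop to a whole number of windows
    blocks = (frame[:h, :w]
              .reshape(h // win, win, w // win, win)
              .swapaxes(1, 2))               # shape: (rows, cols, win, win)
    mean = blocks.mean(axis=(2, 3))
    std = blocks.std(axis=(2, 3))
    return std / mean                        # one contrast value per window
```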
Unmanned Aircraft Systems For CryoSat-2 Validation
NASA Astrophysics Data System (ADS)
Crocker, Roger Ian; Maslanik, James A.
2011-02-01
A suite of sensors has been assembled to map surface elevation with fine resolution from small unmanned aircraft systems (UAS). The sensor package consists of a light detection and ranging (LIDAR) instrument, an inertial measurement unit (IMU), a GPS module, and digital still and video cameras. It has been utilized to map ice sheet topography in Greenland and to measure sea ice freeboard and roughness in Fram Strait. Data collected during these campaigns illustrate its potential to complement ongoing CryoSat-2 (CS-2) calibration and validation efforts.
2013-06-24
ISS036-E-019783 (24 June 2013) --- In the International Space Station’s Destiny laboratory, a fisheye lens attached to an electronic still camera was used to capture this image of NASA astronaut Karen Nyberg, Expedition 36 flight engineer, as she conducts a session with the Advanced Colloids Experiment (ACE)-1 sample preparation at the Light Microscopy Module (LMM) in the Fluids Integrated Rack / Fluids Combustion Facility (FIR/FCF). ACE-1 is a series of microscopic imaging investigations that uses the microgravity environment to examine flow characteristics and the evolution and ordering effects within a group of colloidal materials.
A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.
Shen, Bailey Y; Mukai, Shizuo
2017-01-01
Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
DOT National Transportation Integrated Search
2006-03-01
This report presents results from an analysis of about 47,000 red light violation records collected from 11 intersections in the City of Sacramento, California, by red light photo enforcement cameras between May 1999 and June 2003. The goal of this...
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand-shaking and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach is performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
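A much-simplified sketch of the core fusion idea follows; it is illustrative only, since the paper's actual pipeline also includes chromatic noise removal and a two-scale decomposition not shown here, and the motion threshold is an assumption.

```python
# Illustrative sketch: average several short-exposure frames, rejecting pixels
# that moved relative to the reference, to trade many noisy captures for one
# cleaner high-ISO image (crude ghost rejection only).
import numpy as np

def fuse_short_exposures(frames, motion_thresh=12.0):
    ref = frames[0].astype(np.float32)
    acc, count = ref.copy(), np.ones_like(ref)
    for f in frames[1:]:
        f = f.astype(np.float32)
        keep = np.abs(f - ref) < motion_thresh  # reject pixels that moved
        acc += np.where(keep, f, 0.0)
        count += keep
    return acc / count                          # per-pixel average of kept frames
```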
Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes
NASA Astrophysics Data System (ADS)
Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.
2017-12-01
Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic VHE gamma-ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) are emerging as an alternative. This rapidly evolving technology has strong potential to become superior to PMTs in terms of PDE, which would further improve the sensitivity of IACTs, and to see a price reduction per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We describe the solutions we are exploring in order to balance competitive performance with minimal impact on the overall MAGIC camera design, using ray-tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations, considering the properties of the major hardware elements of an IACT.
NASA Astrophysics Data System (ADS)
Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.
2007-02-01
We present a design for a frequency-domain instrument that allows simultaneous gathering of magnetic resonance and diffuse optical tomographic imaging data. This small-animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded-index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow the determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends the traditional photography by capturing both spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capture light field. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach. Both spatial and angular resolution of captured light field is enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
NASA Astrophysics Data System (ADS)
Hahn, A.; Mazin, D.; Bangale, P.; Dettlaff, A.; Fink, D.; Grundner, F.; Haberer, W.; Maier, R.; Mirzoyan, R.; Podkladkin, S.; Teshima, M.; Wetteskind, H.
2017-02-01
The MAGIC collaboration operates two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) on the Canary Island of La Palma. Each of the two telescopes is currently equipped with a photomultiplier tube (PMT) based imaging camera. Due to advances in the development of Silicon Photomultipliers (SiPMs), they are becoming a widely used alternative to PMTs in many research fields, including gamma-ray astronomy. Within the Otto-Hahn group at the Max Planck Institute for Physics, Munich, we are developing a SiPM-based detector module for a possible upgrade of the MAGIC cameras and also for future experiments such as the Large Size Telescopes (LST) of the Cherenkov Telescope Array (CTA). Because of the small size of individual SiPM sensors (6 mm × 6 mm) with respect to the 1-inch diameter PMTs currently used in MAGIC, we use a custom-made matrix of SiPMs to cover the same detection area. We developed an electronic circuit to actively sum up and amplify the SiPM signals. Existing non-imaging hexagonal light concentrators (Winston cones) used in MAGIC have been modified for the angular acceptance of the SiPMs by using C++ based ray-tracing simulations. The first prototype detector module includes seven channels and was installed into the MAGIC camera in May 2015. We present the results of the first prototype and its performance, as well as the status of the project, and discuss its challenges.
External Mask Based Depth and Light Field Camera
2013-12-08
An improved apparatus of infrared videopupillography for monitoring pupil size
NASA Astrophysics Data System (ADS)
Huang, T.-.; Ko, M.-.; Ouyang, Y.; Chen, Y.-.; Sone, B.-.; Ou-Yang, M.; Chiou, J.-.
2014-10-01
Intraocular pressure (IOP) can generally be used to diagnose or track glaucoma because it is one of the physiological parameters associated with the disease, but IOP is not easy to measure consistently under different measurement conditions. In addition, diabetes is associated with diabetic autonomic neuropathy (DAN). The pupil size response might provide an indirect probe of neuronal pathways, so an abnormal pupil size may be related to DAN. Hence, an infrared videopupillography is needed for tracking glaucoma and for exploring the relation between pupil size and DAN. Our previous research proposed an infrared videopupillography to monitor pupil size under different light stimuli in a dark room. This portable infrared videopupillography contains a camera, a beam splitter, visible-light LEDs for stimulating the eyes, and infrared LEDs for illuminating the eyes, and it can be mounted on any eyeglass frame. However, it could be adjusted in only two dimensions, so the eye image could not be zoomed in or out. Moreover, the eye diameter curves were not smooth but jagged because of light spots, lone eyelashes, and blinks. Therefore, we redesigned the optical path of our device to allow three-dimensional adjustment. We can then zoom in on the eye to increase the image resolution and avoid the LED light spots; the light spots can be removed by appropriately setting the distance between the IR LED and the CCD. This device has a smaller volume and lower cost than our previous videopupillography. We hope the new infrared videopupillography proposed in this paper can achieve early detection of autonomic neuropathy in the future.
NASA Astrophysics Data System (ADS)
Baruch, Daniel; Abookasis, David
2017-04-01
The application of optical techniques as tools for biomedical research has generated substantial interest owing to the ability of such methodologies to simultaneously measure biochemical and morphological parameters of tissue. Ongoing optimization of optical techniques may introduce such tools as alternatives or complements to conventional methodologies. The common approach shared by current optical techniques lies in the independent acquisition of the tissue's optical properties (i.e., absorption and reduced scattering coefficients) from reflected or transmitted light. Such optical parameters, in turn, provide detailed information regarding both the concentrations of clinically relevant chromophores and macroscopic structural variations in tissue. We couple a noncontact optical setup with a simple analysis algorithm to obtain the absorption and scattering coefficients of biological samples under test. Technically, a portable picoprojector projects serial sinusoidal patterns at low and high spatial frequencies, while the reflected diffuse light is simultaneously acquired through a single spectrometer and two separate CCD cameras, each behind a different bandpass filter at a nonisosbestic or isosbestic wavelength. This configuration fills the gaps in each instrument's capabilities for acquiring the optical properties of tissue at high spectral and spatial resolution. Experiments were performed on both tissue-mimicking phantoms and the hands of healthy human volunteers to quantify their optical properties as a proof of concept for the present technique. In a separate experiment, we derived the optical properties of the hand skin from the measured diffuse reflectance, based on a recently developed camera model. Additionally, tissue oxygen saturation levels measured by the system were found to agree well with reference values. Taken together, the present results demonstrate the potential of this integrated setup for diagnostic and research applications.
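Projecting sinusoidal patterns at several spatial frequencies and demodulating the reflected light is the standard spatial-frequency-domain approach; a minimal sketch of the common three-phase demodulation follows, which may differ from the authors' exact algorithm (the inversion from AC/DC reflectance to absorption and scattering is omitted).

```python
# Sketch: three sinusoidal projections shifted by 120 degrees yield the AC and
# DC diffuse reflectance at one spatial frequency (standard SFDI demodulation).
import numpy as np

def sfdi_demodulate(I1, I2, I3):
    """I1, I2, I3: co-registered images at 0, 120, 240 degree phase shifts."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2)
    dc = (I1 + I2 + I3) / 3.0
    return ac, dc  # inputs to the subsequent optical-property inversion
```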
NASA Astrophysics Data System (ADS)
Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James
2017-01-01
A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
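A minimal sketch of acquiring the raw Bayer data that such a characterization relies on, using the picamera package's documented bayer option; the gain and exposure values are illustrative assumptions.

```python
# Sketch: capture raw Bayer data from the V2.1 camera module. bayer=True appends
# the unprocessed Bayer data to the JPEG; fixed gain/exposure keep the response
# linear, as the radiometric characterization requires.
import io
from picamera import PiCamera

stream = io.BytesIO()
with PiCamera() as camera:
    camera.iso = 100                 # fixed sensor gain (illustrative)
    camera.shutter_speed = 10000     # fixed 10 ms exposure, in microseconds
    camera.exposure_mode = 'off'     # lock automatic exposure
    camera.awb_mode = 'off'          # lock automatic white balance
    camera.awb_gains = (1.0, 1.0)
    camera.capture(stream, format='jpeg', bayer=True)  # raw Bayer appended
```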
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
NASA Astrophysics Data System (ADS)
Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.
2010-06-01
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
Evaluating the Impacts of Red Light Camera Deployment on Intersection Traffic Safety
DOT National Transportation Integrated Search
2018-06-01
Red-light cameras (RLC) are a popular countermeasure to reduce red-light running and improve intersection safety. Studies show that the reduction in side-impact crashes at RLC intersections is often accompanied by no change or an increase in the num...
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
Imaging of gaseous oxygen through DFB laser illumination
NASA Astrophysics Data System (ADS)
Cocola, L.; Fedel, M.; Tondello, G.; Poletto, L.
2016-05-01
A Tunable Diode Laser Absorption Spectroscopy (TDLAS) setup with wavelength modulation has been used together with a synchronously sampled imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760 nm DFB source was used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and the laser light modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy (WMS) was replaced by image processing techniques, and many scanning periods were averaged together to resolve small intensity variations over the already weak absorption signals of the oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was used as the single-detector signal in a traditional TDLAS-WMS setup, and so processed through a software-defined digital lock-in demodulation and a second-harmonic signal fitting routine. In this way the WMS artifacts of a gas absorption feature were obtained from each pixel together with an intensity normalization parameter, allowing a reconstruction of the oxygen distribution in a two-dimensional scene regardless of the broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
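A minimal sketch of the per-pixel software lock-in at the second harmonic follows; the harmonic fitting routine and WMS artifact extraction are omitted, and the modulation frequency f_mod and sampling rate fs are assumed known.

```python
# Sketch: software-defined 2f lock-in demodulation of one pixel's time-domain
# waveform, as used in wavelength modulation spectroscopy.
import numpy as np

def lockin_2f(signal, f_mod, fs):
    """signal: 1-D pixel waveform; f_mod: modulation frequency; fs: sample rate."""
    t = np.arange(signal.size) / fs
    ref_i = np.sin(2 * np.pi * 2 * f_mod * t)  # in-phase 2f reference
    ref_q = np.cos(2 * np.pi * 2 * f_mod * t)  # quadrature 2f reference
    i = np.mean(signal * ref_i)                # low-pass filtering via averaging
    q = np.mean(signal * ref_q)
    return np.hypot(i, q)                      # 2f signal magnitude
```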
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk of errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. Operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses the face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results show uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. The expert and intermediate-experience groups performed significantly better in task time and camera stability during three repetitions than the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups during three repetitions (P < .007). The results of this study show face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery. © The Author(s) 2013.
ERIC Educational Resources Information Center
Fisher, Diane K.; Novati, Alexander
2009-01-01
On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
A Simple Spectrophotometer Using Common Materials and a Digital Camera
ERIC Educational Resources Information Center
Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal
2011-01-01
A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
Creating and Using a Camera Obscura
ERIC Educational Resources Information Center
Quinnell, Justin
2012-01-01
The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…
Differential dynamic microscopy to characterize Brownian motion and bacteria motility
NASA Astrophysics Data System (ADS)
Germain, David; Leocmach, Mathieu; Gibaud, Thomas
2016-03-01
We have developed a lab module for undergraduate students, which involves the process of quantifying the dynamics of a suspension of microscopic particles using Differential Dynamic Microscopy (DDM). DDM is a relatively new technique that constitutes an alternative method to more classical techniques such as dynamic light scattering (DLS) or video particle tracking (VPT). The technique consists of imaging a particle dispersion with a standard light microscope and a camera and analyzing the images using a digital Fourier transform to obtain the intermediate scattering function, an autocorrelation function that characterizes the dynamics of the dispersion. We first illustrate DDM in the textbook case of colloids under Brownian motion, where we measure the diffusion coefficient. Then we show that DDM is a pertinent tool to characterize biological systems such as motile bacteria.
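A minimal sketch of the core DDM computation on an image stack follows, assuming numpy; the azimuthal average over wavevector magnitude and the fit of the intermediate scattering function are omitted.

```python
# Sketch: image structure function D(q, tau) for DDM. For each lag, Fourier-
# transform image differences and average their squared modulus over start times.
import numpy as np

def ddm_structure_function(frames, lags):
    """frames: (T, H, W) array of microscope images; returns {lag: 2-D D(qx, qy)}."""
    T = frames.shape[0]
    D = {}
    for lag in lags:
        diffs = frames[lag:] - frames[:T - lag]      # all image pairs at this lag
        ft = np.fft.fft2(diffs)                      # FFT of each difference image
        D[lag] = np.mean(np.abs(ft) ** 2, axis=0)    # average over start times
    return D
```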
Leveraging traffic and surveillance video cameras for urban traffic.
DOT National Transportation Integrated Search
2014-12-01
The objective of this project was to investigate the use of existing video resources, such as traffic cameras, police cameras, red light cameras, and security cameras for the long-term, real-time collection of traffic statistics. An additional ob...
X-ray detectors at the Linac Coherent Light Source.
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt
2015-05-01
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
Space telescope low scattered light camera - A model
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Kuper, T. G.; Shack, R. V.
1982-01-01
A design approach for a camera to be used with the space telescope is given. Camera optics relay the system pupil onto an annular Gaussian ring apodizing mask to control scattered light. One- and two-dimensional models of ripple on the primary mirror were calculated. Scattered light calculations using ripple amplitudes between λ/20 and λ/200, with spatial correlations of the ripple across the primary mirror between 0.2 and 2.0 centimeters, indicate that the detection of an object a billion times fainter than a bright source in the field is possible. Detection of a Jovian-type planet in orbit about Alpha Centauri with a camera on the space telescope may be possible.
Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera
NASA Astrophysics Data System (ADS)
Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi
2015-12-01
This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
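One standard way to propagate the converted complex amplitude to the CGH plane is the angular spectrum method; a minimal numpy sketch follows. The paper does not specify which propagation kernel it uses, so this is an assumption; wavelength wl, pixel pitch dx, and distance z are assumed inputs.

```python
# Sketch: angular spectrum propagation of a complex field u0 over distance z.
import numpy as np

def angular_spectrum(u0, wl, dx, z):
    """u0: 2-D complex field; wl: wavelength; dx: pixel pitch; z: distance."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)                    # spatial frequencies (x)
    fy = np.fft.fftfreq(ny, d=dx)                    # spatial frequencies (y)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wl ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # clamp evanescent components
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)         # field at the CGH plane
```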
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, arriving at a preflight imaging strategy, and revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
MS Lucid places samples in the TEHOF aboard the Spektr module
1997-03-26
STS079-S-082 (16-26 Sept. 1996) --- Cosmonaut guest researcher Shannon W. Lucid and Valeri G. Korzun, her Mir-22 commander, are pictured in the Spektr Module aboard Russia's Earth-orbiting Mir Space Station. Korzun was the third of four commanders that Lucid served with during her record-setting 188 consecutive days in space. Later, Lucid returned to Earth with her fourth commander, astronaut William F. Readdy, and five other NASA astronauts to complete the STS-79 mission. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the space shuttle Atlantis and the various Mir modules. A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and has excellent performance compared to existing methods.
Characterization of Vegetation using the UC Davis Remote Sensing Testbed
NASA Astrophysics Data System (ADS)
Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.
2006-12-01
Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight, and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as image applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
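Since the abstract mentions computing vegetation indices such as the NDVI from the red and NIR channels, here is a minimal sketch of that calculation; the array names and synthetic data are illustrative assumptions.

```python
# Minimal NDVI sketch, assuming co-registered red and NIR channel arrays.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps guards against divide-by-zero

# Example with synthetic 8-bit channel data standing in for camera frames:
nir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
red = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
index_map = ndvi(nir, red)                   # values roughly in [-1, 1]
```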
Dynamics of a two-phase flow through a minichannel: Transition from churn to slug flow
NASA Astrophysics Data System (ADS)
Górski, Grzegorz; Litak, Grzegorz; Mosdorf, Romuald; Rysak, Andrzej
2016-04-01
The churn-to-slug flow bifurcations of two-phase (air-water) flow patterns in a 2 mm diameter minichannel were investigated. With increasing water flow rate, we observed the transition from slugs to bubbles of different sizes. The process was recorded by a digital camera. The sequences of light transmission time series were recorded by a laser-phototransistor sensor and then analyzed using recurrence plots and recurrence quantification analysis (RQA). Due to the volume dependence of bubble velocities, we observed the formation of periodic modulations in the laser signal.
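For readers unfamiliar with the recurrence-plot analysis named above, the sketch below builds a basic recurrence matrix from a one-dimensional time series; the embedding dimension, delay, and threshold are illustrative choices, not the authors' settings.

```python
# Minimal recurrence-plot sketch for a 1-D transmission signal.
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, threshold=0.1):
    """Binary recurrence matrix of a time-delay embedded signal."""
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: row i is (x[i], x[i+delay], ..., x[i+(dim-1)*delay]).
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < threshold * dists.max()).astype(np.uint8)

# A noisy periodic signal stands in for the laser-phototransistor series.
signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.05 * np.random.randn(500)
rp = recurrence_matrix(signal)
recurrence_rate = rp.mean()                  # simplest RQA measure
```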
ARTIST CONCEPT - ASTRONAUT WORDEN'S EXTRAVEHICULAR ACTIVITY (EVA) (APOLLO XV)
1971-07-09
S71-39614 (July 1971) --- An artist's concept of the Apollo 15 Command and Service Modules (CSM), showing two crewmembers performing a new-to-Apollo extravehicular activity (EVA). The figure at left represents astronaut Alfred M. Worden, command module pilot, connected by an umbilical tether to the CM, at right, where a figure representing astronaut James B. Irwin, lunar module pilot, stands at the open CM hatch. Worden is working with the panoramic camera in the Scientific Instrument Module (SIM). Behind Irwin is the 16mm data acquisition camera. Artwork by North American Rockwell.
[Communication subsystem design of tele-screening system for diabetic retinopathy].
Chen, Jian; Pan, Lin; Zheng, Shaohua; Yu, Lun
2013-12-01
A design scheme of a tele-screening system for diabetic retinopathy (DR) is proposed, with emphasis on the communication subsystem. The scheme uses a serial communication module consisting of an ARM7 microcontroller and relays to connect the remote computer and the fundus camera, and uses C++ with MFC to implement the communication software, which consists of a therapy and diagnostic information module, a video/audio surveillance module, and a fundus camera control module. The scheme is generally applicable to remote medical treatment systems similar to this one.
Navigating surgical fluorescence cameras using near-infrared optical tracking.
van Oosterom, Matthias; den Houting, David; van de Velde, Cornelis; van Leeuwen, Fijs
2018-05-01
Fluorescence guidance facilitates real-time intraoperative visualization of the tissue of interest. However, due to attenuation, the application of fluorescence guidance is restricted to superficial lesions. To overcome this shortcoming, we have previously applied three-dimensional surgical navigation to position the fluorescence camera within reach of the superficial fluorescent signal. Unfortunately, in open surgery, the near-infrared (NIR) optical tracking system (OTS) used for navigation also induced interference during NIR fluorescence imaging. To support future implementation of navigated fluorescence cameras, different aspects of this interference were characterized and solutions were sought. Two commercial fluorescence cameras for open surgery were studied in (surgical) phantom and human tissue setups using two different NIR OTSs and one OTS-simulating light-emitting diode setup. Following the outcome of these measurements, OTS settings were optimized. Measurements indicated the OTS interference was caused by: (1) spectral overlap between the OTS light and camera, (2) OTS light intensity, (3) OTS duty cycle, (4) OTS frequency, (5) fluorescence camera frequency, and (6) fluorescence camera sensitivity. By optimizing points 2 to 4, navigation of fluorescence cameras during open surgery could be facilitated. Optimization of OTS and camera compatibility can be used to support navigated fluorescence guidance concepts. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Binary Colloidal Alloy Test Conducted on Mir
NASA Technical Reports Server (NTRS)
Hoffmann, Monica I.; Ansari, Rafat R.
1999-01-01
Colloids are tiny (submicron) particles suspended in fluid. Paint, ink, and milk are examples of colloids found in everyday life. The Binary Colloidal Alloy Test (BCAT) is part of an extensive series of experiments planned to investigate the fundamental properties of colloids so that scientists can make colloids more useful for technological applications. Some of the colloids studied in BCAT are made of two different sized particles (binary colloidal alloys) that are very tiny, uniform plastic spheres. Under the proper conditions, these colloids can arrange themselves in a pattern to form crystals. These crystals may form the basis of new classes of light switches, displays, and optical devices. Windows made of liquid crystals are already in the marketplace. These windows change their appearance from transparent to opaque when a weak electric current is applied. In the future, if the colloidal crystals can be made to control the passage of light through them, such products could be made much more cheaply. These experiments require the microgravity environment of space because good-quality crystals are difficult to produce on Earth, owing to sedimentation and convection in the fluid. The BCAT experiment hardware included two separate modules for two different experiments. The "Slow Growth" hardware consisted of a 35-mm camera with a 250-exposure photo film cartridge. The camera was aimed toward the sample module, which contained 10 separate colloid samples. A rack of small lights provided backlighting for the photographs. The BCAT hardware was launched on the shuttle and was operated aboard the Russian space station Mir by American astronauts John Blaha and David Wolf (launched September 1996 and returned January 1997; reflown September 1997 and returned January 1998). To begin the experiment, one of these astronauts would mix the samples to disperse the colloidal particles and break up any crystals that might have already formed. Once the samples were mixed and the experiment was powered on, the hardware operated autonomously, taking photos of the colloidal samples over a 90-day period.
1971-08-01
S71-58222 (31 July-2 Aug. 1971) --- During the lunar eclipse that occurred during the Apollo 15 lunar landing mission, astronaut Alfred M. Worden, command module pilot, used a 35mm Nikon camera to obtain a series of 15 photographs while the moon was entering and exiting Earth's umbra. Although it might seem that there should be no light on the moon when it is in Earth's shadow, sunlight is scattered into this region by Earth's atmosphere. This task was an attempt to measure by photographic photometry the amount of scattered light reaching the moon. The four views from upper left to lower right were selected to show the moon as it entered Earth's umbra. The first is a four-second exposure which was taken at the moment when the moon had just entered umbra; the second is a 15-second exposure taken two minutes after entry; the third, a 30-second exposure three minutes after entry; and the fourth is a 60-second exposure four minutes after entry. In all cases the light reaching the moon was so bright on the very high speed film (Eastman Kodak type 2485 emulsion) that the halation obscures the lunar image, which should be about one-third as big as the circle of light. The background star field is clearly evident, and this is very important for these studies. The spacecraft was in full sunlight when these photographs were taken, and it was pointed almost directly away from the sun so that the windows and a close-in portion of the camera's line-of-sight were in shadow. The environment around the vehicle at this time appears to be very "clean" with no light scattering particles noticeable.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
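The velocity recovery the patent describes can be illustrated with a short sketch: given particle centroids segmented from a multiple-exposure frame and the known laser pulse interval, displacement over time gives the velocity. The function name, pulse interval, and pixel scale are illustrative assumptions.

```python
# Minimal sketch, assuming particle centroids have already been segmented
# from the fluorescence frame; all values here are illustrative.
import numpy as np

def velocity_from_exposures(centroids, pulse_interval_s, m_per_pixel):
    """Velocities between successive exposures of one particle trajectory.

    centroids: (N, 2) array of (x, y) pixel positions, one per exposure.
    """
    disp = np.diff(centroids, axis=0) * m_per_pixel   # meters per step
    return disp / pulse_interval_s                     # m/s vectors

track = np.array([[100.0, 200.0], [104.5, 201.0], [109.1, 202.1]])
v = velocity_from_exposures(track, pulse_interval_s=1e-3, m_per_pixel=5e-6)
speed = np.linalg.norm(v, axis=1)                      # scalar speeds
```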
Interferometric phase measurement techniques for coherent beam combining
NASA Astrophysics Data System (ADS)
Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud
2015-03-01
Coherent beam combining of fiber amplifiers provides an attractive means of reaching high laser power. In an interferometric phase measurement, the beams issued from each combined fiber are imaged onto a sensor and interfere with a reference plane wave. Registering the interference patterns on a camera allows the exact phase error of each fiber beam to be measured in a single shot. Therefore, this method is a promising candidate for scaling to a very large number of combined fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first, based on digital holography, transfers the camera image directly to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers. This architecture therefore performs collective phase measurement and correction. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers co-propagate with the phase-locked signal beams, which makes this architecture compatible with multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase-shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.
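As a rough illustration of fringe-based phase retrieval, the sketch below estimates the phase of one fiber's fringe cut from the Fourier component at a known carrier frequency; the 8-pixel cut and the carrier value are illustrative assumptions chosen to echo the second architecture, not the authors' algorithm.

```python
# Minimal sketch: phase of a 1-D fringe pattern at a known carrier frequency.
import numpy as np

def fringe_phase(row, carrier_bin):
    """Phase (radians) of a fringe cut, relative to the reference wave."""
    spectrum = np.fft.rfft(row - row.mean())   # remove DC, then FFT
    return np.angle(spectrum[carrier_bin])

# Synthetic fringes over 8 camera pixels per fiber:
x = np.arange(8)
true_phase = 0.7
row = 1 + np.cos(2 * np.pi * x / 4 + true_phase)   # carrier: 2 fringes / 8 px
est = fringe_phase(row, carrier_bin=2)             # ~0.7 rad
```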
Digital optical correlator x-ray telescope alignment monitoring system
NASA Astrophysics Data System (ADS)
Lis, Tomasz; Gaskin, Jessica; Jasper, John; Gregory, Don A.
2018-01-01
The High-Energy Replicated Optics to Explore the Sun (HEROES) program is a balloon-borne x-ray telescope mission to observe hard x-rays (˜20 to 70 keV) from the sun and multiple astrophysical targets. The payload consists of eight mirror modules with a total of 114 optics that are mounted on a 6-m-long optical bench. Each mirror module is complemented by a high-pressure xenon gas scintillation proportional counter. Attached to the payload is a camera that acquires star fields and then matches the acquired field to star maps to determine the pointing of the optical bench. Slight misalignments between the star camera, the optical bench, and the telescope elements attached to the optical bench may occur during flight due to mechanical shifts, thermal gradients, and gravitational effects. These misalignments can result in diminished imaging and reduced photon collection efficiency. To monitor these misalignments during flight, a supplementary Bench Alignment Monitoring System (BAMS) was added to the payload. BAMS hardware comprises two cameras mounted directly to the optical bench and rings of light-emitting diodes (LEDs) mounted onto the telescope components. The LEDs in these rings are mounted in a predefined, asymmetric pattern, and their positions are tracked using an optical/digital correlator. The BAMS analysis software is a digital adaption of an optical joint transform correlator. The aim is to enhance the observational proficiency of HEROES while providing insight into the magnitude of mechanically and thermally induced misalignments during flight. Results from a preflight test of the system are reported.
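The optical joint transform correlator adapted by BAMS has a simple digital analogue: FFT-based cross-correlation of a camera frame against the LED-pattern template. The following sketch locates a template in a synthetic frame; it is an assumption-laden stand-in, not the BAMS software.

```python
# Minimal FFT cross-correlation sketch for locating a pattern in a frame.
import numpy as np

def correlate_and_locate(frame, template):
    """Return (row, col) of the correlation peak between frame and template."""
    F = np.fft.fft2(frame)
    T = np.fft.fft2(template, s=frame.shape)    # zero-pad template to frame size
    corr = np.fft.ifft2(F * np.conj(T)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

frame = np.zeros((256, 256))
frame[100:105, 150:155] = 1.0                    # stand-in for an LED spot
template = np.ones((5, 5))
peak = correlate_and_locate(frame, template)     # ~ (100, 150)
```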
Wei, Hsiang-Chun; Su, Guo-Dung John
2012-01-01
Conventional camera modules with image sensors manipulate focus or zoom by moving lenses. Although motors, such as voice-coil motors, can move the lens sets precisely, large volume, high power consumption, and long moving times are critical issues for motor-type camera modules. A deformable mirror (DM) provides a good opportunity to address these issues. The DM is a reflective optical component that alters the optical power to focus light onto the two-dimensional image sensor, enabling the camera system to operate rapidly. Ionic polymer metal composite (IPMC) is a promising electro-actuated polymer material that can be used in micromachined devices because of its large deformation at low actuation voltage. We developed a convenient simulation model based on Young's modulus and Poisson's ratio. We divided an ion exchange polymer, also known as Nafion®, into two virtual layers in the simulation model: one expansive and the other contractive, driven by opposite constant surface forces on each surface of the elements. The deformation of different IPMC shapes can therefore be described more easily. A standard experiment of voltage vs. tip displacement was used to verify the proposed model. Finally, a gear-shaped IPMC actuator was designed and tested. The optical power of the IPMC deformable mirror is experimentally demonstrated to be 17 diopters at two volts. The required voltage is about two orders of magnitude lower than for conventional silicon deformable mirrors and about one order of magnitude lower than for liquid lenses. PMID:23112648
NASA Astrophysics Data System (ADS)
Ye, Jian; Liu, Guanghui; Liu, Peng; Zhang, Shiwu; Shao, Pengfei; Smith, Zachary J.; Liu, Chenhai; Xu, Ronald X.
2018-02-01
We propose a portable fluorescence microscopic imaging system (PFMS) for intraoperative display of biliary structure and prevention of iatrogenic injuries during cholecystectomy. The system consists of a light source module, a camera module, and a Raspberry Pi computer with an LCD. Indocyanine green (ICG) is used as a fluorescent contrast agent for experimental validation of the system. Fluorescence intensities of the ICG aqueous solution at different concentration levels are acquired by our PFMS and compared with those of a commercial Xenogen IVIS system. We study the fluorescence detection depth by superposing different thicknesses of chicken breast on an ICG-loaded agar phantom. We verify the technical feasibility for identifying potential iatrogenic injury in cholecystectomy using a rat model in vivo. The proposed PFMS system is portable, inexpensive, and suitable for deployment in resource-limited settings.
Welcome ceremony and gift exchange in the Mir Base Module
1996-03-24
S76-E-5157 (24 March 1996) --- Two Russian cosmonauts and five of six NASA astronauts exchange gifts soon after reuniting in the Base Block Module of Russia's Mir Space Station. From the left are Linda M. Godwin, Kevin P. Chilton, Yury V. Usachev, Shannon W. Lucid, Yury I. Onufrienko, Ronald M. Sega and Richard A. Searfoss. Not pictured is astronaut Michael R. (Rich) Clifford. In a light moment around this time, ground controllers informed Chilton, the STS-76 mission commander, that Lucid, who will spend several months onboard Mir as a cosmonaut guest researcher, should now be considered a Mir-21 crew member, along with Onufrienko and Usachev, Mir-21 flight engineer. The image was recorded with a 35mm Electronic Still Camera (ESC) and downlinked at a later time to ground controllers in Houston, Texas.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations imposed by the computing technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted, and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distribution and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The detailed operational constraints, performance metrics, and computation resources associated with this newly developed light field camera technique are presented in detail.
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses a 800 nm long pass filter which during overcast conditions blocks sufficient background light so the hydrogen flame is brighter than the background light, and the second CCD camera uses a 1100 nm long pass filter, which blocks the solar background in full sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signal from the cameras into a visible image. The operator can select the appropriate filtered camera to use depending on the current light conditions. In addition, a narrow band pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.
Spectroscopic imaging using acousto-optic tunable filters
NASA Astrophysics Data System (ADS)
Bouhifd, Mounir; Whelan, Maurice
2007-07-01
We report on novel hyper-spectral imaging filter-modules based on acousto-optic tuneable filters (AOTF). The AOTF functions as a full-field tuneable bandpass filter which offers fast continuous or random access tuning with high filtering efficiency. Due to the diffractive nature of the device, the unfiltered zero-order and the filtered first-order images are geometrically separated. The modules developed exploit this feature to simultaneously route both the transmitted white-light image and the filtered fluorescence image to two separate cameras. Incorporation of prisms in the optical paths and careful design of the relay optics in the filter module have overcome a number of aberrations inherent to imaging through AOTFs, leading to excellent spatial resolution. A number of practical uses of this technique, both for in vivo auto-fluorescence endoscopy and in vitro fluorescence microscopy were demonstrated. We describe the operational principle and design of recently improved prototype instruments for fluorescence-based diagnostics and demonstrate their performance by presenting challenging hyper-spectral fluorescence imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Hao; Apai, Dániel; Karalidi, Theodora
We present Spitzer/Infrared Array Camera Ch1 and Ch2 monitoring of six brown dwarfs during eight different epochs over the course of 20 months. For four brown dwarfs, we also obtained simultaneous Hubble Space Telescope (HST)/WFC3 G141 grism spectra during two epochs and derived light curves in five narrowband filters. Probing different pressure levels in the atmospheres, the multiwavelength light curves of our six targets all exhibit variations, and the shape of the light curves evolves over the timescale of a rotation period, ranging from 1.4 to 13 hr. We compare the shapes of the light curves and estimate the phase shifts between the light curves observed at different wavelengths by comparing the phase of the primary Fourier components. We use state-of-the-art atmosphere models to determine the flux contribution of different pressure layers to the observed flux in each filter. We find that the light curves that probe higher pressures are similar and in phase, but are offset and often different from the light curves that probe lower pressures. The phase differences between the two groups of light curves suggest that the modulations seen at lower and higher pressures may be introduced by different cloud layers.
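A minimal sketch of the phase-comparison step described above: estimate each light curve's phase from its primary Fourier component, then difference the phases. The synthetic curves, their period, and the 1.1 rad offset are illustrative assumptions.

```python
# Minimal sketch: phase shift between two light curves via the primary
# Fourier component, with synthetic data standing in for observations.
import numpy as np

def primary_phase(flux):
    """Phase (radians) of the strongest non-DC Fourier component."""
    spec = np.fft.rfft(flux - flux.mean())
    k = 1 + np.argmax(np.abs(spec[1:]))      # skip the DC bin
    return np.angle(spec[k])

t = np.linspace(0, 10, 500)                  # hours
curve_deep = 1 + 0.02 * np.sin(2 * np.pi * t / 5.0)           # higher pressure
curve_shallow = 1 + 0.03 * np.sin(2 * np.pi * t / 5.0 + 1.1)  # lower pressure
shift = primary_phase(curve_shallow) - primary_phase(curve_deep)  # ~1.1 rad
```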
Safety evaluation of red-light cameras
DOT National Transportation Integrated Search
2005-04-01
The objective of this final study was to determine the effectiveness of red-light-camera (RLC) systems in reducing crashes. The study used empirical Bayes before-and-after research using data from seven jurisdictions across the United States at 132 t...
NASA Astrophysics Data System (ADS)
Raghavan, Ajay; Saha, Bhaskar
2013-03-01
Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drift, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or can be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus in wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are applied is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
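As one example of the kind of no-reference sub-algorithm such a fleet-health system could build on, the sketch below flags focus drift with the variance-of-Laplacian sharpness metric; the kernel and the threshold are illustrative assumptions, not the paper's algorithm.

```python
# Minimal no-reference focus check: variance of the Laplacian response.
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def sharpness(gray):
    """Variance of the Laplacian; low values suggest blur or focus drift."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):                       # correlate via shifted sums
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def focus_ok(gray, threshold=100.0):
    """Illustrative pass/fail check for one monochrome frame."""
    return sharpness(gray.astype(np.float64)) >= threshold
```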
NASA Astrophysics Data System (ADS)
Yamauchi, Toyohiko; Yamada, Hidenao; Matsui, Hisayuki; Yasuhiko, Osamu; Ueda, Yukio
2018-02-01
We developed a compact Mach-Zehnder interferometer module to be used as a replacement for the objective lens of a conventional inverted microscope (Nikon, TS100-F), converting it into a quantitative phase microscope. The module has a 90-degree-flipped U-shape; its dimensions are 160 mm by 120 mm by 40 mm, and its weight is 380 grams. The Mach-Zehnder interferometer, equipped with separate reference and sample arms, was implemented in this U-shaped housing, and the path-length difference between the two arms was manually adjustable. The sample under test was placed on the stage of the microscope, and the sample light passed through it. Both arms had identical achromatic lenses for image formation, and their lateral positions were also manually adjustable. Therefore, temporally and spatially low-coherence illumination was applicable, because users were able to balance precisely the path lengths of the two arms and overlap the two wavefronts. In the experiment, spectrally filtered LED light for illumination (wavelength = 633 nm, bandwidth = 3 nm) was input to the interferometer module via a 50 micrometer core optical fiber. We successfully captured full-field interference images with a camera placed on the trinocular tube of the microscope and constructed quantitative phase images of cultured cells by means of the quarter-wavelength phase-shifting algorithm. The resultant quantitative phase images were speckle-free and halo-free owing to the spectrally and spatially low-coherence illumination.
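The quarter-wavelength phase-shifting reconstruction named in this abstract reduces, in its common four-step form, to a single arctangent per pixel; a minimal sketch follows, with synthetic interferograms standing in for camera frames.

```python
# Minimal four-step (90-degree) phase-shifting sketch.
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase map from four 90-degree phase-shifted interferograms."""
    return np.arctan2(i4 - i2, i1 - i3)      # radians in (-pi, pi]

# Synthetic check with a known phase ramp:
phi = np.linspace(-1.0, 1.0, 256)[None, :] * np.ones((256, 1))
frames = [1 + np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)          # matches phi up to wrapping
```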
Versatile microsecond movie camera
NASA Astrophysics Data System (ADS)
Dreyfus, R. W.
1980-03-01
A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10⁶. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
The Endockscope Using Next Generation Smartphones: "A Global Opportunity".
Tse, Christina; Patel, Roshan M; Yoon, Renai; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V
2018-06-02
The Endockscope combines a smartphone, a battery-powered flashlight, and a fiberoptic cystoscope, allowing for mobile videocystoscopy. We compared conventional videocystoscopy to the Endockscope paired with next generation smartphones in an ex-vivo porcine bladder model to evaluate its image quality. The Endockscope consists of a three-dimensional (3D) printed attachment that connects a smartphone to a flexible fiberoptic cystoscope, plus a 1000 lumen light-emitting diode (LED) cordless light source. Video recordings of porcine cystoscopy with a fiberoptic flexible cystoscope (Storz) were captured for each mobile device (iPhone 6, iPhone 6S, iPhone 7, Samsung S8, and Google Pixel) and for the high-definition H3-Z versatile camera (HD) setup with both the LED light source and the xenon light (XL) source. Eleven faculty urologists, blinded to the modality used, evaluated each video for image quality/resolution, brightness, color quality, sharpness, overall quality, and acceptability for diagnostic use. When comparing the Endockscope coupled to a Galaxy S8, iPhone 7, or iPhone 6S with the LED portable light source against the HD camera with XL, there were no statistically significant differences in any metric. 82% and 55% of evaluators considered the iPhone 7 + LED light source and iPhone 6S + LED light, respectively, appropriate for diagnostic purposes, as compared to 100% who considered the HD camera with XL appropriate. The iPhone 6 and Google Pixel coupled with the LED source were both inferior to the HD camera with XL in all metrics. The Endockscope system with an LED light source, when coupled with either an iPhone 7 or Samsung S8 (total cost: $750), is comparable to conventional videocystoscopy with a standard camera and XL light source (total cost: $45,000).
Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality (AR) technology is becoming a focus of study, and the AR potential of light field imaging makes research on light field cameras attractive. Micro-array structures have been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years. LFIAS have been analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)." The LFIAS are also analyzed based on information optics; this paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record both the amplitude and phase information of the field light.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
Spillover Effect and Economic Effect of Red Light Cameras
DOT National Transportation Integrated Search
2017-04-01
"Spillover effect" of red light cameras (RLCs) refers to the expected safety improvement at intersections other than those actually treated. Such effects may be due to jurisdiction-wide publicity of RLCs and the general publics lack of knowledge o...
Advances in Measurement of Skin Friction in Airflow
NASA Technical Reports Server (NTRS)
Brown, James L.; Naughton, Jonathan W.
2006-01-01
The surface interferometric skin-friction (SISF) measurement system is an instrument for determining the distribution of surface shear stress (skin friction) on a wind-tunnel model. The SISF system utilizes the established oil-film interference method, along with advanced image-data-processing techniques and mathematical models that express the relationship between interferograms and skin friction, to determine the distribution of skin friction over an observed region of the surface of a model during a single wind-tunnel test. In the oil-film interference method, a wind-tunnel model is coated with a thin film of oil of known viscosity and is illuminated with quasi-monochromatic, collimated light, typically from a mercury lamp. The light reflected from the outer surface of the oil film interferes with the light reflected from the oil-covered surface of the model. In the present version of the oil-film interference method, a camera captures an image of the illuminated model, and the image in the camera is modulated by the interference pattern. The interference pattern depends on the oil-thickness distribution on the observed surface, and this distribution can be extracted through analysis of the image acquired by the camera. The oil-film technique is augmented by a tracer technique for observing the streamline pattern. To make the streamlines visible, small dots of fluorescent chalk/oil mixture are placed on the model just before a test. During the test, the chalk particles are embedded in the oil flow and produce chalk streaks that mark the streamlines. The instantaneous rate of thinning of the oil film at a given position on the surface of the model can be expressed as a function of the instantaneous thickness, the skin-friction distribution on the surface, and the streamline pattern on the surface; the functional relationship is expressed by a mathematical model that is nonlinear in the oil-film thickness and is known simply as the thin-oil-film equation (see the one-dimensional form below). From the image data acquired as described, the time-dependent oil-thickness distribution and streamline pattern are extracted, and by inversion of the thin-oil-film equation it is then possible to determine the skin-friction distribution. In addition to a quasi-monochromatic light source, the SISF system includes a beam splitter and two video cameras equipped with filters for observing the same area on a model in different wavelength ranges, plus a frame grabber and a computer for digitizing the video images and processing the image data. One video camera acquires the interference pattern in a narrow wavelength range of the quasi-monochromatic source. The other video camera acquires the streamline image of fluorescence from the chalk in a nearby but wider wavelength range. The interference-pattern and fluorescence images are digitized, and the resulting data are processed by an algorithm that inverts the thin-oil-film equation to find the skin-friction distribution.
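For reference, a commonly quoted one-dimensional form of the thin-oil-film equation (neglecting pressure-gradient and gravity terms), with h the oil thickness, tau_w the skin friction, and mu the oil viscosity, is:

```latex
% One-dimensional thin-oil-film equation, pressure-gradient and gravity
% terms neglected; h is oil thickness, \tau_w skin friction, \mu viscosity.
\frac{\partial h}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{\tau_w\, h^{2}}{2\mu}\right) = 0
```

Inverting this relation from a measured time history of h(x, t) yields the skin friction tau_w, which is the essence of the algorithm described above.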
Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karger, A.E.; Weiss, R.; Gesteland, R.F.
A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) on one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 × 10⁻¹⁸ mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images. 39 refs.
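A conditional median filter of the kind the abstract describes replaces a pixel with its neighborhood median only when the pixel deviates strongly, so genuine signal is left untouched; the sketch below is a generic 3x3 version with an illustrative threshold, not the authors' implementation.

```python
# Minimal conditional median despiking sketch for CCD frames.
import numpy as np

def conditional_median(img, threshold=50.0):
    """Replace a pixel with its 3x3 median only if it deviates strongly."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    # Stack the nine 3x3-neighborhood shifts and take the per-pixel median.
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    med = np.median(stack, axis=0)
    out = img.astype(np.float64).copy()
    spikes = np.abs(out - med) > threshold   # condition: touch only outliers
    out[spikes] = med[spikes]
    return out
```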
NASA Astrophysics Data System (ADS)
Ojaghi, Ashkan; Parkhimchyk, Artur; Tabatabaei, Nima
2016-09-01
Early detection of the most prevalent oral disease worldwide, i.e., dental caries, still remains as one of the major challenges in dentistry. The current dental standard of care relies on caries detection methods, such as visual inspection and x-ray radiography, which lack the sufficient specificity and sensitivity to detect caries at early stages of formation when they can be healed. We report on the feasibility of early caries detection in a clinically and commercially viable thermophotonic imaging system. The system incorporates intensity-modulated laser light along with a low-cost long-wavelength infrared (LWIR; 8 to 14 μm) camera, providing diagnostic contrast based on the enhanced light absorption of early caries. The LWIR camera is highly suitable for integration into clinical platforms because of its low weight and cost. In addition, through theoretical modeling, we show that LWIR detection enhances the diagnostic contrast due to the minimal LWIR transmittance of enamel and suppression of the masking effect of the direct thermal Planck emission. Diagnostic performance of the system and its detection threshold are experimentally evaluated by monitoring the inception and progression of artificially induced occlusal and smooth surface caries. The results are suggestive of the suitability of the developed LWIR system for detecting early dental caries.
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
A Visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile at various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see the first visible light when the beam circulated the ring on its first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single bunch purity.
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun
2016-09-01
Although iodine-131 (I-131) is used for radionuclide therapy, high resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta-emitting radionuclides, and I-131 (606 keV maximum beta energy) is a candidate for obtaining high resolution images. We developed a high energy gamma camera system for the I-131 radionuclide and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system, allowing the simultaneously measured images of these two modalities to be compared. The high energy gamma imaging detector used 0.85-mm×0.85-mm×10-mm thick GAGG scintillator pixels arranged in a 44×44 matrix with a 0.1-mm thick reflector, optically coupled to a Hamamatsu 2 in. square position sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high sensitivity cooled CCD camera and was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were, respectively, 3 mm FWHM and 10 cps/MBq for the high sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of the two modalities. Our gamma-photon/Cerenkov-light hybrid imaging system will be useful for evaluating the advantages and disadvantages of these two modalities.
MS Lucid and Blaha with MGBX aboard the Mir space station Priroda module
1997-03-26
STS079-S-092 (16-26 Sept. 1996) --- Astronauts Shannon W. Lucid and John E. Blaha work at a microgravity glove box on the Priroda Module aboard Russia's Mir Space Station complex. Blaha, who flew into Earth-orbit with the STS-79 crew, and Lucid are the first participants in a series of ongoing exchanges of NASA astronauts serving time as cosmonaut guest researchers onboard Mir. Lucid went on to spend a total of 188 days in space before returning to Earth with the STS-79 crew. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the Space Shuttle Atlantis and the various Mir modules, with the cooperation of the Russian Space Agency (RSA). A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high-resolution visible/near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.): it is now possible to build a miniature uncooled thermal camera module in a tiny 1 cm³ cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen constant progress in readout noise, dark current, resolution, and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night-vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failures due to local or global saturation and blooming. In this context, the image-intensifier-based solution remains attractive for the following reasons: (1) high gain and ultra-low dark current; (2) wide dynamic range; and (3) ultra-low power consumption. Given the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range, and power consumption. In this paper, we present a SWAP intensified wide-dynamic-range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new-generation logarithmic sensor can be used directly, without any image processing, and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube with <500 mW power consumption.
APOLLO 16 ASTRONAUTS JOHN YOUNG AND CHARLES DUKE EXAMINE FAR ULTRAVIOLET CAMERA
NASA Technical Reports Server (NTRS)
1971-01-01
Apollo 16 Lunar Module Pilot Charles M. Duke, Jr., left, and Mission Commander John W. Young examine the Far Ultraviolet Camera they will take to the Moon in March. With it they will measure the universe's ultraviolet spectrum. They will be launched to the Moon no earlier than March 17, 1972, with Command Module Pilot Thomas K. Mattingly II.
SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.
Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S
2012-06-01
To analyze the Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by the CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam, irradiated in the double-scattering mode of a PROTEUS 235 proton therapy machine at NCC, had an 8 cm width and a 13 g/cm² range. The gain, dose rate, and current of this beam were 50, 2 Gy/min, and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate as a function of integration time (0.1-1.0 sec). The images from the CCD camera at the shortest integration time (0.1 sec) were acquired automatically and randomly, respectively, and the Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit, and the Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The experimental results were compared with those of the simulation. We analyzed the Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with the Geant4 simulation. We plan to study SOBP beam analysis using more effective image acquisition techniques. © 2012 American Association of Physicists in Medicine.
Stray light lessons learned from the Mars reconnaissance orbiter's optical navigation camera
NASA Astrophysics Data System (ADS)
Lowman, Andrew E.; Stauder, John L.
2004-10-01
The Optical Navigation Camera (ONC) is a technical demonstration slated to fly on NASA's Mars Reconnaissance Orbiter in 2005. Conventional navigation methods have reduced accuracy in the days immediately preceding Mars orbit insertion. The resulting uncertainty in spacecraft location limits rover landing sites to relatively safe areas, away from interesting features that may harbor clues to past life on the planet. The ONC will provide accurate navigation on approach for future missions by measuring the locations of the satellites of Mars relative to background stars. Because Mars will be a bright extended object just outside the camera's field of view, stray light control at small angles is essential. The ONC optomechanical design was analyzed by stray light experts and appropriate baffles were implemented. However, stray light testing revealed significantly higher levels of light than expected at the most critical angles. The primary error source proved to be the interface between ground glass surfaces (and the paint that had been applied to them) and the polished surfaces of the lenses. This paper will describe troubleshooting and correction of the problem, as well as other lessons learned that affected stray light performance.
CANDU in-reactor quantitative visual-based inspection techniques
NASA Astrophysics Data System (ADS)
Rochefort, P. A.
2009-02-01
This paper describes two separate visual-based inspection procedures used at CANDU nuclear power generating stations. The techniques are quantitative in nature and are delivered and operated in highly radioactive environments where access is restrictive and, in one case, submerged. Visual-based inspections at stations are typically qualitative in nature: for example, a video system will be used to search for a missing component, inspect for a broken fixture, or locate areas of excessive corrosion in a pipe. In contrast, the methods described here are used to measure characteristic component dimensions that in one case ensure ongoing safe operation of the reactor and in the other support reactor refurbishment. CANDU reactors are Pressurized Heavy Water Reactors (PHWR). The reactor vessel is a horizontal cylindrical low-pressure calandria tank approximately 6 m in diameter and length, containing heavy water as a neutron moderator. Inside the calandria, 380 horizontal fuel channels (FC) are supported at each end by integral end-shields. Each FC holds 12 fuel bundles. The primary heat transport heavy water flows through the FC pressure tube, removing the heat from the fuel bundles and delivering it to the steam generator. The general design of the reactor governs both the type of measurements that are required and the methods available to perform them. The first inspection procedure is a method to remotely measure the gap between an FC and other in-core horizontal components. The technique involves vertically delivering a module with a high-radiation-resistant camera and lighting into the core of a shutdown but fuelled reactor. The measurement is done using a line-of-sight technique between the components, with compensation for image perspective and viewing elevation. The second inspection procedure measures flaws within the reactor's end-shield FC calandria tube rolled-joint area. The FC calandria tube (the outer shell of the FC) is sealed by rolling its ends into the rolled-joint area. During reactor refurbishment, the original FC calandria tubes are removed, potentially scratching the rolled-joint area and thereby compromising the seal with the new FC calandria tube. The procedure involves delivering an inspection module carrying a radiation-resistant camera, standard lighting, and a structured-lighting projector. The surface is inspected by rotating the module within the rolled-joint area. If a flaw is detected, its depth and width are gauged from the variation of the structured-lighting profile in a captured image, as illustrated in the sketch below. As well, the diameter profile of the area is measured from the analysis of a series of captured circumferential images of the structured-lighting profiles on the surface.
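The flaw-gauging step lends itself to a one-line triangulation. The sketch below is hedged: the projection angle, image scale, and shift values are hypothetical placeholders, since the tooling parameters of the actual CANDU procedure are not given in the text.

```python
import numpy as np

def flaw_depth_mm(stripe_shift_px, px_per_mm, theta_deg):
    """Estimate flaw depth from the lateral shift of a structured-light stripe.

    stripe_shift_px : observed stripe displacement in the image, pixels
    px_per_mm       : image scale on the inspected surface (from calibration)
    theta_deg       : angle between the projector axis and the camera axis
    """
    shift_mm = stripe_shift_px / px_per_mm           # shift on the surface
    return shift_mm / np.tan(np.radians(theta_deg))  # depth via triangulation

# Hypothetical numbers: a 12-pixel shift at 20 px/mm with a 30 degree baseline.
print(f"{flaw_depth_mm(12, 20.0, 30.0):.2f} mm")     # about 1.04 mm deep
```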
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Griffin, John Clark
2015-01-01
The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 to characterize camera resolution for high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric of camera resolution and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.
DOT National Transportation Integrated Search
2009-12-01
The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nevada. This objective was a...
Apparatus and method for laser beam diagnosis
Salmon, Jr., Joseph T.
1991-01-01
An apparatus and method are disclosed for accurate, real-time monitoring of the wavefront curvature of a coherent laser beam. Knowing the curvature, it can be quickly determined whether the laser beam is collimated, focusing (converging), or de-focusing (diverging). The apparatus includes a lateral interferometer for forming an interference pattern of the laser beam to be diagnosed. The interference pattern is imaged onto a spatial light modulator (SLM), whose output is a coherent laser beam with an image of the interference pattern impressed on it. The SLM output is focused to obtain the far-field diffraction pattern. A video camera, such as a CCD, monitors the far-field diffraction pattern and provides an electrical output indicative of the shape of the far-field pattern. Specifically, the far-field pattern comprises a central lobe and side lobes, whose relative positions are indicative of the radius of curvature of the beam. The video camera's electrical output may be provided to a computer which analyzes the data to determine the wavefront curvature of the laser beam.
Miniature Raman spectroscopy utilizing stabilized diode lasers and 2D CMOS detector arrays
NASA Astrophysics Data System (ADS)
Auz, Bryan; Bonvallet, Joseph; Rodriguez, John; Olmstead, Ty
2017-02-01
A miniature Raman spectrometer was designed in a rapid development cycle (<4 months) to investigate the performance capabilities achievable with two dimensional (2D) CMOS detectors found in cell phone camera modules and commercial off the shelf (COTS) optics. This paper examines the design considerations and tradeoffs made during the development cycle. The final system measures 40 mm in length, 40 mm in width, and 15 mm in height, and couples directly with the cell phone camera optics. Two variants were made: one with an excitation wavelength of 638 nm and the other with a 785 nm excitation wavelength. Raman spectra of the following samples were gathered at both excitations: Toluene, Cyclohexane, Bis(MSB), Aspirin, Urea, and Ammonium Nitrate. The system obtained a resolution of 40 cm⁻¹. The spectra produced at 785 nm excitation required integration times up to 10 times longer than the 1.5 seconds needed at 638 nm; however, they contained less stray light and fluorescence, which led to an overall cleaner signal.
Hemphill, Ashton S; Shen, Yuecheng; Liu, Yan; Wang, Lihong V
2017-11-27
In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ∼1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.
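The single-shot step rests on standard off-axis demodulation: one Fourier transform, a crop around the +1 diffraction order, and an inverse transform recover the full complex field from a single frame. A minimal numpy sketch follows; the carrier position and window size are placeholders, not values from the paper, and a GPU implementation would replace numpy's FFTs to reach the reported latency.

```python
import numpy as np

def offaxis_phase(hologram, carrier_rc, half_win):
    """Recover the wrapped phase map from one off-axis hologram (2D array)."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    r, c = carrier_rc                        # +1 order location (assumed known)
    crop = F[r - half_win:r + half_win, c - half_win:c + half_win]
    field = np.fft.ifft2(np.fft.ifftshift(crop))   # back to spatial domain
    return np.angle(field)                   # wrapped phase, one shot

# e.g. phase = offaxis_phase(frame, carrier_rc=(300, 340), half_win=48)
```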
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare the light sensitivity, linearity, and step response of electronic cameras. The pixel value (PV) of digitized images was measured as a function of light intensity (I). The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and readout.
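The evaluation reduces to a straight-line fit. A small numpy sketch with invented data (the paper's measurements are not reproduced here): the slope of the fit estimates sensitivity and the correlation coefficient estimates linearity.

```python
import numpy as np

I = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])            # relative intensity
PV = np.array([8.0, 57.0, 104.0, 151.0, 202.0, 249.0])  # mean pixel values

slope, offset = np.polyfit(I, PV, 1)     # sensitivity = slope of PV(I)
r = np.corrcoef(I, PV)[0, 1]             # linearity = correlation coefficient
print(f"sensitivity = {slope:.1f} PV/unit intensity, linearity r = {r:.4f}")
```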
Multi-channel automotive night vision system
NASA Astrophysics Data System (ADS)
Lu, Gang; Wang, Li-jun; Zhang, Yi
2013-09-01
A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The light source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and includes automatic light intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, LD driving, and LD temperature control of the near-infrared laser light source, as well as the four-channel image processing and display, are discussed. The system can be used for driver assistance, blind spot information (BLIS), parking assistance, and vehicle alarm systems, day and night.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. We find that the addition of a second camera improves the accuracy in all three directions and nearly eliminates the differences between them. This improvement is illustrated with synthetic and real experiments on a vortex ring, conducted with one and with two plenoptic cameras.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
Robust Behavior Recognition in Intelligent Surveillance Environments.
Batchuluun, Ganbayar; Kim, Yeong Gon; Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2016-06-30
Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by visible light cameras during the night. Therefore, near infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered as nighttime alternatives. Because the system must operate both day and night, and because NIR cameras require an additional NIR illuminator that must cover a wide area over a great distance at night, our research uses a dual system of visible light and thermal cameras, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, confirming that it outperforms previous methods.
Feasibility and accuracy assessment of light field (plenoptic) PIV flow-measurement technique
NASA Astrophysics Data System (ADS)
Shekhar, Chandra; Ogawa, Syo; Kawaguchi, Tatsuya
A light field camera enables measurement of all three velocity components of a flow field inside a three-dimensional volume when used in a PIV measurement. Because only one camera is used, the measurement procedure is greatly simplified, and measurement of flows with limited visual access becomes possible. Due to these advantages, light field cameras and their use in PIV measurements are actively studied. The overall procedure for obtaining an instantaneous flow field consists of imaging a seeded flow at two closely separated time instants, reconstructing the two volumetric particle distributions using algorithms such as MART, and then obtaining the flow velocity through cross-correlation. In this study, we examined the effects of various light field camera configuration parameters on the in-plane and depth resolutions, obtained near-optimal parameters for a given case, and used them to simulate a PIV measurement scenario in order to assess the reconstruction accuracy.
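MART, the reconstruction algorithm named above, is compact enough to sketch. The version below is a hedged illustration of one sweep of the classical multiplicative update; in a real plenoptic PIV code the weight matrix W would come from the camera model, and sparse storage would be essential.

```python
import numpy as np

def mart_iteration(f, W, g, mu=1.0, eps=1e-12):
    """One MART sweep: each pixel equation multiplicatively corrects its ray.

    f : current voxel intensities, shape (n_voxels,)
    W : ray-weight matrix, shape (n_pixels, n_voxels)
    g : recorded pixel values, shape (n_pixels,)
    """
    for i in range(W.shape[0]):
        proj = W[i] @ f + eps                     # forward-project ray i
        f = f * (g[i] / proj) ** (mu * W[i])      # update voxels on the ray
    return f
```

Two reconstructed volumes produced this way, one per exposure, are then cross-correlated in interrogation windows to yield the velocity field.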
SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.
Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi
2018-03-01
Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic results on the entire MERL BRDF dataset, as well as a number of real examples, to validate the theory; we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
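To make the rolling-shutter issue concrete, here is a hedged sketch of a simple row-time correction (not the authors' algorithm): each sensor row is exposed at a slightly different time, so detections can be referred back to a common instant once the apparent image velocity has been estimated, e.g. from two consecutive frames.

```python
import numpy as np

def deskew_detections(points_px, image_velocity_px_s, line_time_s):
    """Refer all (row, col) LED detections to the exposure time of row 0.

    points_px           : (N, 2) array of detections as (row, col)
    image_velocity_px_s : apparent (row, col) velocity in pixels per second
    line_time_s         : readout time per sensor row (assumed known)
    """
    pts = np.asarray(points_px, dtype=float)
    dt = pts[:, 0] * line_time_s                 # per-detection time offset
    return pts - dt[:, None] * np.asarray(image_velocity_px_s)

# Hypothetical case: 30 us line time, image motion of 900 px/s along columns.
print(deskew_detections([(120, 400.0), (800, 415.0)], (0.0, 900.0), 30e-6))
```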
Ye, Jian; Liu, Guanghui; Liu, Peng; Zhang, Shiwu; Shao, Pengfei; Smith, Zachary J; Liu, Chenhai; Xu, Ronald X
2018-02-01
We propose a portable fluorescence microscopic imaging system (PFMS) for intraoperative display of biliary structure and prevention of iatrogenic injuries during cholecystectomy. The system consists of a light source module, a camera module, and a Raspberry Pi computer with an LCD. Indocyanine green (ICG) is used as a fluorescent contrast agent for experimental validation of the system. Fluorescence intensities of the ICG aqueous solution at different concentration levels are acquired by our PFMS and compared with those of a commercial Xenogen IVIS system. We study the fluorescence detection depth by superposing different thicknesses of chicken breast on an ICG-loaded agar phantom. We verify the technical feasibility for identifying potential iatrogenic injury in cholecystectomy using a rat model in vivo. The proposed PFMS system is portable, inexpensive, and suitable for deployment in resource-limited settings. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Temporal compression in episodic memory for real-life events.
Jeunehomme, Olivier; Folville, Adrien; Stawarczyk, David; Van der Linden, Martial; D'Argembeau, Arnaud
2018-07-01
Remembering an event typically takes less time than experiencing it, suggesting that episodic memory represents past experience in a temporally compressed way. Little is known, however, about how the continuous flow of real-life events is summarised in memory. Here we investigated the nature and determinants of temporal compression by directly comparing memory contents with the objective timing of events as measured by a wearable camera. We found that episodic memories consist of a succession of moments of prior experience that represent events with varying compression rates, such that the density of retrieved information is modulated by goal processing and perceptual changes. Furthermore, the results showed that temporal compression rates remain relatively stable over one week and increase after a one-month delay, particularly for goal-related events. These data shed new light on temporal compression in episodic memory and suggest that compression rates are adaptively modulated to maintain current goal-relevant information.
Precise Selenodetic Coordinate System Based on Artificial Light References
NASA Astrophysics Data System (ADS)
Bagrov, Alexander; Pichkhadze, Konstantin M.; Sysoev, Valentin
Historically, the coordinate system for the Moon was established on the basis of telescopic observations from the Earth. As the angular resolution of ground-based telescopic observations is limited by the Earth's atmosphere, and is ordinarily worse than 1 arcsecond, the mean accuracy of selenodetic coordinates is some angular minutes, corresponding to position errors of about 900 meters for lunar objects near the center of the visible lunar disk, and at least twice that near the lunar poles. Since there is no global positioning system nor any astronomical observation instrument on the Moon, we propose to use an autonomous light beacon on the Luna-Glob landing module, fixing its position on the lunar surface and using it as a reference point for a spherical coordinate system for the Moon. The light beacon is designed to be reliably visible to the orbiting probe's TV camera. Since any space probe has its own star-orientation system, it is straightforward to calculate a set of directions to the beacon and to reference stars in a probe-centered coordinate system during flight over the beacon. A large number of measured angular positions, together with the time of each observation, is enough to calculate both the orbital parameters of the probe and the selenodetic coordinates of the beacon by geodetic methods. This will allow the angular coordinates of any feature of the lunar surface to be fixed in one global coordinate system referred to the beacon. The satellite's orbit plane always contains the center of mass of the main body, so if the beacon is placed close to a lunar pole, the pole position of the Moon can be determined tens of times more accurately than it is known now. If the angular accuracy of star-based self-orientation of the Luna-Glob orbital module is 6 arcseconds, then from a circular orbit at a height of 200 km the on-board TV camera will allow the beacon position to be calculated to 6 arcseconds, corresponding to the spatial resolution of the camera; that is, the coordinates of the beacon will be determined with an accuracy no worse than 6 meters on the lunar surface. Much higher accuracy can be achieved if the orbital probe uses a more precise angular measurer, such as an optical interferometer. The limiting accuracy of the proposed method is far beyond any reasonable requirement, since it may reach the sub-millimeter level. Theoretical analysis shows that to achieve 1-meter coordinate accuracy over the whole lunar globe it is enough to disperse some 60 light beacons over its surface. The light beacon designed by the Lavochkin Association is autonomous and will work for at least 10 years, so any other lunar mission could use the established selenodetic coordinates as its coordinate frame during this period. The same approach may be used to establish a Martian coordinate system.
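The 6-meter claim is a small-angle calculation that is easy to verify: at 200 km altitude, 6 arcseconds subtends roughly 6 m on the surface.

```python
import math

height_m = 200e3                          # circular orbit altitude
angle_rad = (6 / 3600) * math.pi / 180    # 6 arcseconds in radians
print(f"{height_m * angle_rad:.1f} m")    # ~5.8 m ground-position error
```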
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
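As a concrete, hedged illustration of the two modules (not the optimized pipeline of the paper), the sketch below applies a von Kries style diagonal illuminant correction followed by a 3x3 color matrix; the gains and matrix entries are placeholders.

```python
import numpy as np

def color_correct(raw_rgb, illum_gains, color_matrix):
    """raw_rgb: (H, W, 3) linear RAW image; returns a corrected linear image."""
    balanced = raw_rgb * illum_gains                   # illuminant correction
    flat = balanced.reshape(-1, 3) @ color_matrix.T    # sensor-to-target space
    return np.clip(flat.reshape(raw_rgb.shape), 0.0, 1.0)

gains = np.array([2.0, 1.0, 1.6])            # assumed illuminant estimate
M = np.array([[ 1.6, -0.4, -0.2],            # assumed color matrix
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.6,  1.6]])
out = color_correct(np.random.rand(4, 4, 3) * 0.5, gains, M)
```

The paper's point is that these two stages should not be tuned in isolation: an adaptive matrix can partially absorb errors made by the illuminant estimator.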
NASA Technical Reports Server (NTRS)
Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.
1992-01-01
A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.
Electronic cameras for low-light microscopy.
Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith
2013-01-01
This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high intensity illumination. When video rate imaging is required for very dim specimens, the electron multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable integration time video cameras are very attractive options if one needs to acquire images at video rate, as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
Characterization of the LBNL PEM Camera
NASA Astrophysics Data System (ADS)
Wang, G.-C.; Huber, J. S.; Moses, W. W.; Qi, J.; Choong, W.-S.
2006-06-01
We present the tomographic images and performance measurements of the LBNL positron emission mammography (PEM) camera, a specially designed positron emission tomography (PET) camera that utilizes PET detector modules with depth-of-interaction measurement capability to achieve both high sensitivity and high resolution for breast cancer detection. The camera currently consists of 24 detector modules positioned as four detector banks to cover a rectangular patient port that is 8.2 × 6 cm² with a 5 cm axial extent. Each LBNL PEM detector module consists of 64 3 × 3 × 30 mm³ LSO crystals coupled to a single photomultiplier tube (PMT) and an 8 × 8 silicon photodiode array (PD). The PMT provides accurate timing, the PD identifies the crystal of interaction, the sum of the PD and PMT signals (PD+PMT) provides the total energy, and the PD/(PD+PMT) ratio determines the depth of interaction. The performance of the camera has been evaluated by imaging various phantoms. The full-width-at-half-maximum (FWHM) spatial resolution changes slightly from 1.9 mm to 2.1 mm when measured at the center and corner of the field of view, respectively, using a 6 ns coincidence timing window and a 300-750 keV energy window. With the same setup, the peak sensitivity of the camera is 1.83 kcps/μCi.
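The per-event arithmetic in the readout description is simple enough to state as code. The linear depth mapping below is an assumption for illustration; the real camera uses a measured calibration.

```python
def event_signals(pd, pmt, crystal_len_mm=30.0):
    """Total energy and depth of interaction from one event's PD and PMT signals."""
    energy = pd + pmt                       # total-energy signal
    ratio = pd / (pd + pmt)                 # depth-of-interaction ratio
    depth = ratio * crystal_len_mm          # assumed-linear depth estimate
    return energy, depth

e, d = event_signals(140.0, 260.0)
print(f"energy = {e:.0f} (arb.), depth = {d:.1f} mm along the crystal")
```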
NASA Astrophysics Data System (ADS)
Ma, Chen; Cheng, Dewen; Xu, Chen; Wang, Yongtian
2014-11-01
A fundus camera is a complex optical system for retinal photography, involving illumination and imaging of the retina. Stray light is one of the most significant problems of fundus cameras because the retina is so minimally reflective that back reflections from the cornea and any other optical surface are likely to be significantly greater than the light reflected from the retina. To provide maximum illumination to the retina while eliminating back reflections, a novel design of the illumination system used in a portable fundus camera is proposed. Internal illumination, in which the eyepiece is shared by the illumination and imaging systems while the condenser and objective are separated by a beam splitter, is adopted for its high efficiency. To eliminate the strong stray light caused by the corneal center and make full use of the light energy, the annular stop of conventional illumination systems is replaced by a fiber-coupled, ring-shaped light source that forms an annular beam. Parameters including the size and divergence angle of the light source are specially designed. To further weaken stray light, a polarized light source is used, and an analyzer plate is placed after the beam splitter in the imaging system. Simulation results show that the illumination uniformity at the fundus exceeds 90%, and stray light is within 1%. Finally, a proof-of-concept prototype is developed and retinal photos of an ophthalmophantom are captured. The experimental results show that ghost images and stray light are greatly reduced, to a level that will not interfere with professional diagnosis.
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports the measurements that have been performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche Solari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of quantum efficiency in the visible spectrum, image blooming, and reset inefficiency under saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultra fast, 100% fill factor visible light camera in solar physics experiments.
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high precision requirements of optical registration for a spaceborne low light remote sensing camera, dual-channel optical registration of a CCD and an EMCCD is achieved with a high magnification optical registration system. This paper proposes a system-integration optical registration scheme, and an associated registration accuracy analysis, for a spaceborne low light remote sensing camera with short focal depth and wide field of view, including an analysis of the parallel misalignment of the CCD. Actual registration results show that the imaging is clear and that the MTF and registration accuracy meet requirements, providing an important guarantee of high quality image data in orbit.
ERIC Educational Resources Information Center
Brochu, Michel
1983-01-01
In August, 1981, National Aeronautics and Space Administration launched Dynamics Explorer 1 into polar orbit equipped with three cameras built to view the Northern Lights. The cameras can photograph aurora borealis' faint light without being blinded by the earth's bright dayside. Photographs taken by the satellite are provided. (JN)
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
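A hedged sketch of the calibration idea with synthetic data: sweep a known brightness ramp, record the integrated signal per frame, and invert the resulting response curve by interpolation so that later measurements can be mapped back to brightness.

```python
import numpy as np

known_brightness = np.linspace(0.01, 1.0, 50)    # ramp of the artificial star
# Stand-in for a measured nonlinear camera response (saturating curve):
measured_signal = 255 * (1 - np.exp(-4 * known_brightness))

def signal_to_brightness(sig):
    """Invert the measured response curve by monotone interpolation."""
    order = np.argsort(measured_signal)
    return np.interp(sig, measured_signal[order], known_brightness[order])

print(f"{signal_to_brightness(200.0):.3f}")   # brightness that yields signal 200
```

Extending the effective range beyond saturation, as the text describes for point sources, would additionally exploit how a saturated point image spreads, which this sketch does not model.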
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
The system described includes wireless data and emergency-stop radios, a GPS receiver and antenna, an inertial navigation unit, dual stereo color cameras, infrared and physical bumper sensors, and wheel-motor actuators with camera controls; the sensory processing module uses the two pairs of stereo color cameras and the physical and infrared bumper sensors.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
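The registration-then-fusion idea can be illustrated with plain shift-and-add, a deliberately simplified stand-in for the paper's joint super-resolution and depth-refinement loop; the per-view offsets are exactly what the depth estimate provides.

```python
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add(views, offsets_px, factor=3):
    """Fuse registered sub-aperture views onto a grid `factor` times finer.

    views      : list of 2D arrays (sub-aperture images)
    offsets_px : per-view (dy, dx) disparity-induced shifts, from depth
    """
    acc = None
    for img, (dy, dx) in zip(views, offsets_px):
        registered = shift(img, (-dy, -dx), order=3)  # undo the disparity
        up = zoom(registered, factor, order=3)        # place on fine grid
        acc = up if acc is None else acc + up
    return acc / len(views)
```

In the paper's alternating scheme, the fused image would then feed back into a better depth map, which in turn improves the offsets used here.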
Pipe inspection and repair system
NASA Technical Reports Server (NTRS)
Schempf, Hagen (Inventor); Mutschler, Edward (Inventor); Chemel, Brian (Inventor); Boehmke, Scott (Inventor); Crowley, William (Inventor)
2004-01-01
A multi-module pipe inspection and repair device. The device includes a base module, a camera module, a sensor module, an MFL module, a brush module, a patch set/test module, and a marker module. Each of the modules may be interconnected to construct one of an inspection device, a preparation device, a marking device, and a repair device.
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient both to (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
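The core subtraction is a two-line operation; a minimal numpy sketch, assuming 8- or 16-bit frames and using a wider signed dtype to avoid underflow:

```python
import numpy as np

def cancel_ambient(frame_led_on, frame_led_off):
    """Subtract the background-only frame from the signal-plus-background frame."""
    diff = frame_led_on.astype(np.int32) - frame_led_off.astype(np.int32)
    return np.clip(diff, 0, None).astype(np.uint16)   # signal-only image

# e.g., for an alternating sequence: roi = cancel_ambient(frames[2*k + 1], frames[2*k])
```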
Development of a camera casing suited for cryogenic and vacuum applications
NASA Astrophysics Data System (ADS)
Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.
2013-12-01
We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera provides a live view inside cryogenic set-ups and makes it possible to record video.
Orbital docking system centerline color television camera system test
NASA Technical Reports Server (NTRS)
Mongan, Philip T.
1993-01-01
A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is optically adequate for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for the docking lights, to provide maximum shadow management, and that the centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed, and target alignment aids should be selected during simulated docking runs.
Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.
Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi
2017-05-01
When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the angular aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scene and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.
Vijayakumar, A; Rosen, Joseph
2017-06-12
Recording digital holograms without wave interference simplifies the optical system, increases its power efficiency, and avoids complicated alignment procedures. We propose and demonstrate a new technique of digital hologram acquisition without two-wave interference. Incoherent light emitted from an object propagates through a random-like coded phase mask and is recorded directly, without interference, by a digital camera. In the training stage of the system, a point spread hologram (PSH) is first recorded by modulating the light diffracted from a point object by the coded phase masks. At least two different masks should be used to record two different intensity distributions at all possible axial locations. The various recorded patterns at every axial location are superposed in the computer to obtain a complex valued PSH library cataloged by axial location. Following the training stage, an object is placed within the axial boundaries of the PSH library and the light diffracted from the object is once again modulated by the same phase masks. The intensity patterns are recorded and superposed exactly as for the PSH, to yield a complex hologram of the object. The object information at any particular plane is reconstructed by a cross-correlation between the complex valued hologram and the appropriate element of the PSH library. The characteristics and performance of the proposed system were compared with those of an equivalent regular imaging system.
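The reconstruction step is a cross-correlation, which is most naturally computed in the Fourier domain. A hedged numpy sketch, with any phase-only or other filtering the authors may apply omitted:

```python
import numpy as np

def reconstruct_plane(obj_hologram, psh_at_depth):
    """Cross-correlate the object hologram with one PSH library element.

    Both inputs are complex 2D arrays of the same shape; the output is the
    reconstructed intensity at the axial plane of that PSH element.
    """
    O = np.fft.fft2(obj_hologram)
    H = np.fft.fft2(psh_at_depth)
    corr = np.fft.fftshift(np.fft.ifft2(O * np.conj(H)))
    return np.abs(corr)

# e.g. img = reconstruct_plane(hologram, psh_library[z_index])
```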
View of Scientific Instrument Module to be flown on Apollo 15
1971-06-27
S71-2250X (June 1971) --- A close-up view of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 lunar landing mission. Mounted in a previously vacant sector of the Apollo Service Module (SM), the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data. SIM equipment includes a laser altimeter for accurate measurement of height above the lunar surface; a large-format panoramic camera for mapping, correlated with a metric camera and the laser altimeter for surface mapping; a gamma ray spectrometer on a 25-foot extendible boom; a mass spectrometer on a 21-foot extendible boom; X-ray and alpha particle spectrometers; and a subsatellite which will be injected into lunar orbit carrying particle detectors, a magnetometer, and an S-band transponder.
A mobile light source for carbon/nitrogen cameras
NASA Astrophysics Data System (ADS)
Trower, W. P.; Karev, A. I.; Melekhin, V. N.; Shvedunov, V. I.; Sobenin, N. P.
1995-05-01
The pulsed light source for carbon/nitrogen cameras developed to image concealed narcotics/explosives is described. This race-track microtron will produce 40 mA pulses of 70 MeV electrons, have minimal size and weight, and maximal ruggedness and reliability, so that it can be transported on a truck.
Lights, Camera, Read! Arizona Reading Program Manual.
ERIC Educational Resources Information Center
Arizona State Dept. of Library, Archives and Public Records, Phoenix.
This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome…
Opto-fluidics based microscopy and flow cytometry on a cell phone for blood analysis.
Zhu, Hongying; Ozcan, Aydogan
2015-01-01
Blood analysis is one of the most important clinical tests for medical diagnosis. Flow cytometry and optical microscopy are widely used techniques to perform blood analysis and therefore cost-effective translation of these technologies to resource limited settings is critical for various global health as well as telemedicine applications. In this chapter, we review our recent progress on the integration of imaging flow cytometry and fluorescent microscopy on a cell phone using compact, light-weight and cost-effective opto-fluidic attachments integrated onto the camera module of a smartphone. In our cell-phone based opto-fluidic imaging cytometry design, fluorescently labeled cells are delivered into the imaging area using a disposable micro-fluidic chip that is positioned above the existing camera unit of the cell phone. Battery powered light-emitting diodes (LEDs) are butt-coupled to the sides of this micro-fluidic chip without any lenses, which effectively acts as a multimode slab waveguide, where the excitation light is guided to excite the fluorescent targets within the micro-fluidic chip. Since the excitation light propagates perpendicular to the detection path, an inexpensive plastic absorption filter is able to reject most of the scattered light and create a decent dark-field background for fluorescent imaging. With this excitation geometry, the cell-phone camera can record fluorescent movies of the particles/cells as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the solution under test. With a similar opto-fluidic design, we have recently demonstrated imaging and automated counting of stationary blood cells (e.g., labeled white blood cells or unlabeled red blood cells) loaded within a disposable cell counting chamber. We tested the performance of this cell-phone based imaging cytometry and blood analysis platform by measuring the density of red and white blood cells as well as hemoglobin concentration in human blood samples, which showed a good match to our measurement results obtained using a commercially available hematology analyzer. Such a cell-phone enabled opto-fluidics microscopy, flow cytometry, and blood analysis platform could be especially useful for various telemedicine applications in remote and resource-limited settings.
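The counting step of such a platform reduces to segmenting bright blobs in each frame. A hedged sketch with scipy, standing in for whatever frame processing the authors actually use; the threshold rule and minimum blob size are assumptions:

```python
import numpy as np
from scipy import ndimage

def count_cells(frame, threshold=None, min_pixels=4):
    """Count bright fluorescent blobs in one dark-background frame."""
    img = frame.astype(float)
    if threshold is None:                      # crude automatic threshold
        threshold = img.mean() + 3 * img.std()
    mask = img > threshold
    labels, n = ndimage.label(mask)            # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_pixels))    # reject single-pixel noise

# density = count_cells(frame) / chamber_volume_uL   (volume from chip geometry)
```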
Modular, Microprocessor-Controlled Flash Lighting System
NASA Technical Reports Server (NTRS)
Kiefer, Dwayne; Gray, Elizabeth; Skupinski, Robert; Stachowicz, Arthur; Birchenough, William
2006-01-01
A microprocessor-controlled lighting system generates brief, precisely timed, high-intensity flashes of light for scientific imaging at frame rates up to about 1 kHz. The system includes an array of light-emitting diodes (LEDs) that are driven in synchronism with an externally generated timing signal (for example, a timing signal generated by a video camera). The light output can be varied in peak intensity, pulse duration, pulse delay, and pulse rate, all depending on the timing signal and associated externally generated control signals. The array of LEDs comprises as many as 16 LED panels that can be attached together. Each LED panel is a module consisting of a rectangular subarray of 10 by 20 LEDs of advanced design on a printed-circuit board in a mounting frame with a power/control connector. The LED panels are controlled by an LED control module that contains an AC-to-DC power supply, a control board, and 8 LED-panel driver boards. In prior LED panels, the LEDs are packaged at less than maximum areal densities in bulky metal housings that reduce effective active areas. In contrast, in the present LED panels, the LEDs are packed at maximum areal density so as to afford 100-percent active area and so that when panels are joined side by side to form the array, there are no visible seams between them and the proportion of active area is still 100 percent. Each panel produces an illuminance of 0.5 × 10^4 lux at a distance of 5/8 in. (approx. 1.6 cm). The LEDs are driven according to a pulse-width-modulation control scheme that makes it safe to drive the LEDs beyond their rated steady-state currents in order to generate additional light during short periods. The drive current and the pulse-width modulation for each LED panel can be controlled independently of those of the other 15 panels. The maximum allowable duration of each pulse of drive current is a function of the amount of overdrive, the total time to be spent in overdrive operation, and the limitations of the LEDs. The system is configured to limit the overdrive according to values specific to each type of LED in the array. These values are coded into firmware to prevent inadvertent damage to the LED panels.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Moon, Russell
2009-11-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed and then re-emitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then re-emerges into three-dimensional space through the electron. To test this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect, which are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere in the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five further tests were then performed to eliminate other possible explanations, such as quantum-entangled electrons.
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
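For illustration, the range computation reduces to simple triangulation once a spot is located in the image. The sketch below assumes the laser beam runs parallel to the camera's optical axis at a known baseline; the focal length and baseline are made-up values, not the patent's.

```python
# Minimal sketch of range-from-spot triangulation, assuming the laser
# beam is parallel to the camera's optical axis at a known baseline.
# The default focal length (pixels) and baseline (m) are hypothetical.
def range_from_spot(x_pixels, focal_px=1400.0, baseline_m=0.10):
    """Range (m) from the image offset of one projected light spot."""
    if x_pixels <= 0:
        raise ValueError("spot must be offset from the principal point")
    # a spot at world offset b projects to x = f*b/Z, so Z = f*b/x
    return focal_px * baseline_m / x_pixels
```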
High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces
NASA Astrophysics Data System (ADS)
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong
2012-10-01
This paper presents a novel 3-D scanning technique for high-reflective surfaces based on the phase-shifting fringe projection method. A high dynamic range fringe acquisition (HDRFA) technique is developed to process the fringe images reflected from shiny surfaces; it generates a synthetic fringe image by fusing raw fringe patterns acquired with different camera exposure times and different projector illumination intensities. A fringe image fusion algorithm avoids saturation and under-illumination by choosing, for each pixel, the raw fringe with the highest fringe modulation intensity. A method for auto-selection of the HDRFA parameters is developed, which greatly increases measurement automation. The synthetic fringes achieve a higher signal-to-noise ratio (SNR) under ambient light when the HDRFA parameters are optimized. Experimental results show that the proposed technique can successfully measure objects with high-reflective surfaces and is insensitive to ambient light.
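The per-pixel fusion rule can be sketched as follows, assuming an N-step phase-shifted fringe stack per exposure; the array layout, saturation level, and uniform phase steps are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch of HDR fringe fusion: for each pixel, keep the raw
# exposure whose phase-shifted fringes have the highest modulation
# and are unsaturated. `stacks` has shape (n_exposures, n_steps, H, W).
def fuse_fringes(stacks, sat_level=250):
    n_exp, n_steps = stacks.shape[:2]
    phases = 2 * np.pi * np.arange(n_steps) / n_steps
    s = np.tensordot(np.sin(phases), stacks, axes=(0, 1))   # (n_exp, H, W)
    c = np.tensordot(np.cos(phases), stacks, axes=(0, 1))
    modulation = 2.0 / n_steps * np.hypot(s, c)
    modulation[stacks.max(axis=1) >= sat_level] = -1        # veto saturated
    best = modulation.argmax(axis=0)                        # (H, W)
    return np.take_along_axis(
        stacks, best[None, None], axis=0)[0]                # (n_steps, H, W)
```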
Wang, Yajun; Laughner, Jacob I.; Efimov, Igor R.; Zhang, Song
2013-01-01
This paper presents a two-frequency binary phase-shifting technique to measure the three-dimensional (3D) absolute shape of beating rabbit hearts. Due to the low contrast of the cardiac surface, the projector and the camera must remain in focus, which poses a challenge for existing binary methods, whose measurement accuracy is low in this regime. To overcome this challenge, this paper proposes to utilize the optimal pulse width modulation (OPWM) technique to generate high-frequency fringe patterns, and the error-diffusion dithering technique to produce low-frequency fringe patterns. Furthermore, this paper shows that fringe patterns produced with blue light provide the best quality measurements compared to fringe patterns generated with red or green light, and that the minimum data acquisition speed for high-quality measurements is around 800 Hz for a rabbit heart beating at 180 beats per minute. PMID:23482151
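As one concrete instance of error-diffusion dithering for binary projectors, the classic Floyd-Steinberg kernel can binarize a low-frequency sinusoidal fringe. This is a generic sketch, not the authors' specific dithering variant.

```python
import numpy as np

# Minimal sketch of Floyd-Steinberg error diffusion, one standard way
# to binarize a low-frequency sinusoidal fringe for a binary projector;
# the diffusion weights are the classic 7/16, 3/16, 5/16, 1/16.
def floyd_steinberg(img):
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out >= 128

# e.g. binarize a low-frequency fringe for projection (sizes assumed)
x = np.arange(1024)
fringe = 127.5 * (1 + np.sin(2 * np.pi * x / 256))
pattern = floyd_steinberg(np.tile(fringe, (768, 1)))
```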
Characteristics of an Imaging Polarimeter for the Powell Observatory
NASA Astrophysics Data System (ADS)
Hall, Shannon; Henson, G.
2010-01-01
A dual-beam imaging polarimeter has been built for use on the 14-inch Schmidt-Cassegrain telescope at the ETSU Harry D. Powell Observatory. The polarimeter includes a rotating half-wave plate and a Wollaston prism to separate light into two orthogonal linearly polarized rays. A TEC-cooled CCD camera is used to detect the modulated polarized light. We present here measurements of the polarization of polarimetric standard stars. By measuring unpolarized and polarized standard stars we are able to establish the instrumental polarization and the efficiency of the instrument. The polarimeter will initially be used as a dedicated instrument in an ongoing project to monitor the eclipsing binary star Epsilon Aurigae. This project was funded by a partnership between the National Science Foundation (NSF AST-0552798), Research Experience for Undergraduates (REU), and the Department of Defense (DoD) ASSURE (Awards to Stimulate and Support Undergraduate Research Experiences) programs.
Supercontinuum as a light source for miniaturized endoscopes.
Lu, M K; Lin, H Y; Hsieh, C C; Kao, F J
2016-09-01
In this work, we have successfully implemented supercontinuum-based illumination through single-fiber coupling. The integration of single-fiber illumination with a miniature CMOS sensor forms a very slim and powerful camera module for endoscopic imaging. A set of bench tests and in vivo animal experiments was conducted to characterize the corresponding illuminance, spectral profile, intensity distribution, and image quality. The key illumination parameters of the supercontinuum, including color rendering index (CRI: 72%~97%) and correlated color temperature (CCT: 3,100K~5,200K), are modified with external filters and compared with those from an LED light source (CRI~76% & CCT~6,500K). The very high spatial coherence of the supercontinuum allows high-luminosity conduction through a single multimode fiber (core size ~400 μm), whose distal end is fitted with a diffusing tip to broaden the solid angle of illumination (from less than 10° to more than 80°).
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pâl.
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, a challenge easily handled by using a common triggering solution to control the shutters of the cameras. The other is automatic exposure synchronization, which has no straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch, visible to all cameras, that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass; this provided acceptable results only in limited light conditions. A second approach exploits the overlapping areas between adjacent cameras, creating pairs of perfectly matched video streams; however, some disparity still existed between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, achieving a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
NASA Astrophysics Data System (ADS)
Aguilar, J. A.; Basili, A.; Boccone, V.; Cadoux, F.; Christov, A.; della Volpe, D.; Montaruli, T.; Płatos, Ł.; Rameez, M.
2015-01-01
The focal-plane cameras of γ-ray telescopes frequently use light concentrators in front of the light sensors. The purpose of these concentrators is to increase the effective area of the camera as well as to reject stray light arriving at large incident angles. These light concentrators are usually based on the Winston cone design. In this contribution we present the design of a hexagonal hollow light concentrator with a lateral profile optimized using a cubic Bézier function to achieve a higher collection efficiency in the angular region of interest. The design presented here is optimized for a Davies-Cotton telescope with a primary mirror about 4 m in diameter and a focal length of 5.6 m. The described concentrators are part of an innovative camera made up of silicon-photomultiplier sensors, although a similar approach can be used for other sizes of single-mirror telescopes with different camera sensors, including photomultipliers. The challenge of our approach is to achieve a cost-effective design suitable for standard industrial production of both the plastic concentrator substrate and the reflective coating while maximizing the optical performance. In this paper we also describe the optical set-up used to measure the absolute collection efficiency of the light concentrators and demonstrate our good understanding of the measured data using a professional ray-tracing simulation.
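For reference, a cubic Bézier profile is just a weighted blend of four control points. The sketch below evaluates such a lateral profile; the control-point coordinates are invented for illustration and are not the optimized values from the paper.

```python
import numpy as np

# Hedged sketch of a cubic Bezier lateral profile for a hollow light
# concentrator; the four control points below are hypothetical, not
# the paper's optimized values.
def cubic_bezier(p0, p1, p2, p3, n=100):
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# exit aperture at the origin, entrance aperture 20 mm above (made up)
profile = cubic_bezier(np.array([6.0, 0.0]), np.array([8.0, 5.0]),
                       np.array([11.0, 14.0]), np.array([12.0, 20.0]))
```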
Surveyor 3: Bacterium isolated from lunar retrieved television camera
NASA Technical Reports Server (NTRS)
Mitchell, F. J.; Ellis, W. L.
1972-01-01
Microbial analysis was the first of several studies of the retrieved camera and was performed immediately after the camera was opened. The emphasis of the analysis was placed upon isolating microorganisms that could be potentially pathogenic for man. Every step in the retrieval of the Surveyor 3 television camera was analyzed for possible contamination sources, including camera contact by the astronauts, ingassing in the lunar and command modules during the mission or at splashdown, and handling during quarantine, disassembly, and analysis at the Lunar Receiving Laboratory.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle (UAV), or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate the UAV's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we show how to safely land a drone in a GPS-denied environment using a remote-marker-based tracking algorithm that relies on a single visible-light camera sensor. Instead of hand-crafted features, our algorithm uses a convolutional neural network, named lightDenseYOLO, to extract trained features from an input image and predict a marker's location from the drone's visible-light camera. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Dual-Photoelastic-Modulator-Based Polarimetric Imaging Concept for Aerosol Remote Sensing
NASA Technical Reports Server (NTRS)
Diner, David J.; Davis, Ab; Hancock, Bruce; Gutt, Gary; Chipman, Russell A.; Cairns, Brian
2007-01-01
A dual-photoelastic-modulator- (PEM-) based spectropolarimetric camera concept is presented as an approach for global aerosol monitoring from space. The most challenging performance objective is to measure degree of linear polarization (DOLP) with an uncertainty of less than 0.5% in multiple spectral bands, at moderately high spatial resolution, over a wide field of view, and for the duration of a multiyear mission. To achieve this, the tandem PEMs are operated as an electro-optic circular retardance modulator within a high-performance reflective imaging system. Operating the PEMs at slightly different resonant frequencies generates a beat signal that modulates the polarized component of the incident light at a much lower heterodyne frequency. The Stokes parameter ratio q = Q/I is obtained from measurements acquired from each pixel during a single frame, providing insensitivity to pixel responsivity drift and minimizing polarization artifacts that conventionally arise when this quantity is derived from differences in the signals from separate detectors. Similarly, u = U/I is obtained from a different pixel; q and u are then combined to form the DOLP. A detailed accuracy and tolerance analysis for this polarimeter is presented.
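The per-pixel arithmetic described above is compact enough to state directly: a minimal sketch, assuming co-registered Stokes images I, Q, and U.

```python
import numpy as np

# Minimal sketch of the quantities in the text: per-pixel Stokes
# ratios q = Q/I and u = U/I combined into the degree of linear
# polarization, DOLP = sqrt(q^2 + u^2).
def dolp(I, Q, U):
    q = Q / I
    u = U / I
    return np.hypot(q, u)
```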
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons, the improvement coming from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
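Recovering a position from the mask readout ultimately requires converting a Gray-coded word back to binary, which is a cumulative XOR over the bits. A minimal sketch of the standard conversions follows; the detector-specific readout logic is not modeled.

```python
# Minimal sketch of the standard Gray-code conversions; each photodiode
# behind the mask contributes one bit of the Gray-coded address.
def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    mask = g >> 1
    while mask:          # cumulative XOR over all higher bits
        g ^= mask
        mask >>= 1
    return g

assert gray_to_binary(binary_to_gray(42)) == 42
```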
Portable fiber-optic taper coupled optical microscopy platform
NASA Astrophysics Data System (ADS)
Wang, Weiming; Yu, Yan; Huang, Hui; Ou, Jinping
2017-04-01
The optical fiber taper coupled with a CMOS sensor has the advantages of high sensitivity, compact structure, and low distortion in an imaging platform, so it is widely used in low-light, high-speed, and X-ray imaging systems. The coupled structure is also well suited to microscopy imaging. Toward this end, we developed a microscopic imaging platform based on the coupling of a cellphone camera module and a fiber-optic taper for the measurement of human blood samples and Ascaris lumbricoides. The platform, weighing 70 grams, is based on the existing camera module of the smartphone and a fiber-optic taper providing a magnification factor of 6x. The top facet of the taper, on which samples are placed, serves as an irregular sampling grid for contact imaging. The magnified images of the sample, located on the bottom facet of the fiber, are then projected onto the CMOS sensor. This paper introduces the portable medical imaging system based on fiber-optic coupling to CMOS and theoretically analyzes the feasibility of the system. The image data and processing results can either be stored in memory or transmitted to remote medical institutions for telemedicine. We validate the performance of this cell-phone-based microscopy platform using human blood samples and a test target, achieving results comparable to a standard bench-top microscope.
MagAO: Status and on-sky performance of the Magellan adaptive optics system
NASA Astrophysics Data System (ADS)
Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Kopon, Derek; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Puglisi, Alfio; Pinna, Enrico; Briguglio, Runa; Xompero, Marco; Quirós-Pacheco, Fernando; Bailey, Vanessa; Follette, Katherine B.; Rodigas, T. J.; Wu, Ya-Lin; Arcidiacono, Carmelo; Argomedo, Javier; Busoni, Lorenzo; Hare, Tyson; Uomoto, Alan; Weinberger, Alycia
2014-07-01
MagAO is the new adaptive optics system with visible-light and infrared science cameras, located on the 6.5-m Magellan "Clay" telescope at Las Campanas Observatory, Chile. The instrument locks on natural guide stars (NGS) from 0th to 16th R-band magnitude, measures turbulence with a modulating pyramid wavefront sensor binnable from 28×28 to 7×7 subapertures, and uses a 585-actuator adaptive secondary mirror (ASM) to provide flat wavefronts to the two science cameras. MagAO is a mutated clone of the similar AO systems at the Large Binocular Telescope (LBT) at Mt. Graham, Arizona. The high-level AO loop controls up to 378 modes and operates at frame rates up to 1000 Hz. The instrument has two science cameras: VisAO operating from 0.5-1μm and Clio2 operating from 1-5 μm. MagAO was installed in 2012 and successfully completed two commissioning runs in 2012-2013. In April 2014 we had our first science run that was open to the general Magellan community. Observers from Arizona, Carnegie, Australia, Harvard, MIT, Michigan, and Chile took observations in collaboration with the MagAO instrument team. Here we describe the MagAO instrument, describe our on-sky performance, and report our status as of summer 2014.
NASA Astrophysics Data System (ADS)
Groch, A.; Seitel, A.; Hempel, S.; Speidel, S.; Engelbrecht, R.; Penne, J.; Höller, K.; Röhl, S.; Yung, K.; Bodenstedt, S.; Pflaum, F.; dos Santos, T. R.; Mersmann, S.; Meinzer, H.-P.; Hornegger, J.; Maier-Hein, L.
2011-03-01
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel Time-of-Flight (ToF) endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as means for intraoperative endoscopic surface registration.
Apollo 17 Command/Service modules photographed from lunar module in orbit
1972-12-14
AS17-145-22254 (14 Dec. 1972) --- An excellent view of the Apollo 17 Command and Service Modules (CSM) photographed from the Lunar Module (LM) "Challenger" during rendezvous and docking maneuvers in lunar orbit. The LM ascent stage, with astronauts Eugene A. Cernan and Harrison H. Schmitt aboard, had just returned from the Taurus-Littrow landing site on the lunar surface. Astronaut Ronald E. Evans remained with the CSM in lunar orbit. Note the exposed Scientific Instrument Module (SIM) Bay in Sector 1 of the Service Module (SM). Three experiments are carried in the SIM bay: S-209 lunar sounder, S-171 infrared scanning spectrometer, and the S-169 far-ultraviolet spectrometer. Also mounted in the SIM bay are the panoramic camera, mapping camera and laser altimeter used in service module photographic tasks. A portion of the LM is on the right.
A Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)
2001-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
Impact of New Camera Technologies on Discoveries in Cell Biology.
Stuurman, Nico; Vale, Ronald D
2016-08-01
New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured-light measurement system comprising one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate it. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated, and the projector can be calibrated using a well-established camera calibration algorithm, as sketched below. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
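Once the speckle correspondences yield, for each board pose, the feature points expressed in projector-image coordinates, the "inverse camera" can be calibrated with a standard routine. A hedged sketch using OpenCV's calibrateCamera follows; the input arrays and projector resolution are assumptions about what the correspondence step produces, not the paper's exact interface.

```python
import numpy as np
import cv2

# Hedged sketch: once digital image correlation has mapped each board
# feature point into projector-image coordinates, the projector is
# calibrated exactly like a camera. obj_pts/proj_pts are assumed to
# come from the speckle-correspondence step described above.
def calibrate_projector(obj_pts, proj_pts, proj_size=(1920, 1080)):
    """obj_pts: list of (N,3) float32 board coordinates, one per pose;
    proj_pts: list of (N,1,2) float32 projector-pixel coordinates."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, proj_pts, proj_size, None, None)
    return rms, K, dist   # reprojection error, intrinsics, distortion
```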
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the Super- HARP camera is very effective for underwater use.
NASA Astrophysics Data System (ADS)
Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.
2017-02-01
In the development of new near-infrared (NIR) fluorescence dyes for image-guided surgery, there is a need for NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to present clinical systems, which are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom and measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image-guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced-CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue; however, they offered no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for development and quality control of new NIR fluorescence-guided surgery equipment.
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-07-08
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper builds on recent research in IHC. Some problems exist in the detection of headlights, such as erroneous detection of street lights, sign lights, and the reflective plate of the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights use the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen).
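Of the three matchers mentioned, the Spectral Angle Mapper is the simplest to state: the angle between a pixel spectrum and a reference spectrum, treated as vectors. A minimal sketch follows; the reference library of lamp spectra is a hypothetical placeholder.

```python
import numpy as np

# Minimal sketch of the Spectral Angle Mapper (SAM): the angle between
# a pixel spectrum and a reference lamp spectrum (LED/HID/halogen).
# The reference library passed to classify() is a hypothetical input.
def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))  # radians; small = match

def classify(pixel, library):
    """Return the library entry whose spectrum best matches the pixel."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))
```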
Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.
Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael
2017-12-01
Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras in 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated from them CO and Site Factors. We compared the same parameters obtained from different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated from the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional cameras images, although with relatively little absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be directly used as they are for ecological studies, or converted with specific models for a better comparison to traditional cameras.
Qualification Tests of Micro-camera Modules for Space Applications
NASA Astrophysics Data System (ADS)
Kimura, Shinichi; Miyasaka, Akira
Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.
NASA Astrophysics Data System (ADS)
Kutulakos, Kyros N.; O'Toole, Matthew
2015-03-01
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR), and structured light is a simple and rapid way to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift patterns. We use a camera and a light projector that casts structured-light patterns onto the objects. In this system, a single camera takes photos on the left and right sides of the object respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so explicit camera calibration can be omitted and camera placement is no longer constrained. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of the above makes the reconstruction more precise. We ran experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
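For illustration, the column-coding Gray-code pattern stack for the projector can be generated in a few lines; the projector resolution here is an assumed value, and the bit ordering is one common convention rather than the system's documented one.

```python
import numpy as np

# Hedged sketch of generating a column-coding Gray-code pattern stack
# for a projector of width w; each bit plane becomes one projected image.
# The resolution and most-significant-bit-first ordering are assumptions.
def gray_code_patterns(w=1024, h=768):
    cols = np.arange(w)
    gray = cols ^ (cols >> 1)                    # binary-reflected Gray code
    n_bits = int(np.ceil(np.log2(w)))
    planes = [((gray >> b) & 1).astype(np.uint8) * 255
              for b in range(n_bits - 1, -1, -1)]
    return [np.tile(p, (h, 1)) for p in planes]  # one image per bit
```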
Counting neutrons with a commercial S-CMOS camera
NASA Astrophysics Data System (ADS)
Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega
2018-01-01
It is possible to detect individual flashes from thermal-neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Preliminary results indicated that the recognition efficiency could be improved by optimizing the light collection and the image processing. We report on this ongoing work, a collaboration between ESS Bilbao and the ILL. The main progress concerns the on-line treatment of the imaging data: if this technology is to work on a genuine scientific instrument, all processing must happen on line, to avoid accumulating large amounts of image data for off-line analysis. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which manages the data flow from the camera and converts it into a reasonable "neutron impact" data flow, as from a usual neutron-counting detector. The main challenge of the endeavor is the optical light collection from the scintillator: while the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light-collection techniques have been tried, and results will be shown for setups that improve the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in recognizing neutron signals, effective in rejecting noise signals (internal and external to the camera), and simple enough to be implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car's microprocessor is the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers, and the image sensor is the CMOS digital camera OV7725. The acquired track data are processed by a corresponding algorithm to obtain track sideline information. Pulse-width modulation (PWM) is used to drive the motor and servo, and motor speed control and servo steering control are realized with a digital incremental PID algorithm (sketched below). In the project design, the IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, the camera image-processing module, the hardware power-distribution module, and the motor-drive and servo-control modules, completing the design of the intelligent car control system.
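A minimal sketch of the digital incremental PID mentioned above: the controller emits a change in PWM duty rather than an absolute output, so only the last two errors need to be stored. Gains and units are left to the application.

```python
# Minimal sketch of a digital incremental PID for motor speed control:
# the output is a *change* in PWM duty,
#   du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2]),
# so no separate integrator state (and no wind-up term) is stored.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du       # add this to the current PWM duty cycle
```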
Apollo 12 crew assisted with egressing command module after landing
1969-11-24
S69-22271 (24 Nov. 1969) --- A United States Navy Underwater Demolition Team swimmer assists the Apollo 12 crew during recovery operations in the Pacific Ocean. In the life raft are astronauts Charles Conrad Jr. (facing camera), commander; Richard F. Gordon Jr. (middle), command module pilot; and Alan L. Bean (nearest camera), lunar module pilot. The three crew men of the second lunar landing mission were picked up by helicopter and flown to the prime recovery ship, USS Hornet. Apollo 12 splashed down at 2:58 p.m. (CST), Nov. 24, 1969, near American Samoa. While astronauts Conrad and Bean descended in the Lunar Module (LM) "Intrepid" to explore the Ocean of Storms region of the moon, astronaut Gordon remained with the Command and Service Modules (CSM) "Yankee Clipper" in lunar orbit.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work first extracts the sub-aperture images from the light field images and uses the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required with traditional cameras. This effectively mitigates the time-consuming, laborious aspects of 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
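The SIFT registration step between two sub-aperture images can be sketched with OpenCV, assuming a build that ships SIFT (4.4 or later) and using Lowe's ratio test to keep reliable matches; this is illustrative, not the authors' exact pipeline.

```python
import cv2

# Hedged sketch of SIFT registration between two sub-aperture images,
# with Lowe's ratio test; assumes an OpenCV build that includes SIFT.
def match_subaperture(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    pts1 = [k1[m.queryIdx].pt for m in good]
    pts2 = [k2[m.trainIdx].pt for m in good]
    return pts1, pts2   # matched coordinates, fed into the SfM stage
```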
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; Tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument comprises a 405 nm laser and a white-light LED source for excitation and illumination. A single 90 g color CCD camera is coupled to a 10 mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Li, Jin; Liu, Zilong; Liu, Si
2017-02-20
During on-board imaging with satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously degrade image quality and image positioning. In this paper, we develop a mathematical model of the vibration modulation transfer function (VMTF) of a remote-sensing camera. The total MTF of the camera is reduced by the VMTF, i.e., the image quality is degraded. To avoid this degradation, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM), which transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiments show that the M2052 alloy suppresses image motion below 125 Hz, the vibration frequency range of satellite platforms, and that the camera optical system has a higher MTF with the M2052 vibration suppression than without it.
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method suffers errors in camera modulation transfer function (MTF) measurement when the fast Fourier transform (FFT) is used, because tilt-angle errors in the knife-edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations on noisy images at different nonuniform sampling rates of the ESF show that the proposed modified slanted-edge method successfully eliminates the error due to nonuniform sampling of the ESF. An experimental setup for camera MTF measurement was established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method measures the camera MTF more accurately than the ISO 12233 slanted-edge method.
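For context, the tail end of a slanted-edge computation, after the ESF has been resampled onto a uniform grid, looks like the sketch below; the NUFFT correction that is the paper's contribution replaces that resampling assumption and is not reproduced here.

```python
import numpy as np

# Minimal sketch of the final steps of a slanted-edge MTF computation,
# assuming the edge-spread function (ESF) has already been resampled
# onto a uniform, oversampled grid.
def mtf_from_esf(esf, oversample=4):
    lsf = np.gradient(esf)                    # ESF -> line-spread function
    lsf = lsf * np.hanning(lsf.size)          # taper to limit leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # normalize DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf
```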
Polymorphic robotic system controlled by an observing camera
NASA Astrophysics Data System (ADS)
Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas
2010-02-01
Polymorphic robotic systems, composed of many modular robots that act in coordination to achieve a goal defined at the system level, have been drawing the attention of industrial and research communities because they bring additional flexibility to many applications. This paper introduces a new polymorphic robotic system in which the detection and control of the modules are attained by a stationary observing camera. The modules have no sensory equipment for positioning or detecting each other; they are self-powered, equipped with wireless communication and locking mechanisms, and marked so that the image-processing algorithm can detect the position and orientation of each of them in a two-dimensional space. Since the system does not depend on the modules for positioning and commanding others, if one or more modules malfunction, the system can continue operating with the rest. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering-parameter decision methodology is included. To the best of the authors' knowledge, this is the first study to introduce a remote camera observer to control the modules of a polymorphic robotic system.
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a high-dynamic-range CMOS camera operating in rolling-shutter mode, a complete imaging system is designed around the CMOS image sensor NSC1105. The system adopts a CMOS+ADC+FPGA+Camera Link processing architecture, and the paper describes the design and implementation of the hardware. The camera software system, which consists of a CMOS timing-drive module, an image acquisition module, and a transmission control module, is written in Verilog and runs on a Xilinx FPGA. The ISim simulator of ISE 14.6 is used for signal simulation. Imaging experiments show that the system delivers 1280x1024 pixel resolution at a frame rate of 25 fps with a dynamic range of more than 120 dB, an imaging quality that satisfies the design requirements.
Multispectral imaging system for contaminant detection
NASA Technical Reports Server (NTRS)
Poole, Gavin H. (Inventor)
2003-01-01
An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.
System for photometric calibration of optoelectronic imaging devices especially streak cameras
Boni, Robert; Jaanimagi, Paul
2003-11-04
A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glaser, Adam K., E-mail: Adam.K.Glaser@dartmouth.edu, E-mail: Brian.W.Pogue@dartmouth.edu; Andreozzi, Jacqueline M.; Davis, Scott C.
Purpose: A novel technique for optical dosimetry of dynamic intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) plans was investigated for the first time by capturing images of the induced Cherenkov radiation in water. Methods: A high-sensitivity, intensified CCD camera (ICCD) was configured to acquire a two-dimensional (2D) projection image of the Cherenkov radiation induced by IMRT and VMAT plans, based on the Task Group 119 (TG-119) C-Shape geometry. Plans were generated using the Varian Eclipse treatment planning system (TPS) and delivered using 6 MV x-rays from a Varian TrueBeam Linear Accelerator (Linac) incident on a water tank doped with the fluorophore quinine sulfate. The ICCD acquisition was gated to the Linac target trigger pulse to reduce background light artifacts, read out for a single radiation pulse, and binned to a resolution of 512 × 512 pixels. The resulting videos were analyzed temporally for various regions of interest (ROI) covering the planning target volume (PTV) and organ at risk (OAR), and summed to obtain an overall light intensity distribution, which was compared to the expected dose distribution from the TPS using a gamma-index analysis. Results: The chosen camera settings resulted in 23.5 frames per second dosimetry videos. Temporal intensity plots of the PTV and OAR ROIs confirmed the preferential delivery of dose to the PTV versus the OAR, and the gamma analysis yielded 95.9% and 96.2% agreement between the experimentally captured Cherenkov light distribution and expected TPS dose distribution based upon a 3%/3 mm dose difference and distance-to-agreement criterion for the IMRT and VMAT plans, respectively. Conclusions: The results from this initial study demonstrate the first documented use of Cherenkov radiation for video-rate optical dosimetry of dynamic IMRT and VMAT treatment plans. The proposed modality has several potential advantages over alternative methods including the real-time nature of the acquisition, and upon future refinement may prove to be a robust and novel dosimetry method with both research and clinical applications.
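A brute-force version of the 3%/3 mm gamma-index comparison can be sketched as follows, assuming the measured light distribution and TPS dose are co-registered 2D arrays on a grid of known pitch; the search radius and global normalization are common choices, not details taken from the study.

```python
import numpy as np

# Hedged sketch of a brute-force 2D gamma-index analysis (global
# 3%/3 mm criterion) between a measured light distribution and the
# reference TPS dose; both arrays are assumed co-registered.
def gamma_pass_rate(measured, reference, pitch_mm=1.0,
                    dd=0.03, dta_mm=3.0, search_mm=9.0):
    r = int(search_mm / pitch_mm)
    norm = dd * reference.max()                 # global dose normalization
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    dist2 = ((ys * pitch_mm) ** 2 + (xs * pitch_mm) ** 2) / dta_mm ** 2
    pad = np.pad(measured, r, mode='edge')
    best = np.full(reference.shape, np.inf)
    for dy in range(-r, r + 1):                 # scan the search window
        for dx in range(-r, r + 1):
            shifted = pad[r + dy:r + dy + reference.shape[0],
                          r + dx:r + dx + reference.shape[1]]
            g2 = (((shifted - reference) / norm) ** 2
                  + dist2[dy + r, dx + r])
            best = np.minimum(best, g2)
    return float((np.sqrt(best) <= 1.0).mean())  # fraction of passing pixels
```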
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
NASA Astrophysics Data System (ADS)
Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.
2017-02-01
Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. Sound imaging is commonly done using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. The optical measurement of sound utilizes the phase modulation of light caused by sound. Since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for that purpose. However, the sensitivities of these methods decrease as the frequency of the sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of the sound, that technique is suitable for imaging sounds in the low-frequency range. The principle of imaging of sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates the high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
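To make the stated proportionality concrete: for light crossing a sound field, the accumulated phase shift is Δφ = (2π/λ)(∂n/∂p)∫p dl, so a measured phase map can be scaled back to pressure. A minimal sketch, assuming a uniform field along the beam; the wavelength, piezo-optic coefficient, and path length below are illustrative values, not the paper's parameters.

```python
import numpy as np

# Illustrative constants (assumptions, not from the paper):
WAVELENGTH = 532e-9   # laser wavelength [m]
DN_DP = 2.0e-9        # adiabatic piezo-optic coefficient of air [1/Pa], approx.
PATH_LENGTH = 0.1     # extent of the sound field along the beam [m]

def pressure_from_phase(phase_rad):
    """Sound pressure [Pa] from the optical phase shift [rad], assuming
    the pressure is uniform along the optical path."""
    return phase_rad * WAVELENGTH / (2 * np.pi * DN_DP * PATH_LENGTH)
```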
Synchronization of video recording and laser pulses including background light suppression
NASA Technical Reports Server (NTRS)
Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)
2004-01-01
An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning the laser pulse in the proper video field allows, after recording, for viewing of the laser light image on a video monitor using the pause mode of a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides background light suppression by increasing the shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed and the laser light is unaffected, while in all other frames the shutter speed is slower, allowing for normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.
Miniature photometric stereo system for textile surface structure reconstruction
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Kampouris, Christos; Malassiotis, Sotiris
2013-04-01
In this work a miniature photometric stereo system is presented, targeting the three-dimensional structural reconstruction of various fabric types. This is a supportive module to a robot system attempting to solve the well-known "laundry problem". The miniature device has been designed for mounting onto the robot gripper. It is composed of a low-cost off-the-shelf camera, operating in macro mode, and eight light-emitting diodes. The synchronization between image acquisition and lighting direction is controlled by an Arduino Nano board and software triggering. Ambient light has been addressed by a cylindrical enclosure. The direction of illumination is recovered by locating the reflection, i.e., the brightest point, on a mirror sphere, while a flat-fielding process compensates for the non-uniform illumination. For the evaluation of this prototype, the classical photometric stereo methodology has been used. The preliminary results on a large number of textiles are very promising for the successful integration of the miniature module into the robot system. The required interaction with the robot is implemented through the estimation of Brenner's focus measure. This metric successfully assesses focus quality with reduced time requirements in comparison to other well-accepted focus metrics. Beyond the targeted application, the small size of the developed system makes it a very promising candidate for applications with space restrictions, such as quality control in industrial production lines or object recognition based on structural information, and for applications where ease of operation and light weight are required, as in the biomedical field, especially dermatology.
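For reference, the classical photometric stereo step used for evaluation reduces to a per-pixel least-squares solve once the light directions are known (here assumed already recovered from the mirror-sphere reflections and flat-fielded). A minimal sketch with hypothetical names:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classical Lambertian photometric stereo.

    images: (k, h, w) grayscale stack under k known lights.
    light_dirs: (k, 3) unit illumination directions.
    Returns unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                  # (k, h*w)
    # Solve light_dirs @ G = intensities, where G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```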
NASA Technical Reports Server (NTRS)
2005-01-01
This spectacular image of comet Tempel 1 was taken 67 seconds after it obliterated Deep Impact's impactor spacecraft. The image was taken by the high-resolution camera on the mission's flyby craft. Scattered light from the collision saturated the camera's detector, creating the bright splash seen here. Linear spokes of light radiate away from the impact site, while reflected sunlight illuminates most of the comet surface. The image reveals topographic features, including ridges, scalloped edges and possibly impact craters formed long ago.
Development of an LYSO based gamma camera for positron and scinti-mammography
NASA Astrophysics Data System (ADS)
Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.
2009-08-01
In this research, the characteristics of combining PSPMTs (position-sensitive photomultiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for merging signals and readout. Borosilicate glass was chosen for scintillation light sharing in the crossover region. The deterioration caused by the light guide was characterized, and the influences of the light guide and crossover region on the separable crystal size were evaluated. According to the test results, a gamma camera with a crystal block covering an area of 90 × 90 mm2, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed and understood. Count-rate measurements showed that the 176Lu natural background did not severely influence single-photon imaging, accounting for less than 1/3 of all acquired events. These results show that, using light-sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector. This camera design also retains the capability for both positron and single-photon breast imaging applications. The separable crystal size is 2 mm with the 2 mm thick glass currently applied for light sharing.
Velocity visualization in gaseous flows
NASA Technical Reports Server (NTRS)
Hanson, R. K.; Hiller, B.; Hassa, C.; Booman, R. A.
1984-01-01
Techniques yielding simultaneous, multiple-point measurements of velocity in reacting or nonreacting flow fields have the potential to significantly impact basic and applied studies of fluid mechanics. This research program is aimed at investigating several candidate schemes which could provide such measurement capability. The concepts under study have in common the use of a laser source (to illuminate a column, a grid, a plane or a volume in the flow) and the collection of light at right angles (from Mie scattering, fluorescence, phosphorescence or chemiluminescence) using a multi-element solid-state camera (100 x 100 array of photodiodes). The work will include an overview and a status report of work in progress with particular emphasis on the method of Doppler-modulated absorption.
Subaperture correlation based digital adaptive optics for full field optical coherence tomography.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2013-05-06
This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. This method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs), or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate the proof of principle.
Real-time FPGA-based radar imaging for smart mobility systems
NASA Astrophysics Data System (ADS)
Saponara, Sergio; Neri, Bruno
2016-04-01
The paper presents an X-band FMCW (Frequency Modulated Continuous Wave) radar imaging system, called X-FRI, for surveillance in smart mobility applications. X-FRI allows for detecting the presence of targets (e.g. obstacles at a railway or urban road crossing, or ships in a small harbor), as well as their speed and position. With respect to alternative solutions based on LIDAR or camera systems, X-FRI operates in real time day and night, even in bad lighting and weather conditions. The radio-frequency transceiver is realized through COTS (Commercial Off The Shelf) components on a single board. An FPGA-based baseband platform allows for real-time radar image processing.
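For orientation, the standard FMCW relation behind such a system is f_b = 2RS/c, where S is the chirp slope; range follows from the dominant beat frequency. A minimal per-chirp sketch, assuming a digitized beat signal (parameter names are hypothetical; the paper's FPGA pipeline is not reproduced):

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def fmcw_range(beat_signal, fs, bandwidth_hz, chirp_time_s):
    """Target range [m] from one chirp's beat signal sampled at fs [Hz]."""
    n = len(beat_signal)
    spectrum = np.abs(np.fft.rfft(beat_signal * np.hanning(n)))
    f_beat = np.fft.rfftfreq(n, d=1.0 / fs)[np.argmax(spectrum)]
    slope = bandwidth_hz / chirp_time_s       # chirp slope S [Hz/s]
    return C * f_beat / (2.0 * slope)
```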
Photography Foundations: The Student Photojournalist.
ERIC Educational Resources Information Center
Glowacki, Joseph W.
Designed to aid student publications photographers in taking effective photographs, this publication provides discussions relating to the following areas: a publications photographer's self-image, the camera, camera handling, using the adjustable camera, the light meter, depth of field, shutter speeds and action pictures, lenses for publications…
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
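The frequency-locking idea can be summarized as a proportional controller on the line-period error. The sketch below is only an assumption-laden illustration: the gain, voltage limits, and the sign convention (a higher supply voltage is assumed to shorten the line period of the self-timed sensor) are hypothetical, not the authors' control core.

```python
def update_supply_voltage(v_now, period_meas, period_target,
                          gain=1e-4, v_min=1.6, v_max=2.0):
    """One control iteration: nudge the sensor supply voltage in
    proportion to the line-period error, then clamp to safe limits."""
    error = period_meas - period_target   # positive => sensor running slow
    v_next = v_now + gain * error         # assumed: more voltage => faster
    return min(max(v_next, v_min), v_max)
```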
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over the conventional ultrahigh-speed camera. One problem was that the beam splitter halved the incident light on each CCD. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by a factor of approximately two. By using the beam splitter in conjunction with the microlens array, it was possible to build an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
1972-04-07
S72-35971 (21 April 1972) --- A 360-degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle (LRV). This panorama was made while the LRV was parked at the rim of North Ray Crater (Stations 11 & 12) during the third Apollo 16 lunar surface extravehicular activity (EVA) by astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360-degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center (MCC). Astronauts Young, commander; and Duke, lunar module pilot; descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon. Astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Close-up view of RCA color television camera mounted on the LRV
1972-04-23
AS16-117-18754 (23 April 1972) --- A view of the smooth terrain in the general area of the North Ray Crater geological site, photographed by the Apollo 16 crew from the Lunar Roving Vehicle (LRV) shortly after leaving the immediate area of the geology site. The RCA color television camera is mounted on the front of the LRV and can be seen in the foreground, along with a small part of the high gain antenna, upper left. The tracks were made on the earlier trip to the North Ray Crater site. Astronaut Charles M. Duke Jr., lunar module pilot, exposed this view with his 70mm Hasselblad camera. Astronaut John W. Young, commander, said that this area was much smoother than the region around South Ray Crater. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Multi-Angle Snowflake Camera Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shkurko, Konstantin; Garrett, T.; Gaustad, K
The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of the depth of field of its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image processing relies on the OpenCV library, and derived aggregate statistics rely on averaging. See Sections 4.1 and 4.2 for more details on which variables are computed.
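The fallspeed computation reduces to distance over transit time between the two trigger arrays. Since the VAP is written in Python, a minimal sketch in that language (function and variable names are hypothetical):

```python
ARRAY_SEPARATION_MM = 32.0  # vertical spacing of the trigger arrays

def fallspeed_m_per_s(t_upper_s, t_lower_s):
    """Hydrometeor fallspeed [m/s] from the timestamps at which it
    crossed the upper and then the lower near-infrared array."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower-array crossing must follow upper-array crossing")
    return (ARRAY_SEPARATION_MM / 1000.0) / dt
```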
Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin
2015-10-20
Stable information from the sky light polarization pattern can be used for navigation, with advantages such as better anti-interference performance and no cumulative error effect. However, existing methods of sky light polarization measurement either lack real-time performance or require a complex system. Inspired by the navigational capability of the compound eyes of Cataglyphis, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Both the real-time detection and the simple, low-cost architecture demonstrate the advantages of the approach proposed in this paper.
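While the paper's exact analyzer layout is not reproduced here, a common reduction for a three-orientation linear-polarizer system is sketched below, assuming analyzers at 0°, 60°, and 120° (an assumption, not the authors' configuration): the three intensities yield the linear Stokes parameters and, from them, the degree and angle of linear polarization used for sky-polarization navigation.

```python
import numpy as np

def linear_polarization(i0, i60, i120):
    """DoLP and AoP from intensities behind analyzers at 0/60/120 degrees,
    using I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)
    s1 = 2.0 * i0 - s0
    s2 = 2.0 * (i60 - i120) / np.sqrt(3.0)
    dolp = np.sqrt(s1**2 + s2**2) / s0    # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)        # angle of polarization [rad]
    return dolp, aop
```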
Machine-Vision Aids for Improved Flight Operations
NASA Technical Reports Server (NTRS)
Menon, P. K.; Chatterji, Gano B.
1996-01-01
The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving imaging quality in low light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and a mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
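A toy sketch of the selective-fusion idea, under stated assumptions: the color luminance and mono images are registered, a generic Gaussian smoother stands in for the paper's guided filter, and a fixed dissimilarity threshold stands in for the BJND analysis. All names and values are placeholders, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def selective_detail_transfer(color_y, mono, sigma=3.0, thresh=0.1):
    """Transfer high-frequency detail from the mono image to the color
    luminance only where the two (smoothed) images agree."""
    blur = lambda img: gaussian_filter(img, sigma)
    detail = mono - blur(mono)                      # mono high frequencies
    dissimilarity = np.abs(blur(color_y) - blur(mono))
    reliable = dissimilarity < thresh               # pixels safe to fuse
    return np.where(reliable, color_y + detail, color_y)
```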
Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera
NASA Astrophysics Data System (ADS)
Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor
2015-09-01
Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
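The abstract does not state which ΔE formula underlies the ΔE<0.01 figure; for orientation, the sketch below computes the simple CIE76 difference between two XYZ measurements, assuming a D65 white point (an assumption).

```python
import numpy as np

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.089)):   # assumed D65 white point
    t = np.asarray(xyz, dtype=float) / np.asarray(white)
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(xyz1, xyz2):
    """CIE76 color difference between two XYZ measurements."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)
```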
Miniature self-contained vacuum compatible electronic imaging microscope
Naulleau, Patrick P.; Batson, Phillip J.; Denham, Paul E.; Jones, Michael S.
2001-01-01
A vacuum compatible CCD-based microscopic camera with an integrated illuminator. The camera can provide video or still feed from a microscope contained within a vacuum chamber. Activation of an optional integral illuminator can provide light to illuminate the microscope subject. The microscope camera comprises a housing with an objective port, modified objective, beam splitter, CCD camera, and LED illuminator.
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing comprises three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) measurement is non-contact and non-invasive; (2) it can be carried out using almost any camera, including webcams; (3) it tracks the subject within the scene, which allows measurement of the heart rate while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
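Stage (3), the FFT analysis and pulse calculation, can be sketched as follows: the input is assumed to be the mean brightness of the segmented skin region sampled at the camera frame rate, and the physiological band limits (45-240 bpm) are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def pulse_rate_bpm(brightness, fps, lo_hz=0.75, hi_hz=4.0):
    """Pulse rate [beats/min] as the dominant spectral peak of the
    mean skin brightness signal within a physiological band."""
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                                  # remove DC component
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```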
2007-08-03
KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Commander Pamela A. Melroy, Pilot George D. Zamka, Mission Specialist Scott E. Parazynski (back to camera), Mission Specialist Douglas H. Wheelock and Mission Specialist Paolo A. Nespoli (holding camera), a European Space Agency astronaut from Italy, are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton
NASA Technical Reports Server (NTRS)
Abou-Khousa, M. A.
2009-01-01
A novel modulated slot design has been proposed and tested. The proposed slot aims to replace the inefficient small dipoles used in conventional MST-based imaging systems. The developed slot is very attractive as an MST array element due to its small size and high efficiency/modulation depth. In fact, the developed slot has been successfully used to implement the first prototype of a microwave camera operating at 24 GHz. It is also being used in the design of the second generation of the camera. Finally, the designed elliptical slot can be used as an electronically controlled waveguide iris for many other purposes, for instance in constructing waveguide reflective phase shifters and multiplexers/switches.
[Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].
Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei
2016-02-01
We previously reported that an infrared camera enables observation of iris morphology through edematous corneas in Peters' anomaly. The aim of this study was to observe iris morphology in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women; mean age ± SD, 72.7 ± 13.0 years) were enrolled in this study. Iris morphology was observed using the visible light mode and near-infrared light mode of an infrared camera (MeibomPen). The detectability of pupil shape, iris pattern, and presence of iridectomy was evaluated. Infrared mode observation enabled us to detect the pupil shape in 11 of 11 cases, the iris pattern in 3 of 11 cases, and the presence of iridectomy in 9 of 11 cases, whereas visible light mode observation could not detect any iris morphological features. Infrared optics proved valuable for observing iris morphology through stromal edematous corneas.
Optical touch sensing: practical bounds for design and performance
NASA Astrophysics Data System (ADS)
Bläßle, Alexander; Janbek, Bebart; Liu, Lifeng; Nakamura, Kanna; Nolan, Kimberly; Paraschiv, Victor
2013-02-01
Touch sensitive screens are used in many applications ranging in size from smartphones and tablets to display walls and collaborative surfaces. In this study, we consider optical touch sensing, a technology best suited for large-scale touch surfaces. Optical touch sensing utilizes cameras and light sources placed along the edge of the display. Within this framework, we first find the number of cameras sufficient for identifying a convex polygon touching the screen, using a continuous light source on the boundary of a circular domain. We then find the number of cameras necessary to distinguish between two circular objects in a circular or rectangular domain. Finally, we use Matlab to simulate the polygonal mesh formed by distributing cameras and light sources on a circular domain. From this, we compute the number of polygons in the mesh and the maximum polygon area, which characterize the accuracy of the configuration. We close with a summary, conclusions, and pointers to possible future research directions.
Development of the compact infrared camera (CIRC) for Earth observation
NASA Astrophysics Data System (ADS)
Naitoh, Masataka; Katayama, Haruyoshi; Harada, Masatomo; Nakamura, Ryoko; Kato, Eri; Tange, Yoshio; Sato, Ryota; Nakau, Koji
2017-11-01
The Compact Infrared Camera (CIRC) is an instrument equipped with an uncooled infrared array detector (microbolometer). We adopted the microbolometer because it does not require a cooling system such as a mechanical cooler, together with athermal optics, which do not require active thermal control. This reduces the size, cost, and electrical power consumption of the sensor. The main mission of the CIRC is to demonstrate technology for detecting wildfires, which are major and chronic disasters affecting many countries in the Asia-Pacific region. Taking advantage of the CIRC's small size and light weight, the observational frequency of wildfires can be increased by carrying CIRCs on various satellites. We have developed two CIRCs. The first will be launched in JFY 2013 onboard the Advanced Land Observing Satellite-2 (ALOS-2), and the second will be launched in JFY 2014 onboard the CALorimetric Electron Telescope (CALET) on the Japanese Experiment Module (JEM) of the International Space Station (ISS). We have finished the ground calibration of the first CIRC onboard ALOS-2. In this paper, we provide an overview of the CIRC and the results of its ground calibration.
Girshovitz, Pinhas; Frenklach, Irena; Shaked, Natan T
2015-11-01
We propose a new portable imaging configuration that can double the field of view (FOV) of existing off-axis interferometric imaging setups, including broadband off-axis interferometers. This configuration is attached at the output port of the off-axis interferometer and optically creates a multiplexed interferogram on the digital camera, which is composed of two off-axis interferograms with straight fringes at orthogonal directions. Each of these interferograms contains a different FOV of the imaged sample. Due to the separation of these two FOVs in the spatial-frequency domain, they can be fully reconstructed separately, while obtaining two complex wavefronts from the sample at once. Since the optically multiplexed off-axis interferogram is recorded by the camera in a single exposure, fast dynamics can be recorded with a doubled imaging area. We used this technique for quantitative phase microscopy of biological samples with extended FOV. We demonstrate attaching the proposed module to a diffractive phase microscopy interferometer, illuminated by a broadband light source. The biological samples used for the experimental demonstrations include microscopic diatom shells, cancer cells, and flowing blood cells.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The f-number matching characteristics of the main lens and microlenses are used as an additional constraint for the calibration. The geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained by a high-precision thermocouple, and the difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
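The full reconstruction solves a least-squares system along the traced rays, but the radiometric core is the inversion of Planck's law for temperature. A single-wavelength sketch, with the emissivity treatment as an assumption:

```python
import numpy as np

C1 = 3.7418e-16  # first radiation constant [W m^2]
C2 = 1.4388e-2   # second radiation constant [m K]

def temperature_from_radiance(emittance, wavelength_m, emissivity=1.0):
    """Invert Planck's law, M = C1 / (lam^5 * (exp(C2 / (lam * T)) - 1)),
    for temperature T [K] given spectral emittance at one wavelength."""
    m = np.asarray(emittance, dtype=float) / emissivity
    return C2 / (wavelength_m * np.log(C1 / (wavelength_m**5 * m) + 1.0))
```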
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-01-01
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps use LED and halogen sources. This paper builds on recent research in IHC. Several problems arise in headlight detection from CCD or CMOS images, such as erroneous detection of street lights, sign lights, and reflections from the ego-car. To solve these problems, this study uses hyperspectral images, because their hundreds of bands provide more information than a CCD or CMOS camera. Recent methods to detect headlights have used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen). PMID:27399720
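Of the three mappers cited, the Spectral Angle Mapper is the simplest: it scores each pixel by the angle between its spectrum and a reference spectrum (e.g., of an LED headlamp), so small angles indicate a match. A minimal sketch, with the reference spectrum assumed to be known:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """SAM score: angle [rad] between a pixel spectrum and a reference."""
    p = np.asarray(pixel, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = p.dot(r) / (np.linalg.norm(p) * np.linalg.norm(r) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```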
Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.
2002-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600- to 1000-nm region of the spectrum, successfully provides daytime aspect information of approx. 10 arcsec resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models used to design the camera, but the daytime stellar magnitude limit was lower than expected due to longitudinal chromatic aberration in the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise to provide arbitrary re-focusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique of recording light fields with an adaptive depth of focus. Between multiple exposures, or multiple recordings of the light field, the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
NASA Astrophysics Data System (ADS)
Morison, Ian
2017-02-01
1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
A new apparatus of infrared videopupillography for monitoring pupil size
NASA Astrophysics Data System (ADS)
Ko, M.-L.; Huang, T.-W.; Chen, Y.-Y.; Sone, B.-S.; Huang, Y.-C.; Jeng, W.-D.; Chen, Y.-T.; Hsieh, Y.-F.; Tao, K.-H.; Li, S.-T.; Ou-Yang, M.; Chiou, J.-C.
2013-09-01
Glaucoma is generally diagnosed or tracked via intraocular pressure (IOP), one of the physiological parameters associated with the disease. However, IOP measurement is neither easy nor consistent under different measurement conditions. Infrared videopupillography monitors pupil size in an attempt to bypass direct IOP measurement. This paper proposes an infrared videopupillography for monitoring pupil size under different light stimuli in a dark room. The portable infrared videopupillography contains a camera, a beam splitter, visible-light LEDs for stimulating the eyes, and infrared LEDs for illuminating the eyes. It is lighter and smaller than existing products, can be adjusted for different eye positions, and can be mounted on any eyeglass frame. An analysis program evaluates the pupil diameter by image correlation. In our experiments, the pupil diameter curves were jagged rather than smooth, caused by light spots, stray eyelashes, and blinking. In future work, we will improve the pupil-size analysis program and seek an approach to eliminate the LED light spots. We hope the infrared videopupillography proposed in this paper can serve as a measurement platform to explore the relations between different diseases and the pupil response.
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. The camera captures the 4D light field (angular and spatial information of light) on a limited 2D sensor, which reduces the 2D spatial resolution because of the inevitable 2D angular data. This is why a refocus image has a low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so that the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that the sub-pixel spatial information differs according to the depth of objects from the camera. Thus, upon selection of refocused regions (corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
Mask-to-wafer alignment system
Sweatt, William C.; Tichenor, Daniel A.; Haney, Steven J.
2003-11-04
A modified beam splitter that has a hole pattern that is symmetric in one axis and anti-symmetric in the other can be employed in a mask-to-wafer alignment device. The device is particularly suited for rough alignment using visible light. The modified beam splitter transmits and reflects light from a source of electromagnetic radiation, and it includes a substrate that has a first surface facing the source of electromagnetic radiation and a second surface that is reflective of said electromagnetic radiation. The substrate defines a hole pattern about a central line of the substrate. In operation, an input beam from a camera is directed toward the modified beam splitter, and the light from the camera that passes through the holes illuminates the reticle on the wafer. The light beam from the camera also projects an image of a corresponding reticle pattern formed on the mask surface that is positioned downstream from the camera. Alignment can be accomplished by detecting the radiation that is reflected from the second surface of the modified beam splitter, since the reflected radiation contains both the image of the pattern from the mask and the corresponding pattern on the wafer.
Compact battery-less information terminal (CoBIT) for location-based support systems
NASA Astrophysics Data System (ADS)
Nishimura, Takuichi; Itoh, Hideo; Yamamoto, Yoshinobu; Nakashima, Hideyuki
2002-06-01
The goal of a ubiquitous computing environment is to support users in getting necessary information and services in a situation-dependent form. We therefore propose a location-based information support system using the Compact Battery-less Information Terminal (CoBIT). A CoBIT can communicate with the environmental system and with the user using only energy supplied by the environment. It has a solar cell that receives modulated light from an environmental optical beam transmitter. The current from the solar cell is fed directly (or through a passive circuit) into an earphone, which generates sound for the user. The current can also be used to produce vibration, LED signals, or electrical stimulation of the skin. A CoBIT is about 2 cm in diameter and 3 cm in length, and can conveniently be hung on the ear. Mass-produced, it would cost only about 1 dollar. The CoBIT also has a sheet-type corner reflector, which reflects the optical beam back in the direction of the light source. The environmental system can therefore easily detect the terminal's position and direction, as well as simple signs from the user, using multiple cameras with infrared LEDs. The system identifies a sign by the modulated patterns of the reflected light, which the user produces by occluding the reflector with a hand. The environmental system also recognizes other objects using other sensors and displays video information on a nearby monitor to realize situated support.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video camera based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine the range to the target.
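The background-elimination step described above amounts to frame differencing followed by a centroid estimate of the surviving pixels. A minimal sketch, with the threshold and names as illustrative assumptions:

```python
import numpy as np

def laser_spot_centroid(frame_before, frame_laser, thresh=30):
    """Centroid (x, y) of the laser spot found by subtracting a frame
    taken before laser illumination from one taken with it; returns
    None if no pixel brightens by more than the threshold."""
    diff = frame_laser.astype(int) - frame_before.astype(int)
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()  # feeds the disparity/ranging analysis
```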
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
Streak camera imaging of single photons at telecom wavelength
NASA Astrophysics Data System (ADS)
Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine
2018-01-01
Streak cameras are powerful tools for the temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
The innovations with the medical applications of white LEDs and the breakthrough for new business
NASA Astrophysics Data System (ADS)
Shimada, Jun-ichi; Itoh, Kazuhiro; Nishimura, Motohiro; Kawakami, Youichi; Tsuji, Kiyotsugu
2006-02-01
The distance between the LED and the surface of the target organ is about 4-5 cm, and we think this will become the "ultimate super-localized LED lighting". In an experiment with swine, we placed an LED module at the tip of the retractor. Compared to endoscopic lighting, this method illuminated the entire thoracic cavity more brightly. Since the light is emitted from the cylinder-shaped camera component, the light is unidirectional, and the shadows from the surgical instruments are moved to the side of the incision. The retractor LED lights provided enough light in the thoracic cavity. We believe that "medical white LEDs" can contribute in clinical settings as a light source for performing safe operations with bright surgical fields in the near future. We also use our LEDs in new commercial applications. In the summer of 2004, LED lighting was used for the first time in the history of the 1,200-year-old Gion Festival, as "a lighting device that does not destroy cultural assets with light-induced heat". Next came lighting of the Deva statue at the Deva gate and of the Thousand-Armed Avalokiteshwara in the innermost sanctuary of the main hall at Kiyomizudera in Kyoto. It was a great success, and we were invited back in the spring of 2005 and for future applications. We think this is the first real application of LEDs as an outdoor lighting device. About 4,000,000 people visit Kiyomizudera annually, and LEDs were adopted to illuminate the Deva gate.
Spacecraft hazard avoidance utilizing structured light
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey
2006-01-01
At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects using a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated based on triangulation.
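Since the laser source and camera form a triangulation baseline, the per-beam range computation reduces to a few lines. A hedged Python sketch, assuming each beam's pixel position at infinite range has been calibrated beforehand (function and parameter names are illustrative):

    import numpy as np

    def beam_ranges(spots_px, infinity_px, baseline_m, focal_px):
        # Each detected grid intersection shifts from its infinite-range
        # position by a disparity inversely proportional to distance.
        disparity = np.linalg.norm(spots_px - infinity_px, axis=1)  # (N,) px
        return baseline_m * focal_px / disparity                    # metres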
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used remote control through a ground control system connected over a radio frequency (RF) modem at a bandwidth of about 430 MHz. However, this method of using an RF modem has limitations in long-distance communication. We therefore implemented a UAV communication module that uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system comprises an image-capturing device for the drone in areas that need image capture and software for loading and managing the smart camera. The system is composed of automatic shooting, which uses the smart camera's sensors, and shooting catalog management, which manages the captured images and their information. The UAV imagery was processed with OpenDroneMap. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Source Computer Vision), RTKLIB, and OpenDroneMap.
ERIC Educational Resources Information Center
Petzold, Paul
The amateur movie camera differs from a still camera on several important points. The author explores these differences and discusses the various ways they may be used to advantage. He describes in detail the workings of basic equipment--cameras, exposure meters, lenses, films, and lights--and demonstrates the proper use of each. Techniques such…
Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla
2014-01-01
The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections, as well as on crashes resulting from these behaviors. ARIMA time series analyses of crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property-damage-only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red-light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions, so further monitoring is required to improve the delivery of this measure. Results from this study as well as its limitations are discussed.
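As a rough illustration of an intervention-style ARIMA analysis (a plausible reading of the study's method, not its actual code; the data, model order, and installation month below are invented):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical monthly right-angle crash counts, 1994-2008 (180 months).
    rng = np.random.default_rng(1)
    crashes = pd.Series(rng.poisson(20, 180).astype(float))

    # Step intervention: 1.0 from the assumed camera-installation month on.
    exog = pd.DataFrame({"cameras_on": (np.arange(180) >= 108).astype(float)})

    # The exogenous coefficient estimates the post-installation level shift.
    result = ARIMA(crashes, order=(1, 0, 1), exog=exog).fit()
    print(result.params["cameras_on"])  # negative => fewer crashes post-camera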
SU-E-J-17: A Study of Accelerator-Induced Cerenkov Radiation as a Beam Diagnostic and Dosimetry Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, F; Tosh, R
2014-06-01
Purpose: To investigate accelerator-induced Cerenkov radiation imaging as a possible beam diagnostic and medical dosimetry tool. Methods: Cerenkov emission produced by clinical accelerator beams in a water phantom was imaged using a camera system comprised of a high-sensitivity thermoelectrically-cooled CCD camera coupled to a large aperture (f/0.75) objective lens with 16:1 magnification. This large format lens allows a significant amount of the available Cerenkov light to be collected and focused onto the CCD camera to form the image. Preliminary images, obtained with 6 MV photon beams, used an unshielded camera mounted horizontally with the beam normal to the water surface, and confirmed the detection of Cerenkov radiation. Several improvements were subsequently made including the addition of radiation shielding around the camera, and altering of the beam and camera angles to give a more favorable geometry for Cerenkov light collection. A detailed study was then undertaken over a range of electron and photon beam energies and dose rates to investigate the possibility of using this technique for beam diagnostics and dosimetry. Results: A series of images were obtained at a fixed dose rate over a range of electron energies from 6 to 20 MeV. The location of maximum intensity was found to vary linearly with the energy of the beam. A linear relationship was also found between the light observed from a fixed point on the central axis and the dose rate for both photon and electron beams. Conclusion: We have found that the analysis of images of beam-induced Cerenkov light in a water phantom has potential for use as a beam diagnostic and medical dosimetry tool. Our future goals include the calibration of the light output in terms of radiation dose and development of a tomographic system for 3D Cerenkov imaging in water phantoms and other media.
Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF
Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto
2015-01-01
One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance on the far-infrared (FIR) spectrum than on the visible one for pedestrian classification, even in daytime conditions. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions-of-interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as light regions, so ROI are detected with a high recall rate using a hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification using an SVM; it enhances precision at low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost-cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on FIR images. PMID:25871724
Clementine Observes the Moon, Solar Corona, and Venus
NASA Technical Reports Server (NTRS)
1997-01-01
In 1994, during its flight, the Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide angle images of the Moon.
In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame.
Optoelectronic System Measures Distances to Multiple Targets
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Abramovici, Alexander; Bartman, Randall; Chapsky, Jacob; Schmalz, John; Coste, Keith; Litty, Edward; Lam, Raymond; Jerebets, Sergei
2007-01-01
An optoelectronic metrology apparatus now at the laboratory-prototype stage of development is intended to repeatedly determine distances of as much as several hundred meters, at submillimeter accuracy, to multiple targets in rapid succession. The underlying concept of optoelectronic apparatuses that can measure distances to targets is not new; such apparatuses are commonly used in general surveying and machining. However, until now such apparatuses have been, variously, constrained to (1) a single target or (2) multiple targets with a low update rate and a requirement for some a priori knowledge of target geometry. When fully developed, the present apparatus would enable measurement of distances to more than 50 targets at an update rate greater than 10 Hz, without a requirement for a priori knowledge of target geometry. The apparatus (see figure) includes a laser ranging unit (LRU) that includes an electronic camera (photo receiver), the field of view of which contains all relevant targets. Each target, mounted at a fiducial position on an object of interest, consists of a small lens at the output end of an optical fiber that extends from the object of interest back to the LRU. For each target and its optical fiber, there is a dedicated laser that is used to illuminate the target via the optical fiber. The targets are illuminated, one at a time, with laser light that is modulated at a frequency of 10.01 MHz. The modulated laser light is emitted by the target, from where it returns to the camera (photodetector), where it is detected. Both the outgoing and incoming 10.01-MHz laser signals are mixed with a 10-MHz local-oscillator to obtain beat notes at 10 kHz, and the difference between the phases of the beat notes is measured by a phase meter. This phase difference serves as a measure of the total length of the path traveled by light going out through the optical fiber and returning to the camera (photodetector) through free space. Because the portion of the path length inside the optical fiber is not ordinarily known and can change with temperature, it is also necessary to measure the phase difference associated with this portion and subtract it from the aforementioned overall phase difference to obtain the phase difference proportional to only the free-space path length, which is the distance that one seeks to measure. Therefore, the apparatus includes a photodiode and a circulator that enable measurement of the phase difference associated with propagation from the LRU inside the fiber to the target, reflection from the fiber end, and propagation back inside the fiber to the LRU. Because this phase difference represents twice the optical path length of the fiber, this phase difference is divided in two before subtraction from the aforementioned total-path-length phase difference. Radiation-induced changes in the photodetectors in this apparatus can affect the measurements. To enable calibration for the purpose of compensation for these changes, the apparatus includes an additional target at a known short distance, located inside the camera. If the measured distance to this target changes, then the change is applied to the other targets.
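The phase arithmetic described above is compact enough to state directly. A minimal Python sketch (the 10.01 MHz figure comes from the description; the ~30 m phase ambiguity is noted in a comment but not resolved here):

    import numpy as np

    C = 299_792_458.0     # speed of light in vacuum, m/s
    F_MOD = 10.01e6       # modulation frequency from the description, Hz

    def free_space_distance(phase_total_rad, phase_fiber_roundtrip_rad):
        # The reflection measurement covers the fiber twice, so halve it,
        # subtract it from the total out-and-back phase, and scale by the
        # modulation wavelength (c / F_MOD, about 30 m: one cycle per
        # wavelength, so ranges are only unambiguous modulo ~30 m).
        phase_free = phase_total_rad - phase_fiber_roundtrip_rad / 2.0
        return (phase_free / (2.0 * np.pi)) * (C / F_MOD)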
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, i.e. its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariable to changes in ground targets, atmosphere, and environment on orbit or on the ground, and depends only on the camera itself, is extracted using a pixel optical focal-plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF in combination with a constrained least-squares filter compensates for the IMTF, removing the imaging degradation caused by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5 times, the edge intensity 3.3 times, and the MTF value 1.56 times compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
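A constrained least-squares restoration of the kind named above can be sketched in a few lines of Python. This is a generic textbook CLS filter, not the authors' pipeline; it assumes the IMTF is sampled on the image's FFT grid and, since an MTF carries no phase, treats the system as zero-phase:

    import numpy as np

    def cls_restore(image, mtf, gamma=0.01):
        G = np.fft.fft2(image)
        H = mtf.astype(np.complex128)        # zero-phase assumption
        # Frequency response of the discrete Laplacian smoothness constraint.
        lap = np.zeros_like(image, dtype=float)
        lap[0, 0] = 4
        lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
        P = np.fft.fft2(lap)
        F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
        return np.real(np.fft.ifft2(F))

The regularization weight gamma trades ringing suppression against sharpness; gamma = 0 reduces this to plain inverse filtering by the IMTF.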
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a standard black and white video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
Foale in Base Block with camera
1997-11-03
STS086-405-008 (25 Sept-6 Oct 1997) --- Astronaut C. Michael Foale, sporting attire representing the STS-86 crew after four months aboard Russia's Mir Space Station, operates a video camera in Mir's Base Block Module. Photo credit: NASA
Wakata and Barratt with cameras at SM window
2009-04-19
ISS019-E-008935 (19 April 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Koichi Wakata (left) and NASA astronaut Michael Barratt, both Expedition 19/20 flight engineers, use still cameras at a window in the Zvezda Service Module of the International Space Station.
Line drawing Scientific Instrument Module and lunar orbital science package
NASA Technical Reports Server (NTRS)
1970-01-01
A line drawing of the Scientific Instrument Module (SIM) with its lunar orbital science package. The SIM will be mounted in a previously vacant sector of the Apollo Service Module. It will carry specialized cameras and instrumentation for gathering lunar orbit scientific data.
Radar based autonomous sensor module
NASA Astrophysics Data System (ADS)
Styles, Tim
2016-10-01
Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
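The range-Doppler step the module relies on is standard and easy to sketch. A hedged Python fragment (generic FMCW-style processing, not the SAPIENT code; the data layout is an assumption):

    import numpy as np

    def range_doppler_map(iq_cube):
        # iq_cube: (n_pulses, n_samples) complex baseband samples.
        n_pulses, n_samples = iq_cube.shape
        rng_fft = np.fft.fft(iq_cube * np.hanning(n_samples), axis=1)
        dopp = np.fft.fft(rng_fft * np.hanning(n_pulses)[:, None], axis=0)
        rd = np.fft.fftshift(dopp, axes=0)        # centre zero Doppler
        return 20 * np.log10(np.abs(rd) + 1e-12)  # dB map: range vs Doppler

Moving objects then appear as peaks away from the zero-Doppler column, which is what makes detection reliable against static clutter.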
Cameras on the moon with Apollos 15 and 16.
NASA Technical Reports Server (NTRS)
Page, T.
1972-01-01
Description of the cameras used for photography and television by Apollo 15 and 16 missions, covering a hand-held Hasselblad camera for black and white panoramic views at locations visited by the astronauts, a special stereoscopic camera designed by astronomer Tom Gold, a 16-mm movie camera used on the Apollo 15 and 16 Rovers, and several TV cameras. Details are given on the far-UV camera/spectrograph of the Apollo 16 mission. An electronographic camera converts UV light to electrons which are ejected by a KBr layer at the focus of an f/1 Schmidt camera and darken photographic films much more efficiently than far-UV. The astronomical activity of the Apollo 16 astronauts on the moon, using this equipment, is discussed.
Basic Photography; A Primer for Professionals. Second Edition.
ERIC Educational Resources Information Center
Langford, Michael J.
In this textbook, which was written for the professional photography student, both photographic theory and practice are thoroughly explained. The author examines the principles of light and the properties of lenses and gives a detailed evaluation of camera movement, camera shutters, and the camera as a whole. He outlines the manufacture and…
Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision
ERIC Educational Resources Information Center
Prull, Matthew W.; Banks, William P.
2005-01-01
We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…
Improved spatial resolution of luminescence images acquired with a silicon line scanning camera
NASA Astrophysics Data System (ADS)
Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.
2018-04-01
Luminescence imaging is currently used to provide spatially resolved defect information in high-volume silicon solar cell production. One option for achieving the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function. The paper extends the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, allowing effective detection of spatially resolved defects in manufacturing.
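The deconvolution step can be sketched with a standard Richardson-Lucy iteration (one common choice; the paper does not prescribe this particular algorithm), with the PSF normalized to unit sum as the text emphasizes:

    import numpy as np
    from scipy.signal import fftconvolve

    def deconvolve_rl(image, psf, n_iter=25):
        psf = psf / psf.sum()                 # normalization step from the text
        psf_mirror = psf[::-1, ::-1]
        estimate = np.clip(image.astype(float), 1e-6, None)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate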
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
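The displacement estimation at the heart of the cross-correlation step can be stated briefly. A generic FFT-based correlator in Python (illustrative; works for 2-D windows or the 3-D interrogation volumes used here):

    import numpy as np

    def window_displacement(win_a, win_b):
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        # Cross-correlate via the FFT and locate the peak.
        corr = np.fft.ifftn(np.fft.fftn(a) * np.conj(np.fft.fftn(b))).real
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return center - np.array(peak)    # displacement in pixels/voxels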
Improvements in low-cost label-free QPI microscope for live cell imaging
NASA Astrophysics Data System (ADS)
Seniya, C.; Towers, C. E.; Towers, D. P.
2017-07-01
This paper reports an improvement in the development of a low-cost QPI microscope offering new capabilities in terms of phase measurement accuracy for label-free live samples over the longer term (i.e., hours to days). The spatially separated scattered and non-scattered image light fields are reshaped in the Fourier plane and modulated to form an interference image at a CCD camera. The apertures that enable these two beams to be generated have been optimised by means of laser-cut apertures placed on the mirrors of a Michelson interferometer, which has improved the phase measuring and reconstruction capability of the QPI microscope. The microscope was tested with transparent onion cells as the object of interest.
Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H
2009-09-15
We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
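The IOS computation itself is a one-liner once trial averages exist. A minimal Python sketch (the array shapes and function name are assumptions, not the authors' software):

    import numpy as np

    def ios_map(stim_frames, baseline_frames):
        # (n_frames, height, width) stacks: average trials, then take the
        # fractional reflectance change dR/R relative to baseline.
        base = baseline_frames.mean(axis=0).astype(float)
        stim = stim_frames.mean(axis=0).astype(float)
        return (stim - base) / np.maximum(base, 1e-9)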
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Improved linearity using harmonic error rejection in a full-field range imaging system
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2008-02-01
Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
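The ideal four-sample demodulation the paper starts from is worth writing out, since it is exactly this arctangent that odd harmonics corrupt. A Python sketch of the textbook AMCW case (pure-sinusoid assumption):

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def amcw_range(a0, a1, a2, a3, f_mod_hz):
        # Four samples taken a quarter of a modulation period apart.
        phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)
        # Factor 4*pi because the light travels out and back.
        return C * phase / (4 * np.pi * f_mod_hz)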
Intramolecular co-action of two independent photosensory modules in the fern phytochrome 3.
Kanegae, Takeshi
2015-01-01
Fern phytochrome3/neochrome1 (phy3/neo1) is a chimeric photoreceptor composed of a phytochrome-chromophore binding domain and an almost full-length phototropin. phy3 thus contains two different light-sensing modules: a red/far-red light receptor phytochrome and a blue light receptor phototropin. phy3 induces both red light- and blue light-dependent phototropism in phototropin-deficient Arabidopsis thaliana (phot1 phot2) seedlings. The red-light response is dependent on the phytochrome module of phy3, and the blue-light response is dependent on the phototropin module. We recently showed that both the phototropin-sensing module and the phytochrome-sensing module mediate the blue light-dependent phototropic response. Particularly under low-light conditions, these two light-sensing modules cooperate to induce the blue light-dependent phototropic response. This intramolecular co-action of two independent light-sensing modules in phy3 enhances light sensitivity, and perhaps allowed ferns to adapt to the low-light canopy conditions present in angiosperm forests.
Alignment Test Results of the JWST Pathfinder Telescope Mirrors in the Cryogenic Environment
NASA Technical Reports Server (NTRS)
Whitman, Tony L.; Wells, Conrad; Hadaway, James; Knight, J. Scott; Lunt, Sharon
2016-01-01
After integration of the Optical Telescope Element (OTE) to the Integrated Science Instrument Module (ISIM) to become the OTIS, the James Webb Space Telescope OTIS is tested at NASA's Johnson Space Center (JSC) in the cryogenic vacuum Chamber A for alignment and optical performance. The alignment of the mirrors comprises a sequence of steps as follows: The mirrors are coarsely aligned using photogrammetry cameras with reflective targets attached to the sides of the mirrors. Then a multi-wavelength interferometer is aligned to the 18-segment primary mirror using cameras at the center of curvature to align reflected light from the segments and using fiducials at the edge of the primary mirror. Once the interferometer is aligned, the 18 primary mirror segments are then adjusted to optimize wavefront error of the aggregate mirror. This process phases the piston and tilt positions of all the mirror segments. An optical fiber placed at the Cassegrain focus of the telescope then emits light towards the secondary mirror to create a collimated beam emerging from the primary mirror. Portions of the collimated beam are retro-reflected from flat mirrors at the top of the chamber to pass through the telescope to the SI detector. The image on the detector is used for fine alignment of the secondary mirror and a check of the primary mirror alignment using many of the same analysis techniques used in the on-orbit alignment. The entire process was practiced and evaluated in 2015 at cryogenic temperature with the Pathfinder telescope.
NASA Astrophysics Data System (ADS)
Yang, Yi; Moore, Anna M.; Krisciunas, Kevin; Wang, Lifan; Ashley, Michael C. B.; Fu, Jianning; Brown, Peter J.; Cui, Xiangqun; Feng, Long-Long; Gong, Xuefei; Hu, Zhongwen; Lawrence, Jon S.; Luong-Van, Daniel; Riddle, Reed L.; Shang, Zhaohui; Sims, Geoff; Storey, John W. V.; Suntzeff, Nicholas B.; Tothill, Nick; Travouillon, Tony; Yang, Huigen; Yang, Ji; Zhou, Xu; Zhu, Zhenxi
2017-07-01
The summit of the Antarctic plateau, Dome A, is proving to be an excellent site for optical, near-infrared, and terahertz astronomical observations. Gattini is a wide-field camera installed on the PLATO instrument module as part of the Chinese-led traverse to Dome A in 2009 January. We present here the measurements of sky brightness with the Gattini ultra-large field of view (90° × 90°) in the photometric B-, V-, and R-bands; cloud cover statistics measured during the 2009 winter season; and an estimate of the sky transparency. A cumulative probability distribution indicates that the darkest 10% of the nights at Dome A have sky brightness of S_B = 22.98, S_V = 21.86, and S_R = 21.68 mag arcsec^-2. These values were obtained during the year 2009 with minimum aurora, and they are comparable to the faintest sky brightness at Maunakea and the best sites of northern Chile. Since every filter includes strong auroral lines that effectively contaminate the sky brightness measurements, for instruments working around the auroral lines, either with custom filters or with high spectral resolution instruments, these values could be easily obtained on a more routine basis. In addition, we present example light curves for bright targets to emphasize the unprecedented observational window function available from this ground-based site. These light curves will be published in a future paper.
High dynamic spectroscopy using a digital micromirror device and periodic shadowing.
Kristensson, Elias; Ehn, Andreas; Berrocal, Edouard
2017-01-09
We present an optical solution called DMD-PS to boost the dynamic range of 2D imaging spectroscopic measurements up to 22 bits by incorporating a digital micromirror device (DMD) prior to detection in combination with the periodic shadowing (PS) approach. In contrast to high dynamic range (HDR), where the dynamic range is increased by recording several images at different exposure times, the current approach has the potential of improving the dynamic range from a single exposure and without saturation of the CCD sensor. In the procedure, the spectrum is imaged onto the DMD that selectively reduces the reflection from the intense spectral lines, allowing the signal from the weaker lines to be increased by a factor of 28 via longer exposure times, higher camera gains or increased laser power. This manipulation of the spectrum can either be based on a priori knowledge of the spectrum or by first performing a calibration measurement to sense the intensity distribution. The resulting benefits in detection sensitivity come, however, at the cost of strong generation of interfering stray light. To solve this issue the Periodic Shadowing technique, which is based on spatial light modulation, is also employed. In this proof-of-concept article we describe the full methodology of DMD-PS and demonstrate - using the calibration-based concept - an improvement in dynamic range by a factor of ~100 over conventional imaging spectroscopy. The dynamic range of the presented approach will directly benefit from future technological development of DMDs and camera sensors.
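The calibration-based reconstruction amounts to undoing a known per-pixel attenuation. A hedged Python sketch (names are illustrative; rescaling for any exposure or gain change between the calibration and measurement frames is ignored):

    import numpy as np

    def recover_spectrum(measured, dmd_reflectance):
        # Each spectral pixel was dimmed by a known DMD reflectance in
        # (0, 1]; dividing restores the true relative line intensities.
        r = np.clip(dmd_reflectance, 1e-4, 1.0)   # avoid dividing by ~0
        return measured / r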
Window Observational Rack Facility (WORF)
NASA Technical Reports Server (NTRS)
2002-01-01
Developed by Boeing, at the Marshall Space Flight Center (MSFC) Space Station Manufacturing building, the Window Observational Rack Facility (WORF) will help Space Station crews take some of the best photographs ever snapped from an orbiting spacecraft by eliminating glare and allowing researchers to control their cameras and other equipment from the ground. The WORF is designed to make the best possible use of the high-quality research window in the Space Station's U.S. Destiny laboratory module. Engineers at the MSFC proposed a derivative of the EXPRESS (Expedite the Processing of Experiments to the Space Station) Rack already used on the Space Station and were given the go-ahead. The EXPRESS rack can hold a wide variety of experiments and provide them with power, communications, data, cooling, fluids, and other utilities - all the things that Earth-observing experiment instruments would need. WORF will supply payloads with power, data, cooling, video downlink, and stable, standardized interfaces for mounting imaging instruments. Similar to specialized orbital observatories, the interior of the rack is sealed against light and coated with a special low-reflectant black paint, so payloads will be able to observe low-light-level subjects such as the faint glow of auroras. Cameras and remote sensing instruments in the WORF can be preprogrammed, controlled from the ground, or operated by a Station crewmember by using a flexible shroud designed to cinch tightly around the crewmember's waist. The WORF is scheduled to be launched aboard the STS-114 Space Shuttle mission in the year 2003.
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina (focusing or not), reflects off, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Fabrication of multi-focal microlens array on curved surface for wide-angle camera module
NASA Astrophysics Data System (ADS)
Pan, Jun-Gu; Su, Guo-Dung J.
2017-08-01
In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. In order to make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and used a replication method to form the curved hexagonal microlens array.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition, which use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, among the various available methods, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction, in order to overcome the limitations of traditional hand-designed feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
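The final matching step is plain distance comparison in feature space. A minimal Python sketch (the cosine distance and threshold are assumptions; any CNN body-image embedding would slot in):

    import numpy as np

    def verify(probe_feat, enrolled_feats, threshold=0.5):
        p = probe_feat / np.linalg.norm(probe_feat)
        g = enrolled_feats / np.linalg.norm(enrolled_feats, axis=1, keepdims=True)
        d = 1.0 - g @ p                     # cosine distance to each enrollee
        best = int(np.argmin(d))
        return best if d[best] < threshold else None   # None => rejected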
Laser-Induced-Fluorescence Photogrammetry and Videogrammetry
NASA Technical Reports Server (NTRS)
Danehy, Paul; Jones, Tom; Connell, John; Belvin, Keith; Watson, Kent
2004-01-01
An improved method of dot-projection photogrammetry and an extension of the method to encompass dot-projection videogrammetry overcome some deficiencies of dot-projection photogrammetry as previously practiced. The improved method makes it possible to perform dot-projection photogrammetry or videogrammetry on targets that have previously not been amenable to dot-projection photogrammetry because they do not scatter enough light. Such targets include ones that are transparent, specularly reflective, or dark. In standard dot-projection photogrammetry, multiple beams of white light are projected onto the surface of an object of interest (denoted the target) to form a known pattern of bright dots. The illuminated surface is imaged in one or more cameras oriented at a nonzero angle or angles with respect to a central axis of the illuminating beams. The locations of the dots in the image(s) contain stereoscopic information on the locations of the dots, and, hence, on the location, shape, and orientation of the illuminated surface of the target. The images are digitized and processed to extract this information. Hardware and software to implement standard dot-projection photogrammetry are commercially available. Success in dot-projection photogrammetry depends on achieving sufficient signal-to-noise ratios: that is, it depends on scattering of enough light by the target so that the dots as imaged in the camera(s) stand out clearly against the ambient-illumination component of the image of the target. In one technique used previously to increase the signal-to-noise ratio, the target is illuminated by intense, pulsed laser light and the light entering the camera(s) is band-pass filtered at the laser wavelength. Unfortunately, speckle caused by the coherence of the laser light engenders apparent movement in the projected dots, thereby giving rise to errors in the measurement of the centroids of the dots and corresponding errors in the computed shape and location of the surface of the target. The improved method is denoted laser-induced-fluorescence photogrammetry.
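The dot-extraction step common to all these variants can be sketched compactly. A generic Python fragment (illustrative only; the band-pass filtering at the laser or fluorescence wavelength happens optically, upstream of this):

    import numpy as np
    from scipy import ndimage

    def dot_centroids(image, thresh):
        # Suppress ambient background, label the remaining blobs, and take
        # intensity-weighted centroids for the photogrammetric solve.
        fg = np.where(image > thresh, image, 0.0)
        labels, n = ndimage.label(fg > 0)
        return ndimage.center_of_mass(fg, labels, index=range(1, n + 1))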
1998-12-07
S88-E-5057 (12-07-98) --- Astronaut James H. Newman, waves at camera as he holds onto one of the hand rails on the Unity connecting module during the early stages of a 7-hour, 21-minute spacewalk. Astronauts Newman and Jerry L. Ross, both mission specialists, went on to mate 40 cables and connectors running 76 feet from the Zarya control module to Unity, with the 35-ton complex towering over Endeavour's cargo bay. This photo was taken with an electronic still camera (ESC) at 23:37:40 GMT, Dec. 7.
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
Contrasted with other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and is more accurate than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.
Clementine Observes the Moon, Solar Corona, and Venus
1999-06-12
In 1994, during its flight, NASA's Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide angle images of the Moon. In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame. http://photojournal.jpl.nasa.gov/catalog/PIA00434
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Bartram, Scott M.
2001-01-01
A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
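The median-filter validation mentioned above is commonly realized as the universal median test; assuming that form (the abstract does not spell it out), a short Python sketch for one velocity component:

    import numpy as np
    from scipy.ndimage import median_filter

    def median_validate(u, tol=2.0, eps=0.1):
        # Flag vectors deviating from the 3x3 neighbourhood median by more
        # than tol times the local median residual (plus a noise floor eps).
        med = median_filter(u, size=3)
        resid = np.abs(u - med)
        r_med = median_filter(resid, size=3)
        return resid > tol * (r_med + eps)   # True where a vector is suspect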
Deep-Sea Video Cameras Without Pressure Housings
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2004-01-01
Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If power-supply regulators or filter capacitors were needed, these could be attached in chip form directly onto the back of, and potted with, the imager chip. Because CMOS imagers dissipate little power, the potting would not result in overheating. To minimize the cost of the camera, a fixed lens could be fabricated as part of the plastic case. For improved optical performance at greater cost, an adjustable glass achromatic lens would be mounted in a reservoir that would be filled with transparent oil and subject to the full hydrostatic pressure, and the reservoir would be mounted on the case to position the lens in front of the image sensor. The lens would be adjusted for focus by use of a motor inside the reservoir (oil-filled motors already exist).
Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera
ERIC Educational Resources Information Center
Fuhrman, Nicholas E.
2016-01-01
Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…
View of Scientific Instrument Module to be flown on Apollo 15
NASA Technical Reports Server (NTRS)
1971-01-01
Close-up view of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 mission. Mounted in a previously vacant sector of the Apollo Service Module, the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data.
NASA Astrophysics Data System (ADS)
Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.
2003-07-01
We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard boardcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.
Voss in hatch at aft end of Service module
2001-03-22
ISS002-E-5702 (22 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, translates through the forward hatch of the Zvezda Service Module. The image was recorded with a digital still camera.
Voss in Service module with cycle ergometer
2001-03-23
ISS002-E-5732 (23 March 2001) --- James S. Voss, Expedition Two flight engineer, prepares to exercise on the cycle ergometer in the Zvezda Service Module. The image was taken with a digital still camera.
Usachev on cycle ergometer in Service Module
2001-04-27
ISS002-E-6136 (27 April 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, exercises on the cycle ergometer in the Zvezda Service Module. The image was taken with a digital still camera.
Usachev tests Vozdukh in Service module
2001-05-11
ISS002-E-6111 (11 May 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, tests the Vozdukh Air Purification System in the Zvezda Service Module. The image was taken with a digital still camera.
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
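The Kalman-like update reduces, per pixel, to inverse-variance fusion of the running estimate with each new micro-image observation. A minimal sketch, assuming each observation arrives as a virtual depth plus a variance (variable names are mine, not the authors'):

```python
def fuse_depth(d_est, var_est, d_obs, var_obs):
    """Kalman-style inverse-variance fusion of two depth hypotheses."""
    k = var_est / (var_est + var_obs)    # gain: weight the less noisy estimate
    d_new = d_est + k * (d_obs - d_est)  # updated virtual depth
    var_new = (1.0 - k) * var_est        # fused variance only shrinks
    return d_new, var_new
```

Applying this update repeatedly over all micro-images that see the same scene patch yields the probabilistic depth map described in the abstract.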
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding are widely used in image taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there is no image point variation with temperature change, in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with macro function, utilizing the advantage of a plastic lens whose outer flange can be given a mechanically functional shape. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. Therefore this camera module is manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and technical topics of the image taking objectives described above.
NASA Astrophysics Data System (ADS)
Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh
2016-03-01
We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology. The 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase shifted interferogram is decomposed into its red, green and blue constituent colors, thus producing three sets of five phase shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time-consuming and less affected by the surrounding environment. Initially, 3D phase maps of the bacteria are reconstructed for red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications for detection of contaminants in water without using any chemical processing or fluorescent dyes.
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible relative to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be achieved in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code by synchronizing the frame captures and illumination time at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
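The subtraction step can be stated compactly. Below is a minimal sketch, assuming the frames arrive tagged with a known 0/1 LED on/off code; the pairing and averaging strategy are illustrative, not the authors' exact pipeline.

```python
import numpy as np

def active_component(frames, code):
    """frames: (N, H, W) stack; code: length-N 0/1 LED on/off sequence.
    'Off' frames sample ambient light alone, so subtracting their mean
    from the 'on' frames isolates the contribution of the controlled LED."""
    code = np.asarray(code, dtype=bool)
    active = frames[code].mean(axis=0)    # LED + ambient
    ambient = frames[~code].mean(axis=0)  # ambient only
    return active - ambient               # attributable to the illuminant
```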
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera to capture data and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
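The abstract does not spell out the noise model or solver; as a generic stand-in, the sketch below solves y = Ax under a signal-dependent (Poisson-Gaussian-like) variance var = alpha*signal + sigma^2 by iteratively reweighted least squares, assuming the weighted normal equations are well conditioned.

```python
import numpy as np

def irls_reconstruct(A, y, alpha=0.01, sigma=1.0, iters=10):
    """Noise-aware reconstruction: weight each coded-aperture measurement
    by the inverse of its estimated signal-dependent variance."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        var = alpha * np.maximum(A @ x, 0.0) + sigma**2  # per-measurement variance
        w = 1.0 / var
        Aw = A * w[:, None]                              # row-weighted system
        x = np.linalg.solve(Aw.T @ A, Aw.T @ y)          # A^T W A x = A^T W y
    return x
```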
Confocal retinal imaging using a digital light projector with a near infrared VCSEL source
NASA Astrophysics Data System (ADS)
Muller, Matthew S.; Elsner, Ann E.
2018-02-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance for the microscope's light colour temperature. Although this type of error can be easily handled during the stage of image acquisition, it can also be eliminated with the use of colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the settings of the camera is done as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples have been selected for analysis. Each field of view has been acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images have been corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity and visual assessment by a viewer. For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels from the CIELab colour space, luminance L, and local colour variability for the objects' specific area) have been calculated. The results have been averaged over the 6 images acquired under the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) the process of white balance correction for images collected with white balance camera settings not matched to the light temperature moves the image descriptors into the proper chromatic space, but simultaneously the value of luminance changes. Thus the process of image unification, in the sense of colour fidelity, can be solved in a separate introductory stage before the automatic image analysis.
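For readers who want to reproduce the descriptor computation, a minimal sketch follows; skimage's rgb2lab is my assumed tool, since the paper does not name its implementation.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_descriptors(rgb_image):
    """Range, median, mean and standard deviation of L, a and b
    (CIELab) for one field of view; expects float RGB in [0, 1]."""
    lab = rgb2lab(rgb_image)
    stats = {}
    for name, ch in zip("Lab", np.moveaxis(lab, -1, 0)):
        stats[name] = {"range": (float(ch.min()), float(ch.max())),
                       "median": float(np.median(ch)),
                       "mean": float(ch.mean()),
                       "std": float(ch.std())}
    return stats
```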
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
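The classification step amounts to a nearest-neighbour search under cosine similarity over the Gabor feature vectors. A minimal sketch (feature extraction and gallery construction are outside the snippet; names are mine):

```python
import numpy as np

def cosine_nn(query, gallery, labels):
    """Return the label of the gallery vector most similar to the query;
    maximum cosine similarity equals minimum cosine distance."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return labels[int(np.argmax(g @ q))]
```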
System selects framing rate for spectrograph camera
NASA Technical Reports Server (NTRS)
1965-01-01
In this circuit, zero-order light from the incoming radiation of a spectrograph monitor is reflected to a photomultiplier to provide an error signal which controls the rate at which film is advanced and driven through the camera.
C-RED One and C-RED 2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array, which is a truly disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera is called C-RED 2; its characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, R.F.
1983-10-18
An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
Optical Indoor Positioning System Based on TFT Technology.
Gőzse, István
2015-12-24
A novel indoor positioning system is presented in the paper. Similar to camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized with simple and inexpensive elements, the overall cost of the system can be kept low.
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, Robert F.
1987-01-01
An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, R.F.
1987-03-10
An apparatus is disclosed for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously. 3 figs.
Rushford, Michael C.
1988-01-01
A pinhole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten metal in an atomic vapor laser isotope separation (AVLIS) system, is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficiently to make it receivable and decipherable by a standard video camera. To accomplish this, the assembly utilizes the combination of an interference filter and a liquid crystal notch filter. The latter, which preferably includes a cholesteric liquid crystal arrangement, is configured to pass light at all wavelengths except a relatively narrow wavelength band which defines the filter's notch, and includes means for causing the notch to vary, to at least a limited extent, with the intensity of light at its light incidence surface.
Smartphone Based Platform for Colorimetric Sensing of Dyes
NASA Astrophysics Data System (ADS)
Dutta, Sibasish; Nath, Pabitra
We demonstrate the working of a smartphone based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is allowed to transmit through a specific dye solution. The transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable and lightweight, making it an ideal handy sensor suitable for different on-field sensing tasks.
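The quoted 0.345 nm/pixel dispersion implies a simple linear pixel-to-wavelength map once two reference lines are identified. A sketch under assumed reference values (the pixel positions and line wavelengths below are illustrative, not from the paper):

```python
def pixel_to_wavelength(pixel, ref_px=(120, 700), ref_nm=(436.0, 636.0)):
    """Linear wavelength calibration from two known spectral lines;
    the example values give (636-436)/(700-120) ~ 0.345 nm/pixel."""
    slope = (ref_nm[1] - ref_nm[0]) / (ref_px[1] - ref_px[0])
    return ref_nm[0] + slope * (pixel - ref_px[0])
```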
Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma
NASA Astrophysics Data System (ADS)
Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.
2016-03-01
The supercontinuum laser light source has many advantages over other light sources, including broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR-CCD InGaAs camera detector (Goodrich Sensors Inc. high-response camera SU320KTSW-1.7RT with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.
Astronaut Vance Brand seen in hatchway leading to Apollo Docking module
NASA Technical Reports Server (NTRS)
1975-01-01
Astronaut Vance D. Brand, command module pilot of the American Apollo Soyuz Test Project (ASTP) crew, is seen in the hatchway leading from the Apollo Command Module (CM) into the Apollo Docking Module (DM) during the joint U.S.-USSR ASTP docking mission in Earth orbit. The 35mm camera is looking from the DM into the CM.
NASA Astrophysics Data System (ADS)
Abookasis, David; Lay, Christopher C.; Mathews, Marlon S.; Linskey, Mark E.; Frostig, Ron D.; Tromberg, Bruce J.
2009-03-01
We describe a technique that uses spatially modulated near-infrared (NIR) illumination to detect and map changes in both optical properties (absorption and reduced scattering parameters) and tissue composition (oxy- and deoxyhemoglobin, total hemoglobin, and oxygen saturation) during acute ischemic injury in the rat barrel cortex. Cerebral ischemia is induced using an open vascular occlusion technique of the middle cerebral artery (MCA). Diffuse reflected NIR light (680 to 980 nm) from the left parietal somatosensory cortex is detected by a CCD camera before and after MCA occlusion. Monte Carlo simulations are used to analyze the spatial frequency dependence of the reflected light to predict spatiotemporal changes in the distribution of tissue absorption and scattering properties in the brain. Experimental results from seven rats show a 17+/-4.7% increase in tissue concentration of deoxyhemoglobin and a 45+/-3.1, 23+/-5.4, and 21+/-2.2% decrease in oxyhemoglobin, total hemoglobin concentration, and cerebral tissue oxygen saturation levels, respectively, 45 min following induction of cerebral ischemia. An ischemic index (I_isch = ctHHb/ctO2Hb) reveals an average of more than twofold contrast after MCA occlusion. The wavelength dependence of the reduced scattering (i.e., scatter power) decreased by 35+/-10.3% after MCA occlusion. Compared to conventional CCD-based intrinsic signal optical imaging (ISOI), the use of structured illumination and model-based analysis allows for generation of separate maps of light absorption and scattering properties as well as tissue hemoglobin concentration. This potentially provides a powerful approach for quantitative monitoring and imaging of neurophysiology and metabolism with high spatiotemporal resolution.
Concept of electro-optical sensor module for sniper detection system
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz
2010-10-01
The paper presents an initial concept of an electro-optical sensor unit for sniper detection purposes. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. Being part of a larger system, it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Generally, acoustic (or radar) systems used for shot detection offer only "after-the-shot" information, and they cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspect objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier to comprehend for a human operator. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.
Kinematic control of male Allen's Hummingbird wing trill over a range of flight speeds.
Clark, Christopher J; Mistick, Emily A
2018-05-18
Wing trills are pulsed sounds produced by modified wing feathers at one or more specific points in time during a wingbeat. Male Allen's Hummingbirds (Selasphorus sasin) produce a sexually dimorphic 9 kHz wing trill in flight. Here we investigate the kinematic basis for trill production. The wingtip velocity hypothesis posits that trill production is modulated by the airspeed of the wingtip at some point during the wingbeat, whereas the wing rotation hypothesis posits that trill production is instead modulated by wing rotation kinematics. To test these hypotheses, we flew six male Allen's Hummingbirds in an open jet wind tunnel at flight speeds of 0, 3, 6, 9, 12 and 14 m s⁻¹, and recorded their flight with two 'acoustic cameras' placed below and behind, or below and lateral to, the flying bird. The acoustic cameras are phased arrays of 40 microphones that use beamforming to spatially locate sound sources within a camera image. Trill Sound Pressure Level (SPL) exhibited a U-shaped relationship with flight speed in all three camera positions. SPL was greatest perpendicular to the stroke plane. Acoustic camera videos suggest that the trill is produced during supination. The trill was up to 20 dB louder during maneuvers than during steady state flight in the wind tunnel, across all airspeeds tested. These data provide partial support for the wing rotation hypothesis. Altered wing rotation kinematics could allow male Allen's Hummingbirds to modulate trill production in social contexts such as courtship displays. © 2018. Published by The Company of Biologists Ltd.
2007-08-03
KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Pilot George D. Zamka, Commander Pamela A. Melroy, Mission Specialist Scott E. Parazynski (holding camera) and Mission Specialist Douglas H. Wheelock are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton
Traffic Light Detection Using Conic Section Geometry
NASA Astrophysics Data System (ADS)
Hosseinyalmdary, S.; Yilmaz, A.
2016-06-01
Traffic light detection and state recognition is a crucial task that autonomous vehicles must reliably fulfill. Despite scientific endeavors, it is still an open problem due to the variations of traffic lights and their perception in image form. Unlike previous studies, this paper investigates the use of inaccurate and publicly available GIS databases such as OpenStreetMap. In addition, we are the first to exploit conic section geometry to improve the shape cue of the traffic lights in images. Conic section geometry also enables us to estimate the pose of the traffic lights with respect to the camera. Our approach can detect multiple traffic lights in the scene, is able to detect traffic lights in the absence of prior knowledge, and detects traffic lights as far away as 70 meters. The proposed approach has been evaluated for different scenarios, and the results show that the use of stereo cameras significantly improves the accuracy of traffic light detection and pose estimation.
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
Effects on Training Using Illumination in Virtual Environments
NASA Technical Reports Server (NTRS)
Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian
1999-01-01
Camera based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high contrast shadowing and glare, are a factor in performance. Computer based training using virtual environments is a common tool used to make and keep CTW members proficient. If computer based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer based training increases proficiency if one trains for a camera based task using computer generated virtual environments with enhanced lighting conditions, such as shadows and glare, rather than the color shaded computer images normally used in simulators. Previous experiments were conducted using a two degree of freedom docking system. Test subjects had to align a boresight camera using a hand controller with one axis of translation and one axis of rotation. Two sets of subjects were trained on two computer simulations using computer generated virtual environments, one with lighting and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.
Usachev takes notes in Service Module
2001-03-26
ISS002-E-5773 (28 March 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, scribbles down some notes at the computer workstation in the Zvezda Service Module. The image was taken with a digital still camera.
Method to implement the CCD timing generator based on FPGA
NASA Astrophysics Data System (ADS)
Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin
2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented in the end.
From the Pinhole Camera to the Shape of a Lens: The Camera-Obscura Reloaded
ERIC Educational Resources Information Center
Ziegler, Max; Priemer, Burkhard
2015-01-01
We demonstrate how the form of a plano-convex lens and a derivation of the thin lens equation can be understood through simple physical considerations. The basic principle is the extension of the pinhole camera using additional holes. The resulting images are brought into coincidence through the deflection of light with an arrangement of prisms.…
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.
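The abstract does not give the MicroBlaze-side math; a plausible pinhole-model sketch is shown below, in which two reference markers of known separation yield range from their pixel separation and bearing from their midpoint. The focal length, marker spacing and principal point are made-up example values, not parameters from the paper.

```python
import math

def distance_and_angle(p1, p2, f_px=1200.0, spacing_m=0.5, cx=640.0):
    """p1, p2: (x, y) pixel centroids of two reference markers."""
    d_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    distance = f_px * spacing_m / d_px        # similar triangles
    mid_x = 0.5 * (p1[0] + p2[0])
    angle = math.atan2(mid_x - cx, f_px)      # bearing off the optical axis
    return distance, math.degrees(angle)
```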
Imaging Dot Patterns for Measuring Gossamer Space Structures
NASA Technical Reports Server (NTRS)
Dorrington, A. A.; Danehy, P. M.; Jones, T. W.; Pappa, R. S.; Connell, J. W.
2005-01-01
A paper describes a photogrammetric method for measuring the changing shape of a gossamer (membrane) structure deployed in outer space. Such a structure is typified by a solar sail comprising a transparent polymeric membrane aluminized on its Sun-facing side and coated black on the opposite side. Unlike some prior photogrammetric methods, this method does not require an artificial light source or the attachment of retroreflectors to the gossamer structure. In a basic version of the method, the membrane contains a fluorescent dye, and the front and back coats are removed in matching patterns of dots. The dye in the dots absorbs some sunlight and fluoresces at a longer wavelength in all directions, thereby enabling acquisition of high-contrast images from almost any viewing angle. The fluorescent dots are observed by one or more electronic camera(s) on the Sun side, the shade side, or both sides. Filters that pass the fluorescent light and suppress most of the solar spectrum are placed in front of the camera(s) to increase the contrast of the dots against the background. The dot image(s) in the camera(s) are digitized, then processed by use of commercially available photogrammetric software.
Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph
2015-01-01
The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 °C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics, resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.
Pan, Jui-Wen; Tu, Sheng-Han
2012-05-20
A cost-effective, high-throughput, and high-yield method for the efficiency enhancement of an optical mouse lighting module is proposed. We integrated imprinting technology and free-form surface design to obtain a lighting module with high illumination efficiency and uniform intensity distribution. The imprinting technique can increase the light extraction efficiency and modulate the intensity distribution of light-emitting diodes. A modulated light source was utilized to add a compact free-form surface element to create a lighting module with 95% uniformity and 80% optical efficiency.
Human tracking over camera networks: a review
NASA Astrophysics Data System (ADS)
Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang
2017-12-01
In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.
View of model of Scientific Instrument Module to be flown on Apollo 15
NASA Technical Reports Server (NTRS)
1970-01-01
Close-up view of a scale model of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 mission. Mounted in a previously vacant sector of the Apollo service module, the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data.
Liu, Jun; Wang, Jian
2015-07-06
We present a simple configuration incorporating a single polarization-sensitive phase-only liquid crystal spatial light modulator (LC-SLM) to facilitate polarization-insensitive spatial light modulation. The polarization-insensitive configuration is formed by a polarization beam splitter (PBS), a polarization-sensitive phase-only LC-SLM, a half-wave plate (HWP), and a mirror in a loop structure. We experimentally demonstrate polarization-insensitive spatial light modulations for incident linearly polarized beams with different polarization states and polarization-multiplexed beams. Polarization-insensitive spatial light modulations generating orbital angular momentum (OAM) beams are demonstrated in the experiment. The designed polarization-insensitive configuration may find promising applications in spatial light modulations accommodating diverse incident polarizations.
Light in flight photography and applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Faccio, Daniele
2017-02-01
The first successful attempts (Abramson) at capturing light in flight relied on the holographic interference between the "object" beam scattered from a screen and a short reference pulse propagating at an angle, acting as an ultrafast shutter. This interference pattern was recorded on a photographic plate or film and allowed the visualisation of light as it propagated through complex environments with unprecedented temporal and spatial resolution. More recently, advances in ultrafast camera technology, and in particular the use of picosecond-resolution streak cameras, allowed the direct digital recording of a light pulse propagating through a plastic bottle (Raskar et al.). This represented a remarkable step forward, as it provided the first ever video recording (in the traditional sense with which one intends a video, i.e. something that can be played back directly on a screen and saved in digital format) of a pulse of light in flight. We will discuss a different technology that is based on an imaging camera with a pixel array in which each individual pixel is a single photon avalanche diode (SPAD). SPADs offer both sensitivity to single photons and picosecond temporal resolution of the photon arrival time (with respect to a trigger event). When adding imaging capability, SPAD arrays can deliver videos of a light pulse propagating in free space, without the need for a scattering medium or diffuser as in all previous work (Gariepy et al.). This capability can then be harnessed for a variety of applications. We will discuss the details of SPAD camera detection of moving objects (e.g. human beings) that are hidden from view and then conclude with a discussion of future perspectives in the field of bio-imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsef, G
Purpose: To build a test device for HDR afterloaders capable of checking source positions, times at positions, and estimating the activity of the source. Methods: A catheter is taped to a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot in each image of the video provides the source location, and the time stamps of the images provide the dwell time the source spends in each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the scale of the image to centimeters, eliminate the distortion caused by the oblique viewing angle, identify the boundaries of the light spot, transform the image into binary form, and detect and calculate the source motion, positions and times. The images are much less noisy if the camera is shielded. That requires that the light spot be monitored in a mirror, rather than directly. The whole assembly is covered from external light and has a size of approximately 17×35×25 cm (H×L×W). Results: A cheap camera in BW mode proved to be sufficient with a plastic scintillator sheet. The best images were produced by a 3 mm thick sheet with a ZnS:Ag surface coating. The shielding of the camera decreased the noise but could not eliminate it. A test run even in noisy conditions resulted in approximately 1 mm and 1 sec differences from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR Brachytherapy units.
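The frame-analysis loop lends itself to a short sketch: threshold each frame, take the bright-spot centroid as the source position, and accumulate dwell time while the centroid stays within tolerance. The threshold, pixel-to-cm scale and tolerance below are illustrative assumptions, not values from the authors' code.

```python
import numpy as np

def track_source(frames, timestamps, px_per_cm=20.0, thresh=0.5, tol_cm=0.1):
    """frames: iterable of 2-D arrays; timestamps: seconds per frame."""
    positions, dwells, last_t = [], [], None
    for img, t in zip(frames, timestamps):
        mask = img > thresh * img.max()          # binary light-spot mask
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            continue                             # no spot in this frame
        pos = (xs.mean() / px_per_cm, ys.mean() / px_per_cm)  # centroid, cm
        if positions and last_t is not None \
                and abs(pos[0] - positions[-1][0]) < tol_cm \
                and abs(pos[1] - positions[-1][1]) < tol_cm:
            dwells[-1] += t - last_t             # source still at this dwell
        else:
            positions.append(pos)                # new dwell position
            dwells.append(0.0)
        last_t = t
    return positions, dwells
```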
NASA Astrophysics Data System (ADS)
Brahme, Anders; Lind, Bengt K.
2002-04-01
Radiation therapy is today in a state of very rapid development, with new intensity modulated treatment techniques continuously being developed. This has made intensity modulated electron and photon beams almost as powerful as conventional uniform beam proton therapy. To be able to cure also the most advanced hypoxic and radiation resistant tumors of complex local spread, intensity modulated light ion beams are really the ultimate tool, and only slightly more expensive than proton therapy. The aim of the new center for ion therapy and tumor diagnostics in Stockholm is to develop radiobiologically optimized 3-dimensional pencil beam scanning techniques. Besides the "classical" approaches using low ionization density hydrogen ions (protons, but also deuterons and tritium nuclei) and high ionization density carbon ions, two new approaches will be developed. In the first one, lithium or beryllium ions, which induce the least detrimental biological effect to normal tissues for a given biological effect in a small volume of the tumor, will be the key particles. In the second approach, referred patients will be given a high-dose high-precision "boost" treatment with carbon or oxygen ions during one week preceding the final treatment with conventional radiations in the referring hospital. The rationale behind these approaches is to reduce the high ionization density dose to the normal tissue stroma inside the tumor and to ensure a microscopically uniform dose delivery. The principal idea of the center is to closely integrate ion therapy into the clinical routine and research of a large radiotherapy department. The light ion therapy center will therefore be combined with advanced tumor diagnostics, including MR and PET-CT imaging, to facilitate efficient high-precision high-dose boost treatment of remitted patients. The possibility to do 3D tumor diagnostics and 3D dose delivery verification with the same PET camera will be the ultimate step in high quality adaptive radiation therapy, where alterations in the delivered dose can be corrected by subsequent treatments.
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, a convolutional neural network (CNN), among the various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
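The matching stage described last is, in essence, distance scoring over CNN features with the two camera modalities fused. A hedged sketch follows; the Euclidean metric and the fusion weight are my assumptions, since the abstract only states that distances between input and enrolled samples are measured.

```python
import numpy as np

def match_identity(vis_feat, thm_feat, gallery_vis, gallery_thm, w=0.5):
    """Return the index of the enrolled identity with the lowest fused
    distance over visible-light and thermal CNN feature vectors."""
    d_vis = np.linalg.norm(gallery_vis - vis_feat, axis=1)
    d_thm = np.linalg.norm(gallery_thm - thm_feat, axis=1)
    fused = w * d_vis + (1.0 - w) * d_thm    # score-level fusion
    return int(np.argmin(fused))
```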
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Prochaska, Travis; Shectman, Stephen A.; Hammond, Randolph P.; Barkhouser, Robert H.; DePoy, D. L.; Marshall, J. L.
2012-09-01
We describe the conceptual optomechanical design for GMACS, a wide-field, multi-object, moderate-resolution optical spectrograph for the Giant Magellan Telescope (GMT). GMACS is a candidate first-light instrument for the GMT and will be one of several instruments housed in the Gregorian Instrument Rotator (GIR) located at the Gregorian focus. The instrument samples a 9 arcminute x 18 arcminute field of view providing two resolution modes (i.e., low resolution, R ~ 2000, and moderate resolution, R ~ 4000) over a 3700 Å to 10200 Å wavelength range. To minimize the size of the optics, four fold mirrors at the GMT focal plane redirect the full field into four individual "arms", each of which comprises a double spectrograph with a red and a blue channel. Hence, each arm samples a 4.5 arcminute x 9 arcminute field of view. The optical layout naturally leads to three separate optomechanical assemblies: a focal plane assembly, and two identical optics modules. The focal plane assembly contains the last element of the telescope's wide-field corrector, slit-mask, tent-mirror assembly, and slit-mask magazine. Each of the two optics modules supports two of the four instrument arms and houses the aft-optics (i.e., collimators, dichroics, gratings, and cameras). A grating exchange mechanism and articulated gratings and cameras facilitate multiple resolution modes. In this paper we describe the details of the GMACS optomechanical design, including the requirements and considerations leading to the design, mechanism details, optics mounts, and predicted flexure performance.
NASA Astrophysics Data System (ADS)
Zhang, Bing; Li, Kunyang
2018-02-01
The “Breakthrough Starshot” program aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects with a transrelativistic camera may allow one to study astronomical objects in a special way and to perform unique tests of the principles of special relativity. We outline several examples suggesting that transrelativistic cameras may make important contributions to astrophysics, and note that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
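The "spectrograph, lens, and wide-field camera" behavior follows directly from the standard relativistic aberration and Doppler formulas. A short sketch of that physics (generic special relativity, not the authors' simulation code):

```python
import numpy as np

def transrelativistic_view(theta_lab, lam_lab, beta):
    """Aberration and Doppler shift for a camera moving at speed beta
    (units of c) along +z. theta_lab is the angle between the incoming
    light and +z in the source (lab) frame; lam_lab its rest wavelength."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    # Aberration: sources bunch toward the direction of motion (wide-field/lens effect).
    cos_cam = (np.cos(theta_lab) + beta) / (1.0 + beta * np.cos(theta_lab))
    theta_cam = np.arccos(cos_cam)
    # Doppler: light from ahead is blueshifted (spectrograph effect).
    lam_cam = lam_lab / (gamma * (1.0 + beta * np.cos(theta_lab)))
    return theta_cam, lam_cam

# A 550 nm source 60 degrees off-axis, seen from a camera at 0.2c:
theta, lam = transrelativistic_view(np.radians(60.0), 550e-9, 0.2)
print(np.degrees(theta), lam)   # ~50.5 degrees, ~490 nm (blueshifted)
```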
Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.
2012-07-01
The plenoptic camera has been proposed as an alternative wavefront sensor suited to extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, such as depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to a telescope by placing a lenslet array of the same f-number at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature, and pyramid wavefront sensors, in the sense that all of them can be considered particular cases of the plenoptic wavefront sensor, because the information they take as a starting point can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates, and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.
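As a rough illustration of how Shack-Hartmann-style information can be recovered from a plenoptic frame, here is a hedged sketch that treats each lenslet subimage as a subaperture and measures its centroid shift against a flat-wavefront reference frame; the geometry and scaling of a real plenoptic sensor are considerably more involved.

```python
import numpy as np

def subimage_slopes(plenoptic, reference, n_lens):
    """Crude slope map from a plenoptic frame: split the detector into
    n_lens x n_lens lenslet subimages and take each subimage's centroid
    shift relative to a flat-wavefront reference frame. The result is
    proportional to the local wavefront gradient per lenslet."""
    h, w = plenoptic.shape
    sy, sx = h // n_lens, w // n_lens
    yy, xx = np.mgrid[0:sy, 0:sx]
    slopes = np.zeros((n_lens, n_lens, 2))
    for i in range(n_lens):
        for j in range(n_lens):
            for sign, img in ((+1, plenoptic), (-1, reference)):
                sub = img[i*sy:(i+1)*sy, j*sx:(j+1)*sx].astype(float)
                m = sub.sum() or 1.0                        # avoid divide-by-zero
                slopes[i, j] += sign * np.array([(sub*yy).sum()/m,
                                                 (sub*xx).sum()/m])
    return slopes
```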
Gidzenko in Service Module with laptop computers
2001-03-30
ISS-01-E-5070 (December 2000) --- Astronaut Yuri P. Gidzenko, Expedition One Soyuz commander, works with computers in the Zvezda or Service Module aboard the Earth-orbiting International Space Station (ISS). The picture was taken with a digital still camera.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of the plenoptic function and the light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
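The refocusing idea the tutorial covers can be stated in a few lines: shift each sub-aperture view in proportion to its angular offset and average. A minimal integer-shift sketch (real implementations interpolate, or work in the Fourier domain):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing sketch. light_field has shape (U, V, S, T):
    U x V angular samples, S x T spatial samples. alpha rescales the focal
    plane; each sub-aperture view is shifted in proportion to its (u, v)
    offset from the central view, then all views are averaged."""
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Usage: refocus(lf, alpha=1.0) reproduces the captured focus plane;
# alpha != 1.0 moves the synthetic focus nearer or farther.
```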
Performance analysis and enhancement for visible light communication using CMOS sensors
NASA Astrophysics Data System (ADS)
Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Fang, Liangtao; Liu, Xiaowei; Chen, Yingcong
2018-03-01
Complementary metal-oxide-semiconductor (CMOS) sensors are widely used in mobile phones and cameras. Hence, it is attractive if these cameras can be used as the receivers of visible light communication (VLC). Using the rolling shutter mechanism can increase the data rate of CMOS-camera-based VLC, and different techniques have been proposed to improve the demodulation of the rolling shutter signal. However, these techniques are too complex. In this work, we demonstrate and analyze the performance of a VLC link using a CMOS camera with different LED luminaires, for the first time to our knowledge. Experimental evaluations comparing their bit-error-rate (BER) performance and demodulation are also performed. The results show that simply switching to an LED luminaire with more uniform light output removes the blooming effect, which not only reduces the complexity of the demodulation but also enhances the communication quality. In addition, we propose and demonstrate the use of contrast limited adaptive histogram equalization to extend the transmission distance and mitigate the influence of background noise. The experimental results show that the BER can be decreased by an order of magnitude using the proposed method.
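A hedged sketch of the enhancement step, using OpenCV's CLAHE on a synthetic low-contrast stripe frame standing in for a captured rolling-shutter image (clip limit, tile size, and the crude bit slicer are illustrative choices, not the paper's):

```python
import cv2
import numpy as np

# Synthetic stand-in for a rolling-shutter frame: faint horizontal stripes
# (the LED's on/off pattern smeared across rows) plus background noise.
rows = np.tile((np.arange(480) // 8 % 2) * 20 + 100, (640, 1)).T.astype(np.uint8)
frame = cv2.add(rows, np.random.randint(0, 10, rows.shape, dtype=np.uint8))

# CLAHE boosts local stripe contrast so faint stripes at long distance or
# under background light survive thresholding.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(frame)

column = enhanced.mean(axis=1)                     # collapse across the stripe direction
bits = (column > column.mean()).astype(np.uint8)   # crude per-row bit slicing
```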
Perez-Mendez, V.
1997-01-21
A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.
Perez-Mendez, Victor
1997-01-01
A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.
Demonstration of a vectorial optical field generator with adaptive closed-loop control.
Chen, Jian; Kong, Lingjiang; Zhan, Qiwen
2017-12-01
We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with adaptive closed-loop control. The closed-loop capability is illustrated with the calibration of the polarization modulation of the system. To calibrate the polarization ratio modulation, we generate a 45° linearly polarized beam and propagate it through a linear analyzer whose transmission axis is orthogonal to the incident polarization. For the retardation calibration, a circularly polarized beam is employed, and a circular polarization analyzer of the opposite chirality is placed in front of the CCD detector. In both cases, the closed-loop control automatically sweeps the corresponding calibration parameters over pre-set ranges, generates the phase patterns applied to the spatial light modulators, and records the intensity distribution of the output beam with the CCD camera. The optimized calibration parameters are those minimizing the total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and full Stokes parameter measurements are carried out to quantitatively analyze the polarization distributions of the generated beams. The comparisons clearly show that the obtained calibration parameters markedly improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beams with large ellipticity, indicating the significance of the presented closed loop in enhancing the performance of the VOF-Gen.
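The closed-loop logic reduces to an argmin over a swept calibration parameter. A sketch with hypothetical hardware hooks apply_pattern() and capture() (the VOF-Gen's actual control interface is not described in this abstract):

```python
import numpy as np

def calibrate(apply_pattern, capture, param_range):
    """Closed-loop calibration sketch: sweep a calibration parameter over a
    pre-set range, drive the SLM with the corresponding phase pattern, and
    keep the value minimizing total intensity behind the crossed analyzer."""
    best_p, best_intensity = None, np.inf
    for p in param_range:
        apply_pattern(p)                  # hypothetical: write phase pattern to the SLM
        intensity = float(capture().sum())  # hypothetical: total intensity on the CCD
        if intensity < best_intensity:
            best_p, best_intensity = p, intensity
    return best_p
```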
Opportunistic traffic sensing using existing video sources (phase II).
DOT National Transportation Integrated Search
2017-02-01
The purpose of the project reported here was to investigate methods for automatic traffic sensing using traffic surveillance cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...
Image quality evaluation of color displays using a Foveon color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for quality control (QC) and quality analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, although an imaging geometry of 17.6 appears likely to provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. The modulation transfer function (MTF) as well as the noise, in terms of the noise power spectrum (NPS), of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding the MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
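The same-exposure subtraction mentioned at the end is a standard estimator: differencing two identically exposed frames cancels the fixed (spatial) pattern, leaving twice the temporal variance. A minimal sketch:

```python
import numpy as np

def split_noise(img_a, img_b):
    """Separate temporal from spatial noise using two frames captured at the
    same exposure. Var(a - b) cancels the fixed spatial pattern, so it equals
    2 * var_temporal; the remainder of the total variance is spatial."""
    a, b = img_a.astype(float), img_b.astype(float)
    var_temporal = np.var(a - b) / 2.0
    var_total = (np.var(a) + np.var(b)) / 2.0
    var_spatial = max(var_total - var_temporal, 0.0)
    return var_spatial, var_temporal
```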
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video and images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (roughly 10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed characterization of cloud features below the 4-km and 8-km resolutions of ISS-LIS and GLM that may reduce the light reaching the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. The rate of change in the geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS, and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features such as leaders could be inferred from the video frames as well. Testing is underway to determine whether leader speeds can be accurately calculated under certain circumstances.
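Of the processing steps above, the frame-overlay rate-of-change estimate is the easiest to sketch. The threshold and frame rate below are hypothetical, and this is not the project's open-source toolkit, just the core idea:

```python
import numpy as np

def light_change_rate(frames, fps, threshold):
    """Difference consecutive video frames and report the rate of change of
    the lit cloud-top area. frames: (N, H, W) grayscale array; threshold is
    a hypothetical brightness cutoff separating lit cloud from background."""
    lit = frames > threshold
    areas = lit.reshape(len(frames), -1).sum(axis=1)  # lit pixels per frame
    return np.diff(areas) * fps                       # pixels per second
```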
Non-uniform refractive index field measurement based on light field imaging technique
NASA Astrophysics Data System (ADS)
Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong
2018-02-01
In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect four-dimensional light field data; the data are then decoded according to the light field imaging principle to obtain image sequences corresponding to different viewing angles through the refractive index field. Subsequently, a PIV (particle image velocimetry) technique is used to extract the ray offset in each image. Finally, the distribution of the non-uniform refractive index field is calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors synchronously collecting data from multiple angles, the method proposed in this paper needs only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measures the distribution of the refractive index field above the flame of an alcohol lamp.
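The PIV step amounts to locating a cross-correlation peak per interrogation window. A minimal sketch using SciPy (window tiling, sub-pixel peak fitting, and the final deflection-to-index inversion are omitted):

```python
import numpy as np
from scipy.signal import correlate2d

def window_shift(ref_win, dist_win):
    """PIV-style ray-offset estimate for one interrogation window: the peak
    of the cross-correlation between the undistorted reference view and the
    view through the refractive index field gives the (dy, dx) displacement."""
    c = correlate2d(dist_win - dist_win.mean(),
                    ref_win - ref_win.mean(), mode="full")
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    return dy - (ref_win.shape[0] - 1), dx - (ref_win.shape[1] - 1)
```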
Programmable Spectral Source and Design Tool for 3D Imaging Using Complementary Bandpass Filters
NASA Technical Reports Server (NTRS)
Bae, Youngsam (Inventor); Korniski, Ronald J. (Inventor); Ream, Allen (Inventor); Shearn, Michael J. (Inventor); Shahinian, Hrayr Karnig (Inventor); Fritz, Eric W. (Inventor)
2017-01-01
An endoscopic illumination system for illuminating a subject for stereoscopic image capture, includes a light source which outputs light; a first complementary multiband bandpass filter (CMBF) and a second CMBF, the first and second CMBFs being situated in first and second light paths, respectively, where the first CMBF and the second CMBF filter the light incident thereupon to output filtered light; and a camera which captures video images of the subject and generates corresponding video information, the camera receiving light reflected from the subject and passing through a pupil CMBF pair and a detection lens. The pupil CMBF includes a first pupil CMBF and a second pupil CMBF, the first pupil CMBF being identical to the first CMBF and the second pupil CMBF being identical to the second CMBF, and the detection lens includes one unpartitioned section that covers both the first pupil CMBF and the second pupil CMBF.
Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source
Muller, Matthew S.; Elsner, Ann E.
2018-01-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586
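A toy sketch of the synchronization idea: the set of DMD lines lit while a given camera row is exposing is a window around that row, and offsetting the window shifts detection from confocal to multiply scattered light (the names and parameters are assumptions, not the DLO's actual control code):

```python
def dmd_lines_for_row(row, aperture, offset=0):
    """Which DMD lines to illuminate while camera row `row` is exposing.
    offset = 0 detects light returning along the illumination line
    (confocal); a nonzero offset detects multiply scattered light that
    is displaced from the illumination line."""
    center = row + offset
    return list(range(center - aperture // 2, center + aperture // 2 + 1))

print(dmd_lines_for_row(100, aperture=5))            # confocal: [98..102]
print(dmd_lines_for_row(100, aperture=5, offset=12)) # offset detection: [110..114]
```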
Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki
2011-03-28
Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object, while subtle, are detectable in its spectra. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace that highlights an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on the human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.
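The underlying linear-algebra picture is compact: the filter reweights the camera's spectral sensitivity wavelength by wavelength, which moves the three-dimensional subspace that reflected-light spectra are projected onto. A minimal sketch of that model (the paper's filter-design optimization itself is not reproduced):

```python
import numpy as np

def filtered_sensitivity(transmittance, rgb_sensitivity):
    """Effective sensitivity of camera + filter: the filter transmittance
    multiplies each channel's sensitivity wavelength by wavelength.
    transmittance: (L,) samples; rgb_sensitivity: (L, 3)."""
    return transmittance[:, None] * rgb_sensitivity

def camera_response(spectrum, sensitivity):
    """Trichromatic projection of a reflected-light spectrum (L,) -> (3,)."""
    return sensitivity.T @ spectrum
```

Two materials whose spectra differ only in a direction orthogonal to the unfiltered subspace yield identical RGB responses; a well-chosen transmittance rotates the subspace so the difference survives projection.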
Wiens, Andrew D; Prahalad, Sampath; Inan, Omer T
2016-08-01
Vibroarthrography, a method for interpreting the sounds emitted by a knee during movement, has been studied for several joint disorders since 1902. However, to our knowledge, the usefulness of this method for the management of juvenile idiopathic arthritis (JIA) has not been investigated. To study joint sounds as a possible new biomarker for pediatric cases of JIA, we designed and built VibroCV, a platform to capture vibroarthrograms from four accelerometers; electromyograms (EMG) and inertial measurements from four wireless EMG modules; and joint angles from two Sony Eye cameras and six light-emitting diodes, with commercially available off-the-shelf parts and computer vision via OpenCV. This article explains the design of this turn-key platform in detail and provides a sample recording captured from a pediatric subject.
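The joint-angle part of such a platform reduces to vector geometry on tracked marker positions. A hypothetical stand-in for the OpenCV-based LED tracking described above:

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Joint angle from three tracked LED marker positions (2D pixel
    coordinates): the angle at the knee between the thigh and shank
    segments, in degrees."""
    u = np.asarray(hip, float) - np.asarray(knee, float)
    v = np.asarray(ankle, float) - np.asarray(knee, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(knee_angle((0, 0), (0, 1), (1, 2)))  # ~135 degrees
```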
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
Lambers, Martin; Kolb, Andreas
2017-01-01
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference and motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important for improving camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison of ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data. PMID:29271888
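For context on what an AMCW simulation must reproduce down to the charge level, here is the generic four-phase demodulation (a textbook formulation, not necessarily the simulator's internal model) that converts per-pixel charges to range:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def amcw_depth(a0, a1, a2, a3, f_mod):
    """Four-phase AMCW demodulation: a0..a3 are per-pixel charges sampled at
    0, 90, 180, and 270 degrees of the modulation period; f_mod is the
    modulation frequency in Hz. Unambiguous range is C / (2 * f_mod)."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)  # wrapped phase
    return C * phase / (4 * np.pi * f_mod)

# Example: charges for a target a quarter of the unambiguous range away
# at 20 MHz (range ambiguity ~7.5 m) should report ~1.87 m.
print(amcw_depth(1.0, 0.5, 0.0, 1.5, 20e6))
```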