Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise to provide arbitrary re-focusing through a scene after the capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique of recording light fields with an adaptive depth of focus. Between multiple exposures, or multiple recordings of the light field, the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus is chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add to and improve existing features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry: a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto-focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
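The core loop of such a passive AF scheme can be illustrated in a few lines. The sketch below is a minimal, hypothetical rendering (not the dissertation's Filter-Switching AF, whose pass-bands and step sizes are exactly the parameters it derives): a band-pass sharpness measure is evaluated at successive lens positions, and the search stops once the measure falls off past its peak.

```python
import numpy as np
from scipy.signal import convolve2d

def sharpness(image, kernel=np.array([[-1.0, 2.0, -1.0]])):
    """Band-pass sharpness: energy of the filtered image.

    The kernel here is an illustrative high-frequency FIR filter; a real
    AF system tunes this pass-band to the sensor and lighting conditions.
    """
    response = convolve2d(image.astype(float), kernel, mode="valid")
    return float(np.sum(response ** 2))

def hill_climb_af(capture_at, positions, falloff=0.8):
    """Step the focus actuator through `positions`; stop past the peak.

    capture_at(p) -> 2-D numpy array captured at lens position p
    (a stand-in for the camera's frame grab).
    """
    best_pos, best_val = positions[0], -np.inf
    for p in positions:
        val = sharpness(capture_at(p))
        if val > best_val:
            best_pos, best_val = p, val
        elif val < falloff * best_val:   # well past the peak: overrun
            break                        # detected, end the search
    return best_pos
```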
Servo-controlled intravital microscope system
NASA Technical Reports Server (NTRS)
Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)
1975-01-01
A microscope system is described for viewing an area of a living body tissue that is rapidly moving, by maintaining the same area in the field of view and in focus. A focus sensing portion of the system includes two video cameras at which the viewed image is projected, one camera being slightly in front of the image plane and the other slightly behind it. A focus sensing circuit for each camera differentiates certain high-frequency components of the video signal and then detects them and passes them through a low-pass filter, to provide dc focus signals whose magnitudes represent the degree of focus. An error signal, equal to the difference between the focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
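A toy simulation of this differential-focus servo, assuming Gaussian-shaped through-focus response curves in place of the patent's differentiated-and-filtered video signals, shows how the error signal nulls out at the true image plane (all names and constants here are illustrative):

```python
import numpy as np

def dc_focus(defocus, width=0.5):
    """Toy through-focus response: peaks when defocus is zero."""
    return np.exp(-(defocus / width) ** 2)

def servo_step(objective, image_plane, offset=0.2, gain=0.5):
    """One servo iteration: the error is the difference between the
    focus signals of the two cameras straddling the image plane."""
    front = dc_focus(image_plane - (objective - offset))
    behind = dc_focus(image_plane - (objective + offset))
    return objective + gain * (behind - front)

pos = 0.0                        # objective starts out of focus
for _ in range(200):
    pos = servo_step(pos, image_plane=1.0)
print(round(pos, 3))             # converges to 1.0, the in-focus position
```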
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact, digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology, and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor-series approximation. Both model-based methods show significant advantages over the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function which remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
The multifocus plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Lumsdaine, Andrew
2012-01-01
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
Metric Calibration of a Focused Plenoptic Camera Based on a 3D Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To that end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back to 3D space.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering the wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 μm, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binder, Gary A.; /Caltech /SLAC
2010-08-25
In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function, from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.
Application of phase matching autofocus in airborne long-range oblique photography camera
NASA Astrophysics Data System (ADS)
Petrushevsky, Vladimir; Guberman, Asaf
2014-06-01
The Condor2 long-range oblique photography (LOROP) camera is mounted in an aerodynamically shaped pod carried by a fast jet aircraft. The large-aperture, dual-band (EO/MWIR) camera is equipped with TDI focal plane arrays and provides high-resolution imagery of extended areas at long stand-off ranges, day and night. The front Ritchey-Chrétien optics are made of highly stable materials. However, the camera temperature varies considerably in flight conditions. Moreover, the composite-material structure of the reflective objective undergoes gradual dehumidification in the dry nitrogen atmosphere inside the pod, causing a small decrease in the structure length. The temperature and humidity effects change the distance between the mirrors by just a few microns. The distance change is small, but it nevertheless alters the camera's infinity focus setpoint significantly, especially in the EO band. To realize the optics' resolution potential, optimal focus must be constantly maintained. In-flight best-focus calibration and temperature-based open-loop focus control give mostly satisfactory performance. To obtain even better focusing precision, a closed-loop phase-matching autofocus method was developed for the camera. The method makes use of an existing beam-sharer prism FPA arrangement in which an aperture partition exists inherently in the area of overlap between adjacent detectors. The defocus is proportional to an image phase shift in the area of overlap. Low-pass filtering of the raw defocus estimate reduces random errors related to variable scene content. The closed-loop control converges robustly to the precise focus position. The algorithm uses the temperature- and range-based focus prediction as an initial guess for the closed-loop phase-matching control. The autofocus algorithm achieves excellent results and works robustly in various conditions of scene illumination and contrast.
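A schematic sketch of the phase-matching idea, under simplifying assumptions (1-D overlap strips, integer-pixel cross-correlation as the phase-shift estimator, and an illustrative shift-to-focus-steps constant `k`; the flight algorithm's exact estimator is not given in this abstract):

```python
import numpy as np

def phase_shift(overlap_a, overlap_b):
    """Relative shift (pixels) between the two sub-aperture views of the
    same scene strip; a nonzero shift indicates defocus."""
    a = overlap_a - overlap_a.mean()
    b = overlap_b - overlap_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)

def autofocus_step(focus_pos, overlap_a, overlap_b, state, k=2.0, alpha=0.2):
    """One closed-loop iteration: estimate raw defocus from the phase
    shift, low-pass filter it (first-order IIR) to suppress scene-content
    noise, and move the focus mechanism against the filtered error."""
    raw = k * phase_shift(overlap_a, overlap_b)
    state["lp"] = (1.0 - alpha) * state["lp"] + alpha * raw
    return focus_pos - state["lp"], state
```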
An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch
NASA Astrophysics Data System (ADS)
Schuster, Norbert
2015-09-01
Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This limits the application of modern image treatment procedures like wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to be analyzed changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the depth-of-field problem. The proposed depth-of-field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus, which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated point spread function and delivers a first-order relation for the depth of focus; hence, neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus respects several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. The through-focus MTF is the basis of the camera-specific depth of focus. It has a nearly symmetric course around the maximum of sharp imaging and is considered at the detector's Nyquist frequency. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane over which the through-focus MTF remains above 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. A generally applicable depth-of-field diagram follows, which can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the depth of field in dependence on the sharp distance. These relations are in line with classical depth-of-field theory. Thermal pictures, taken by different IR camera cores, illustrate the new approach. The often-requested graph "MTF versus distance" uses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, a smooth drop at further distances. The formula of a general diffraction-limited through-focus MTF (DLTF) is deduced, so arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens), and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. Detector development with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
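For comparison with the classical theory that the paper's relations are said to agree with, the textbook hyperfocal and DOF-border formulas can be coded directly (a sketch only; the paper itself replaces the blur-circle constant c with its MTF-derived, camera-specific depth of focus, and the LWIR numbers below are illustrative):

```python
def hyperfocal(f_mm, fnum, c_mm):
    """Classical hyperfocal distance H = f^2 / (N c) + f (lengths in mm)."""
    return f_mm ** 2 / (fnum * c_mm) + f_mm

def dof_borders(s_mm, H_mm, f_mm):
    """Near and far limits of the depth of field for sharp distance s."""
    near = H_mm * s_mm / (H_mm + (s_mm - f_mm))
    far = H_mm * s_mm / (H_mm - (s_mm - f_mm)) if s_mm < H_mm else float("inf")
    return near, far

# Illustrative LWIR example: f = 10 mm lens at F/1.0, with a 17 um pixel
# pitch standing in for the accepted blur circle.
H = hyperfocal(10.0, 1.0, 0.017)       # ~5892 mm
print(dof_borders(2000.0, H, 10.0))    # near ~1495 mm, far ~3020 mm
```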
Focusing and depth of field in photography: application in dermatology practice.
Taheri, Arash; Yentzer, Brad A; Feldman, Steven R
2013-11-01
Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination.
Goldin, F J; Meehan, B T; Hagen, E C; Wilkins, P R
2010-10-01
A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, an image-converting streak camera, and a CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step-index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refractive lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge-wire and dense plasma focus experiments.
Using Video Self-Analysis to Improve the "Withitness" of Student Teachers
ERIC Educational Resources Information Center
Snoeyink, Rick
2010-01-01
Although video self-analysis has been used for years in teacher education, the camera has almost always focused on the preservice teacher. In this study, the researcher videotaped eight preservice teachers four times each during their student-teaching internships. One camera was focused on them while another was focused on their students. Their…
Photon collider: a four-channel autoguider solution
NASA Astrophysics Data System (ADS)
Hygelund, John C.; Haynes, Rachel; Burleson, Ben; Fulton, Benjamin J.
2010-07-01
The "Photon Collider" uses a compact array of four off axis autoguider cameras positioned with independent filtering and focus. The photon collider is two way symmetric and robustly mounted with the off axis light crossing the science field which allows the compact single frame construction to have extremely small relative deflections between guide and science CCDs. The photon collider provides four independent guiding signals with a total of 15 square arc minutes of sky coverage. These signals allow for simultaneous altitude, azimuth, field rotation and focus guiding. Guide cameras read out without exposure overhead increasing the tracking cadence. The independent focus allows the photon collider to maintain in focus guide stars when the main science camera is taking defocused exposures as well as track for telescope focus changes. Independent filters allow auto guiding in the science camera wavelength bandpass. The four cameras are controlled with a custom web services interface from a single Linux based industrial PC, and the autoguider mechanism and telemetry is built around a uCLinux based Analog Devices BlackFin embedded microprocessor. Off axis light is corrected with a custom meniscus correcting lens. Guide CCDs are cooled with ethylene glycol with an advanced leak detection system. The photon collider was built for use on Las Cumbres Observatory's 2 meter Faulks telescopes and currently used to guide the alt-az mount.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaugher, B.; Diehl, H. T.; Alvarez, O.
2015-11-15
The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver, coupled with a force-directed routing algorithm, enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks, with the objective of gaining additional insight from the volume data.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The characteristic matching of main lens and microlens f-numbers is used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration was found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
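The reconstruction step reduces to a linear least-squares problem: each traced ray contributes one row of a path-length matrix A, and the vector b holds the corresponding radiance samples. A minimal sketch with synthetic data (the Planck-law helper shows the radiance-temperature mapping; the actual ray matrix, wavelength, and voxelization come from the calibrated camera model):

```python
import numpy as np

def planck_radiance(T, wavelength_m=650e-9):
    """Blackbody spectral radiance (Planck's law), W·sr^-1·m^-3."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c ** 2 / wavelength_m ** 5) / np.expm1(
        h * c / (wavelength_m * kB * T))

# Synthetic stand-ins: 200 traced rays through a 50-voxel flame grid.
rng = np.random.default_rng(0)
A = rng.random((200, 50))        # ray path lengths per voxel
x_true = rng.random(50)          # emission field to recover
b = A @ x_true                   # simulated radiance measurements

Q, R = np.linalg.qr(A)           # least squares via QR factorization
x = np.linalg.solve(R, Q.T @ b)
print(np.allclose(x, x_true))    # True: the field is recovered
```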
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances the search for stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
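The variance-weighted update described above can be sketched in a few lines (a minimal illustration of the Kalman-like fusion; the numbers are made up, and the paper's estimator additionally handles per-pixel gradients and outliers):

```python
def fuse_depth(depth, var, z, var_z):
    """Fuse the running (depth, variance) estimate with one new
    micro-image observation (z, var_z) by inverse-variance weighting."""
    gain = var / (var + var_z)             # Kalman-style gain
    return depth + gain * (z - depth), (1.0 - gain) * var

depth, var = 4.0, 1.0                      # initial virtual depth estimate
for z, vz in [(4.3, 0.5), (4.1, 0.25), (4.2, 0.25)]:
    depth, var = fuse_depth(depth, var, z, vz)
print(round(depth, 3), round(var, 4))      # low-variance fused estimate
```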
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to produce super-resolved all-refocused images with the plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor in a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digitally refocused images, focused at different depths, can be produced after processing the 4D light field captured by the plenoptic camera. The number of pixels in the refocused image is the same as the number of micro-lenses in the micro-lens array. The limited number of micro-lenses results in poor, low-resolution refocused images, so not enough detail exists in these images. Such lost details, which are often high-frequency information, are important for the in-focus part of the refocused image. We therefore super-resolve these in-focus parts. The result of an image segmentation method based on random walks, which works on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image. A focusing evaluation function is employed to determine which refocused image has the clearest foreground part and which one has the clearest background part. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-focus image by merging the in-focus background part and the in-focus foreground part by digital signal processing. More spatial details are kept in these output images. Our method enhances the resolution of the refocused image, and only the refocused images having the clearest foreground and background need to be super-resolved.
Plenoptic camera based on a liquid crystal microlens array
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng
2015-09-01
A type of liquid crystal microlens array (LCMLA), with focal length tunable by the voltage signals applied between its top and bottom electrodes, is fabricated, and its basic optical focusing characteristics are tested. The relationship between the focal length and the applied voltage signals is given. The LCMLA is integrated with an image sensor and further coupled with a main lens so as to construct a plenoptic camera. Several raw images under different applied voltage signals are acquired and compared using the LCMLA-based plenoptic camera we constructed. Our experiments demonstrate that, by utilizing an LCMLA in a plenoptic camera, the focused zone of the LCMLA-based plenoptic camera can be shifted effectively simply by changing the voltage signals applied between the electrodes of the LCMLA, which is equivalent to an extension of the depth of field.
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Opto-mechanical design of the G-CLEF flexure control camera system
NASA Astrophysics Data System (ADS)
Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson
2016-08-01
The GMT-Consortium Large Earth Finder (G-CLEF) is the first light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral density filters allowing a much brighter star to be used as a target or a guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed around the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms is discussed.
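A sketch of the decoding step under stated assumptions: with an astigmatic projector, horizontal and vertical pattern features blur at different rates, so the ratio of horizontal- to vertical-detail wavelet energy in a patch varies with distance and can be inverted through a monotonic calibration curve (the wavelet choice and the calibration arrays here are hypothetical):

```python
import numpy as np
import pywt  # PyWavelets

def hv_wavelet_ratio(patch, wavelet="db2"):
    """Ratio of horizontal to vertical detail energy in one image patch."""
    _, (cH, cV, _) = pywt.dwt2(patch.astype(float), wavelet)
    return float(np.sum(cH ** 2) / (np.sum(cV ** 2) + 1e-12))

def depth_from_ratio(ratio, calib_ratios, calib_depths):
    """Invert a calibration curve measured at known target distances;
    `calib_ratios` must be sorted ascending for np.interp."""
    return float(np.interp(ratio, calib_ratios, calib_depths))
```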
NASA Astrophysics Data System (ADS)
Ichikawa, Yasunori; Shirayama, Mari
The JCII Camera Museum is a unique photographic museum having three major departments: the camera museum, which collects, preserves, and exhibits historically valuable cameras and camera-related products; the photo salon, which collects, preserves, and exhibits various original photographic films and prints; and the library, which collects, preserves, and appraises photo-historical literature including magazines, industrial histories, product catalogues, and scientific papers.
Using focused plenoptic cameras for rich image capture.
Georgiev, T; Lumsdaine, A; Chunev, G
2011-01-01
This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non-3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
NICMOS Focus and HST Breathing
NASA Astrophysics Data System (ADS)
Suchkov, A.; Hershey, J.
1998-09-01
Program 7608 monitored NICMOS camera foci on a biweekly basis from June 9, 1997, through February 18, 1998. Each of the biweekly observations included 17 measurements of focus position (focus sweeps), individually for each of the three cameras. The measurements for camera 1 and camera 3 foci covered one or two HST orbital periods. Comparison of these measurements with the predictions of the three OTA focus breathing models has shown the following. (1) Focus variations seen in NICMOS focus sweeps correlate well with the OTA focus thermal breathing as predicted by the breathing models (the "4-temperature", "full-temperature", and "attitude" models); thus they can be attributed mostly to the HST orbital temperature variation. (2) The amount of breathing (breathing amplitude) has been found to be on average larger in the first orbit after a telescope slew to a new target. This is explained as being due to additional thermal perturbations caused by the change in HST attitude as the telescope repoints to a new target. (3) In the first orbit, the amount of focus change predicted by the 4-temperature model is about the same as that seen in the focus sweep data (breathing scale factor ~1), whereas the full-temperature model predicts a two times smaller breathing amplitude (breathing scale factor ~1.7). This suggests that the light shield temperatures are more responsive to the attitude change than temperatures from the other temperature sensors. The results of this study may help to better understand the HST thermal cycles and to improve the models describing their impact on both the OTA and NICMOS focus.
Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy
NASA Technical Reports Server (NTRS)
1984-01-01
Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray-sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.
Cameras on the moon with Apollos 15 and 16.
NASA Technical Reports Server (NTRS)
Page, T.
1972-01-01
Description of the cameras used for photography and television by Apollo 15 and 16 missions, covering a hand-held Hasselblad camera for black and white panoramic views at locations visited by the astronauts, a special stereoscopic camera designed by astronomer Tom Gold, a 16-mm movie camera used on the Apollo 15 and 16 Rovers, and several TV cameras. Details are given on the far-UV camera/spectrograph of the Apollo 16 mission. An electronographic camera converts UV light to electrons which are ejected by a KBr layer at the focus of an f/1 Schmidt camera and darken photographic films much more efficiently than far-UV. The astronomical activity of the Apollo 16 astronauts on the moon, using this equipment, is discussed.
Qualification Tests of Micro-camera Modules for Space Applications
NASA Astrophysics Data System (ADS)
Kimura, Shinichi; Miyasaka, Akira
Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is then demonstrated experimentally. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.
Imaging Emission Spectra with Handheld and Cellphone Cameras
NASA Astrophysics Data System (ADS)
Sitar, David
2012-12-01
As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot autofocusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.
Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator
ERIC Educational Resources Information Center
Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.
2008-01-01
Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…
Space imaging infrared optical guidance for autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu
2008-08-01
We have developed a space imaging infrared optical guidance system for an autonomous ground vehicle based on an uncooled infrared camera and a focusing technique to detect objects to be evaded and to set the drive path. For this purpose we built a servomotor drive system to control the focus function of the infrared camera lens. To determine the best focus position we use auto-focus image processing based on the Daubechies wavelet transform with 4 terms. The determined best focus position is then transformed into the distance of the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to mount the auto-focus infrared unit. The vehicle carries an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability of the system, we conducted experiments on the detection ability of the infrared auto-focus unit with an actual car on the road and with the roadside wall. As a result, the auto-focus image processing based on the Daubechies wavelet transform detects the best-focus image clearly and gives the depth of the object from the infrared camera unit.
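A minimal sketch of the focus measure, assuming the four-term Daubechies transform corresponds to PyWavelets' 'db2' (four filter coefficients) and that sharpness is scored as detail-band energy; the mapping from best lens position to object distance is a separate calibration in the real system:

```python
import numpy as np
import pywt  # PyWavelets: 'db2' is the 4-coefficient Daubechies wavelet

def d4_focus_measure(frame):
    """Detail-band energy of a single-level D4 transform; peaks at focus."""
    _, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "db2")
    return float(np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))

def best_focus_position(capture_at, lens_positions):
    """Sweep the servo-driven lens and return the sharpest position.
    capture_at(p) -> 2-D numpy array captured at lens position p."""
    scores = [d4_focus_measure(capture_at(p)) for p in lens_positions]
    return lens_positions[int(np.argmax(scores))]
```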
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
NASA Technical Reports Server (NTRS)
Thompson, Rodger I.
1997-01-01
The Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has been in orbit for about 8 months. This is a report on its current status and future plans. Also included are some comments on particular aspects of data analysis concerning dark subtraction, shading, and removal of cosmic rays. At present NICMOS provides excellent images of high scientific content. Most of the observations utilize cameras 1 and 2, which are in excellent focus. Camera 3 is not yet within the range of the focus adjustment mechanism, but its current images are still quite good. In this paper we present the status of various aspects of the NICMOS instrument.
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast to other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to perform the function of two traditional cameras. A new calibration method is then presented. It focuses on the vibrating object rather than the camera and is more accurate than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.
Recognizable-image selection for fingerprint recognition with a mobile-device camera.
Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie
2008-02-01
This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image which includes characteristics sufficient to discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way because the obtained images may not be adequate for fingerprint recognition, even if they are properly focused. A recognizable image has to meet the following two conditions. First, the valid region of the image should be sufficiently large compared with that of nonrecognizable images. Here, a valid region is a well-focused part in which ridges are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using second partial derivatives and a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching angles of the finger measured from the camera plane should be within certain limits. The position of the core point and the contour of the finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting recognizable images.
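The focus-measurement idea (scoring regions by second partial derivatives) can be sketched as follows; this is an illustrative Laplacian-energy measure, not the paper's exact formulation, and it omits the coherence and symmetry quality terms:

```python
# Illustrative block-wise focus measure from second derivatives.
import numpy as np
from scipy import ndimage

def block_focus_measure(image, block=16):
    lap = ndimage.laplace(image.astype(float))   # d2I/dx2 + d2I/dy2
    energy = lap ** 2
    h, w = image.shape
    hb, wb = h // block, w // block
    # Mean second-derivative energy per block: well-focused ridge
    # regions score high, defocused regions score low.
    return energy[:hb * block, :wb * block] \
        .reshape(hb, block, wb, block).mean(axis=(1, 3))
```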
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
Single lens 3D-camera with extended depth-of-field
NASA Astrophysics Data System (ADS)
Perwaß, Christian; Wietzke, Lennart
2012-03-01
Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.
Electronographic cameras for space astronomy.
NASA Technical Reports Server (NTRS)
Carruthers, G. R.; Opal, C. B.
1972-01-01
Magnetically focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
Refocusing distance of a standard plenoptic camera.
Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias
2016-09-19
Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfactorily predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it is shown in this paper that their solution yields an intersection indicating the distance to the refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% from an optical design software. The proposed refocusing estimator assists in predicting object distances, for instance in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs.
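The paper's core construction, intersecting two rays modeled as linear functions, reduces to a 2 × 2 linear solve. A minimal sketch with placeholder ray coefficients:

```python
# Intersect two rays y = m*z + c; the z of the intersection is the
# refocusing distance. Coefficients below are placeholders.
import numpy as np

def ray_intersection(m1, c1, m2, c2):
    A = np.array([[m1, -1.0], [m2, -1.0]])
    b = np.array([-c1, -c2])
    z, y = np.linalg.solve(A, b)
    return z, y

print(ray_intersection(m1=-0.02, c1=10.0, m2=0.01, c2=-5.0))  # z = 500.0
```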
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
Remote Gaze Tracking System on a Large Display
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-01-01
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351
Sweatt, William C.
1998-01-01
A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
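The geometry behind such a convergence adjustment is simple: for a stereo baseline b and a target at distance d, each camera toes in by atan(b/2d). The FPGA algorithm derives the distance from scene content; in this sketch the distance is simply given, and the baseline value is an assumption:

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    # Per-camera toe-in angle for symmetric convergence on a target.
    return math.degrees(math.atan(baseline_m / (2.0 * distance_m)))

print(convergence_angle_deg(0.12, 3.0))  # ~1.15 degrees per camera
```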
Camera Test on Curiosity During Flight to Mars
2012-05-07
An in-flight camera check produced this out-of-focus image when NASA's Mars Science Laboratory spacecraft turned on illumination sources that are part of the Curiosity rover's Mars Hand Lens Imager (MAHLI) instrument.
Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D
2006-01-01
The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.
Camera systems in human motion analysis for biomedical applications
NASA Astrophysics Data System (ADS)
Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.
2015-05-01
The Human Motion Analysis (HMA) system has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and analysis of biomedical signals and images for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on camera system considerations for HMA systems intended specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for a biomedical HMA system.
NASA Astrophysics Data System (ADS)
Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin
2015-12-01
In this paper, we establish a forward simulation model of a plenoptic camera, which is implemented by inserting a micro-lens array into a conventional camera. The simulation model is used to emulate how objects at different depths are imaged by the main lens, remapped by the micro-lenses, and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model, such as the focal lengths and diameters of the main lens and micro-lenses and the number of micro-lenses. Employing spatial integration, refocused images and all-in-focus images are rendered from the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test new research related to plenoptic cameras without the need for a prototype.
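The first stage of such a forward model is the thin-lens mapping from object depth to the main-lens image distance, before the microlens remapping. A minimal sketch with illustrative parameters (not those of the paper):

```python
def thin_lens_image_distance(f, s_o):
    """Thin-lens equation 1/f = 1/s_o + 1/s_i, solved for s_i."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

f_main = 0.05  # assumed 50 mm main lens
for depth_m in (1.0, 2.0, 10.0):
    print(depth_m, thin_lens_image_distance(f_main, depth_m))
```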
Combustion pinhole-camera system
Witte, A.B.
1982-05-19
A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.
Combustion pinhole camera system
Witte, A.B.
1984-02-21
A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.
Combustion pinhole camera system
Witte, Arvel B.
1984-02-21
A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.
Pinhole Cameras: For Science, Art, and Fun!
ERIC Educational Resources Information Center
Button, Clare
2007-01-01
A pinhole camera is a camera without a lens. A tiny hole replaces the lens, and light is allowed to come in for a short amount of time by means of a hand-operated shutter. The pinhole allows only a very narrow beam of light to enter, which reduces confusion due to scattered light on the film. This results in an image that is focused, reversed, and…
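For context, a commonly quoted rule of thumb (often attributed to Rayleigh) relates the sharpest pinhole diameter to the pinhole-to-film distance; the article itself gives no formula, so this is supplementary:

```python
import math

def optimal_pinhole_diameter_mm(focal_mm, wavelength_nm=550):
    # Rule of thumb: d ~ 1.9 * sqrt(f * lambda), units kept consistent.
    return 1.9 * math.sqrt(focal_mm * wavelength_nm * 1e-6)

print(optimal_pinhole_diameter_mm(100.0))  # ~0.45 mm at f = 100 mm
```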
Seeing in a different light—using an infrared camera to teach heat transfer and optical phenomena
NASA Astrophysics Data System (ADS)
Pei Wong, Choun; Subramaniam, R.
2018-05-01
The infrared camera is a useful tool in physics education to ‘see’ in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
Seeing in a Different Light--Using an Infrared Camera to Teach Heat Transfer and Optical Phenomena
ERIC Educational Resources Information Center
Wong, Choun Pei; Subramaniam, R.
2018-01-01
The infrared camera is a useful tool in physics education to 'see' in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Sweatt, W.C.
1998-09-08
A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.
Constrained space camera assembly
Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.
1999-05-11
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide 3D display in front of screens which are parallel with the walls, and the sense of immersion is decreased. To get the right multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset to the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of each virtual camera is determined by the position of the observer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter setting of the virtual cameras. The near clip plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
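The offset perspective projection the abstract refers to can be written as a standard asymmetric-frustum matrix; the sketch below uses the OpenGL glFrustum convention, with bound values chosen purely for illustration:

```python
import numpy as np

def off_axis_frustum(l, r, b, t, n, f):
    # Asymmetric (off-axis) perspective projection, glFrustum-style.
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0         ],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0         ],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n) ],
        [0.0,       0.0,       -1.0,          0.0         ],
    ])

# A viewer standing right of center shifts the frustum window left.
P = off_axis_frustum(l=-0.8, r=0.2, b=-0.5, t=0.5, n=1.0, f=100.0)
```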
ERIC Educational Resources Information Center
Jeppsson, Fredrik; Frejd, Johanna; Lundmark, Frida
2017-01-01
This study focuses on investigating how students make use of their bodily experiences in combination with infrared (IR) cameras, as a way to make meaning in learning about heat, temperature, and friction. A class of 20 primary students (age 7-8 years), divided into three groups, took part in three IR camera laboratory experiments. The qualitative…
Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter
2017-01-01
Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding and therefore hampering progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, within its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
NASA Astrophysics Data System (ADS)
Scaduto, Lucimara C. N.; Malavolta, Alexandre T.; Modugno, Rodrigo G.; Vales, Luiz F.; Carvalho, Erica G.; Evangelista, Sérgio; Stefani, Mario A.; de Castro Neto, Jarbas C.
2017-11-01
The first Brazilian remote sensing multispectral camera (MUX) is currently under development at Opto Eletronica S.A. It consists of a four-spectral-band sensor covering the 450 nm to 890 nm wavelength range. This camera will provide images with a 20 m ground resolution at nadir. The MUX camera is part of the payload of the upcoming Sino-Brazilian satellites CBERS 3&4 (China-Brazil Earth Resource Satellite). The preliminary alignment between the optical system and the CCD sensor, located at the focal plane assembly, was performed in air, in a clean-room environment. A collimator was used for the performance evaluation of the camera. The preliminary performance of the optical channel was registered by compensating the collimator focus position for changes in the test environment, since an air-to-vacuum transition defocuses this camera. It is therefore necessary to confirm that the alignment of the camera ensures its best performance under orbital vacuum conditions. For this reason, and as a further step in the development process, the MUX camera Qualification Model was tested and evaluated inside a thermo-vacuum chamber under an as-in-orbit vacuum environment. In this study, the influence of temperature fields was neglected. This paper reports on the performance evaluation and discusses the results for this camera when operating under the test conditions mentioned. The overall optical tests and results show that the "in air" adjustment method was suitable as a critical activity to guarantee that the equipment meets its design requirements.
Visser, Leonie N C; Bol, Nadine; Hillen, Marij A; Verdam, Mathilde G E; de Haes, Hanneke C J M; van Weert, Julia C M; Smets, Ellen M A
2018-01-19
Video vignettes are used to test the effects of physicians' communication on patient outcomes. Methodological choices in video-vignette development may have far-reaching consequences for participants' engagement with the video, and thus for the ecological validity of this design. To supplement the scant evidence in this field, this study tested how variations in video-vignette introduction format and camera focus influence participants' engagement with a video vignette showing a bad news consultation. Introduction format (A = audiovisual vs. B = written) and camera focus (1 = the physician only, 2 = the physician and the patient at neutral moments alternately, 3 = the physician and the patient at emotional moments alternately) were varied in a randomized 2 × 3 between-subjects design. One hundred eighty-one students were randomly assigned to watch one of the six resulting video-vignette conditions as so-called analogue patients, i.e., they were instructed to imagine themselves being in the video patient's situation. Four dimensions of self-reported engagement were assessed retrospectively. Emotional engagement was additionally measured by recording participants' electrodermal and cardiovascular activity continuously while watching. Analyses of variance were used to test the effects of introduction format, camera focus, and their interaction. The audiovisual introduction induced a stronger blood pressure response while watching the introduction (p = 0.048, ηp² = 0.05) and the consultation part of the vignette (p = 0.051, ηp² = 0.05), when compared to the written introduction. With respect to camera focus, the variant focusing on the patient at emotional moments evoked a higher level of electrodermal activity (p = 0.003, ηp² = 0.06) than the other two variants. Furthermore, an interaction effect was shown on self-reported emotional engagement (p = 0.045, ηp² = 0.04): the physician-only variant resulted in lower emotional engagement if the vignette was preceded by the audiovisual introduction. No effects were shown on the other dimensions of self-reported engagement. Our findings imply that using an audiovisual introduction combined with an alternating camera focus depicting the patient's emotions results in the highest levels of emotional engagement in analogue patients. This evidence can inform methodological decisions during the development of video vignettes and thereby enhance the ecological validity of future video-vignette studies.
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Architecture of PAU survey camera readout electronics
NASA Astrophysics Data System (ADS)
Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo
2012-07-01
PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2K×4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. The Back-End and Front-End electronics are described. The Back-End consists of clock, bias, and video processing boards mounted in Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged in outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus Kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.
Uncooled infrared sensors: rapid growth and future perspective
NASA Astrophysics Data System (ADS)
Balcerak, Raymond S.
2000-07-01
Uncooled infrared cameras are now available for both the military and commercial markets. The current camera technology incorporates the fruits of many years of development, focusing on the details of pixel design, novel material processing, and low-noise readout electronics. The rapid insertion of cameras into systems is testimony to the successful completion of this 'first phase' of development. In the military market, the first uncooled infrared cameras will be used for weapon sights, drivers' viewers, and helmet-mounted cameras. Major commercial applications include night driving, security, police and fire fighting, and thermography, primarily for preventive maintenance and process control. The technology for the next generation of cameras is even more demanding, but within reach. The paper outlines the technology program planned for the next generation of cameras and the approaches to further enhance performance, even to the radiation limit of thermal detectors.
Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence
NASA Astrophysics Data System (ADS)
Eslami, Mohammed; Wu, Chensheng; Rzasa, John; Davis, Christopher C.
2012-10-01
Ideally, as planar wavefronts travel through an imaging system, all rays (vectors pointing in the direction of energy propagation) are parallel, and the wavefront is focused to a single point. If the wavefront arrives at an imaging system with energy vectors that point in different directions, each part of the wavefront will be focused at a slightly different point on the sensor plane, resulting in a distorted image. The Hartmann test, which involves the insertion of an array of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the distortion angles at different points across the wavefront. An adaptive optic system, such as a deformable mirror, is then used to correct for these distortions and allow the planar wavefront to focus at the desired point on the sensor plane, thereby correcting the distorted image. The apertures of a pinhole array limit the amount of light that reaches the sensor plane; replacing the pinholes with a microlens array focuses each bundle of rays and brightens the image. Microlens arrays are making their way into newer imaging technologies, such as "light field" or "plenoptic" cameras. In these cameras, the microlens array is used to recover the ray information of the incoming light, allowing post-processing techniques to focus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover the distortions in wavefronts. CODE-V simulations show that, by taking advantage of the microlens array within the plenoptic camera, its performance can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system provides information about distortions in the arriving wavefront.
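The slope-recovery step shared by Shack-Hartmann sensing and the plenoptic approach can be sketched as a centroid computation per micro-image; real pipelines add calibration, thresholding, and wavefront reconstruction on top of this:

```python
# Local wavefront tilt from micro-image centroid shifts (sketch).
import numpy as np

def local_slopes(subimages, f_microlens, pixel_pitch):
    """subimages: (K, h, w) array, one micro-image per microlens."""
    slopes = []
    for img in subimages:
        ys, xs = np.indices(img.shape)
        total = img.sum()
        cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
        ref_y, ref_x = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
        # Centroid displacement over focal length gives tilt (radians).
        slopes.append(((cy - ref_y) * pixel_pitch / f_microlens,
                       (cx - ref_x) * pixel_pitch / f_microlens))
    return np.array(slopes)
```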
2011-01-11
[…] and its variance $\sigma^2_{\hat{U}_i}$ are determined:

$$\hat{U}_i = \hat{u}_i + P^{u,EN}\left(P^{EN}\right)^{-1}\left[\begin{pmatrix} E_{jc} \\ N_{jc} \end{pmatrix} - \begin{pmatrix} \hat{e}_i \\ \hat{n}_i \end{pmatrix}\right] \qquad (15)$$

$$\sigma^2_{\hat{U}_i} = P^{u}_{i} - P^{u,EN}_{i}\left(P^{EN}_{i}\right)^{-1} P^{EN,u}_{i} \qquad (16)$$

[…] screen; the operator can click a robot's camera view to select it as the Focus Robot. The Focus Robot's camera stream is enlarged and displayed in the […]
Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities
NASA Technical Reports Server (NTRS)
Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.;
2013-01-01
MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro-lens color camera on the turret of Curiosity's robotic arm. The investigation centers on the stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's field site in Gale crater. MAHLI acquires focused images at working distances from 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel, and at 6.9 cm it is 31 microns/pixel, similar to the Spirit and Opportunity Microscopic Imager (MI) cameras.
Analysis of Camera Arrays Applicable to the Internet of Things.
Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing
2016-03-22
The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is important for stereo display and comfortable viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications, there are few comparative discussions of them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and take images and videos with it to identify the threshold.
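The basic disparity model underlying the parallel-versus-converged comparison: a parallel rig yields purely horizontal disparity d = f·b/Z, while toeing the cameras in introduces keystone distortion and hence vertical parallax. A sketch with assumed numbers:

```python
def horizontal_disparity_px(f_px, baseline_m, depth_m):
    # Parallel-rig disparity; converged rigs add vertical parallax too.
    return f_px * baseline_m / depth_m

for depth in (1, 3, 7, 20):  # 7 m is the threshold found in the paper
    print(depth, horizontal_disparity_px(f_px=1200, baseline_m=0.065,
                                         depth_m=depth))
```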
Huveneers, Charlie; Fairweather, Peter G.
2018-01-01
Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but there is little known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously only assessed fishes using a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386
Fixed-focus camera objective for small remote sensing satellites
NASA Astrophysics Data System (ADS)
Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov
1993-09-01
An athermalized objective has been designed for a compact, lightweight push-broom camera which is under development at El-Op Ltd. for use in small remote-sensing satellites. The high-performance objective has a fixed focus setting but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type, operating over the range 0.5–0.9 µm. It has a 16° field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective are described in this paper.
NASA Astrophysics Data System (ADS)
Niemeyer, F.; Schima, R.; Grenzdörffer, G.
2013-08-01
Numerous unmanned aerial systems (UAS) are currently flooding the market, specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of the test flights.
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is widely used in the screening and diagnosis of retinal disease, and it is a simple and widely used piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which left patients with vertigo and blurred vision. The nonmydriatic fundus camera is the current trend. Desktop fundus cameras are not easy to carry and are only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm for imaging, and 808 nm for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to restrain the stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The range of the diopter adjustment is between -20 m⁻¹ and +20 m⁻¹.
Head-coupled remote stereoscopic camera system for telepresence applications
NASA Astrophysics Data System (ADS)
Bolas, Mark T.; Fisher, Scott S.
1990-09-01
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
Overview of Digital Forensics Algorithms in DSLR Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area rests on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
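The abstract does not name a specific algorithm; the standard sensor-trace technique in this literature is PRNU (photo-response non-uniformity) fingerprinting, sketched here with a deliberately crude Gaussian denoiser. Production systems use wavelet denoising and peak-to-correlation-energy tests instead of a plain correlation:

```python
import numpy as np
from scipy import ndimage

def noise_residual(img):
    # Residual = image minus its denoised version (crude denoiser).
    return img - ndimage.gaussian_filter(img.astype(float), sigma=1.5)

def camera_fingerprint(images):
    # Average residuals over many images from the same camera.
    return np.mean([noise_residual(im) for im in images], axis=0)

def similarity(query_img, fingerprint):
    r = noise_residual(query_img)
    # High correlation suggests the query came from the same sensor.
    return np.corrcoef(r.ravel(), fingerprint.ravel())[0, 1]
```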
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distress and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with varying degrees of myopia were imaged. Based on the optical properties of the human eye, the eye stared system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, which is close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and the imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.
Retinal fundus imaging with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos
2018-02-01
Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. From a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which project an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array; the lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly accepting the use of a wide variety of cameras in many locations and applications: site traffic monitoring, parking lot surveillance, cars, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that can transform the traditional vision system into a pervasive smart camera network. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Dense camera networks, in which most cameras have large overlapping fields of view, are well researched; here we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This task is challenging due to the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in a network. In this review paper, we present a comprehensive survey of recent results addressing topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
1990-07-01
[…] electrolytic dissociation of the electrode material, and to provide a good gas evolution which […] torpedo applications seem to be still somewhat out of the […] rod cathode. A unique feature of this preliminary experiment was the use of a prototype gated, intensified video camera. This camera is based on a microprocessor-controlled microchannel plate intensifier tube. The intensifier tube image is focused on a standard CCD video camera so that the object […]
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
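Fall speed follows directly from the trigger geometry described above: a known vertical separation between the trigger planes divided by the time between triggers. The 10 cm separation below is an assumed example value, not a quoted MASC specification:

```python
def fall_speed_m_per_s(trigger_gap_m, t_upper_s, t_lower_s):
    # Speed = separation of the two IR trigger planes / time between hits.
    return trigger_gap_m / (t_lower_s - t_upper_s)

print(fall_speed_m_per_s(0.10, t_upper_s=0.000, t_lower_s=0.085))  # ~1.18 m/s
```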
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White-light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to ×2.4, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60°) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
NASA Technical Reports Server (NTRS)
1976-01-01
Trade studies were conducted to ensure the overall feasibility of the focal plane camera in a radial module. The primary variable in the trade studies was the location of the pickoff mirror, on-axis versus off-axis. The two alternatives were: (1) the standard (electromagnetic focus) SECO submodule, and (2) the MOD 15 permanent magnet focus SECO submodule. The technical areas of concern were the packaging-affected parameters of thermal dissipation, focal plane obscuration, and image quality.
Variable-focus liquid lens for miniature cameras
NASA Astrophysics Data System (ADS)
Kuiper, S.; Hendriks, B. H. W.
2004-08-01
The meniscus between two immiscible liquids can be used as an optical lens. A change in curvature of this meniscus by electrowetting leads to a change in focal distance. It is demonstrated that two liquids in a tube form a self-centered lens with a high optical quality. The motion of the lens during a focusing action was studied by observation through the transparent tube wall. Finally, a miniature achromatic camera module was designed and constructed based on this adjustable lens, showing that it is excellently suited for use in portable applications.
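In the thin-lens limit, the refractive power of such a meniscus follows from a single refracting surface, P = (n2 - n1)/R, with electrowetting setting the meniscus radius through the contact angle. A minimal sketch under those assumptions (tube radius, refractive indices, and the R = r/cos(theta) geometry are illustrative values, not taken from the paper):

    import math

    # Focal length of a liquid-liquid meniscus lens (thin-lens sketch).
    def meniscus_focal_length(tube_radius_m, contact_angle_deg,
                              n_oil=1.55, n_water=1.38):
        """Single refracting surface: f = R / (n2 - n1), where the meniscus
        radius R = r / cos(theta) is set by the electrowetting voltage."""
        R = tube_radius_m / math.cos(math.radians(contact_angle_deg))
        return R / (n_oil - n_water)

    for theta in (30, 60, 80):  # increasing voltage flattens the meniscus
        print(theta, round(meniscus_focal_length(1.5e-3, theta) * 1e3, 1), "mm")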
Camera calibration for multidirectional flame chemiluminescence tomography
NASA Astrophysics Data System (ADS)
Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun
2017-04-01
Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high accuracy reconstructions, it requires that extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojections of feature points to cameras with the calibration results. The maximum root mean square error of the feature points' position is 1.42 pixels and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the camera's calibration results.
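The quoted precision figures come from reprojecting the calibration feature points through the fitted model; a sketch of that check (the `project` callable and `cam_params` are placeholders standing in for the paper's modified camera model):

    import numpy as np

    # RMS reprojection error over calibration feature points (sketch).
    def rms_reprojection_error(points_3d, observed_px, project, cam_params):
        """points_3d: (N, 3) world points; observed_px: (N, 2) detections;
        project: callable mapping one world point to pixel coordinates."""
        predicted = np.array([project(p, cam_params) for p in points_3d])
        residuals = predicted - np.asarray(observed_px)
        return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))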
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the Super- HARP camera is very effective for underwater use.
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can thus be easily obtained. The photorefraction experimental setup we established, using recent technological advances in the fields of imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes: it is refracted by the ocular media, strikes the retina (focusing or not), reflects off, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.
Students' framing of laboratory exercises using infrared cameras
NASA Astrophysics Data System (ADS)
Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.
2015-12-01
Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the predict-observe-explain approach of White and Gunstone. The activities involved central thermal concepts that focused on heat conduction and dissipative processes such as friction and collisions. Students' interactions within each activity were videotaped and the analysis focuses on how a purposefully selected group of three students engaged with the exercises. As the basis for an interpretative study, a "thick" narrative description of the students' epistemological and conceptual framing of the exercises and how they took advantage of the disciplinary affordance of IR cameras in the thermal domain is provided. Findings include that the students largely shared their conceptual framing of the four activities, but differed among themselves in their epistemological framing, for instance, in how far they found it relevant to digress from the laboratory instructions when inquiring into thermal phenomena. In conclusion, the study unveils the disciplinary affordances of infrared cameras, in the sense of their use in providing access to knowledge about macroscopic thermal science.
Who Goes There? Linking Remote Cameras and Schoolyard Science to Empower Action
ERIC Educational Resources Information Center
Tanner, Dawn; Ernst, Julie
2013-01-01
Taking Action Opportunities (TAO) is a curriculum that combines guided reflection, a focus on the local environment, and innovative use of wildlife technology to empower student action toward improving the environment. TAO is experientially based and uses remote cameras as a tool for schoolyard exploration. Through TAO, students engage in research…
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
Optical design of the SuMIRe/PFS spectrograph
NASA Astrophysics Data System (ADS)
Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.
2014-07-01
The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range [0.38-1.26] um with a resolving power ranging between 2000 and 4000. A medium resolution mode is also implemented to reach a resolving power of 5000 at 0.8 um. Each spectrograph is made of 4 optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split by two large dichroics; in each arm, the light is dispersed by large VPH gratings (about 280x280 mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacturing of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e., a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We first describe the global optical design of the spectrograph, then focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.
Mechanically assisted liquid lens zoom system for mobile phone cameras
NASA Astrophysics Data System (ADS)
Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.
2006-08-01
Camera systems with small form factor are an integral part of today's mobile phones, which recently feature auto-focus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides virtually no deterioration, easy control electronics, and simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5, considering state-of-the-art commercially available liquid lens products. The lens possesses auto-focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the auto-focus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens, with f/3.5, provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).
A zonal wavefront sensor with multiple detector planes
NASA Astrophysics Data System (ADS)
Pathak, Biswajit; Boruah, Bosanta R.
2018-03-01
A conventional zonal wavefront sensor estimates the wavefront from the data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor which comprises multiple detector planes instead of a single detector plane. The proposed sensor is based on an array of custom-designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array, and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras in the sensor can offer several advantages in wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that cannot be achieved by the conventional system. It can also provide enhanced dynamic range and reduced crosstalk. We present results from a proof-of-principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.
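Centroid detection of the focused spots is the core measurement in any zonal sensor; a generic intensity-weighted centroid sketch (the thresholding scheme is an assumption, not the authors' implementation):

    import numpy as np

    # Intensity-weighted centroid of one focal spot (generic sketch).
    def spot_centroid(subimage, threshold=0.0):
        """Return the (x, y) centroid of a 2-D spot image in pixels."""
        img = np.asarray(subimage, dtype=float)
        img = np.where(img > threshold, img - threshold, 0.0)  # cut background
        total = img.sum()
        if total == 0:
            raise ValueError("no signal above threshold")
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total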
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras are still costly, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments, based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases), covering a variety of environments, show that the method achieves excellent performance compared to existing methods.
Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund
NASA Technical Reports Server (NTRS)
Hagyard, Mona J.
1992-01-01
The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.
ERIC Educational Resources Information Center
Haglund, Jesper; Melander, Emil; Weiszflog, Matthias; Andersson, Staffan
2017-01-01
Background: University physics students were engaged in open-ended thermodynamics laboratory activities with a focus on understanding a chosen phenomenon or the principle of laboratory apparatus, such as thermal radiation and a heat pump. Students had access to handheld infrared (IR) cameras for their investigations. Purpose: The purpose of the…
A Framework for People Re-Identification in Multi-Camera Surveillance Systems
ERIC Educational Resources Information Center
Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud
2017-01-01
People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…
The Zwicky Transient Facility Camera
NASA Astrophysics Data System (ADS)
Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.
2016-08-01
The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
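The calibration idea reduces to fitting camera-model parameters so that the modeled sun track matches the detected sun positions. A simplified sketch assuming the equisolid-angle projection r = 2f sin(theta/2) and a reduced parameter set (focal length in pixels, principal point, azimuth offset), which stands in for the paper's full camera model:

    import numpy as np
    from scipy.optimize import least_squares

    # Residuals between modeled and detected sun positions (sketch).
    def residuals(params, sun_zenith, sun_azimuth, sun_px):
        f, cx, cy, az0 = params
        r = 2.0 * f * np.sin(sun_zenith / 2.0)      # equisolid projection
        u = cx + r * np.sin(sun_azimuth + az0)
        v = cy + r * np.cos(sun_azimuth + az0)
        return np.concatenate([u - sun_px[:, 0], v - sun_px[:, 1]])

    # sun_zenith, sun_azimuth (radians) from a solar position algorithm;
    # sun_px from image processing; then, for example:
    # fit = least_squares(residuals, x0=[700.0, 960.0, 960.0, 0.0],
    #                     args=(sun_zenith, sun_azimuth, sun_px))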
Lensless imaging for wide field of view
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Yagi, Yasushi
2015-02-01
It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving wide FOV, such as attaching a fisheye lens and convex mirrors, require a trade-off between optics size and the FOV. We propose camera optics that achieve a wide FOV, and are at the same time small and lightweight. The proposed optics are a completely lensless and catoptric design. They contain four mirrors, two for wide viewing, and two for focusing the image on the camera sensor. The proposed optics are simple and can be simply miniaturized, since we use only mirrors for the proposed optics and the optics are not susceptible to chromatic aberration. We have implemented the prototype optics of our lensless concept. We have attached the optics to commercial charge-coupled device/complementary metal oxide semiconductor cameras and conducted experiments to evaluate the feasibility of our proposed optics.
NASA Astrophysics Data System (ADS)
Wang, Xiaoyong; Guo, Chongling; Hu, Yongli; He, Hongyan
2017-11-01
The primary and secondary mirrors of an on-axis three-mirror anastigmatic (TMA) space camera are connected and supported by the front mirror-body structure, which affects both the imaging performance and the stability of the camera. In this paper, a carbon fiber reinforced plastic (CFRP) thin-walled cylinder and titanium alloy connecting rods are used for the front mirror-body opto-mechanical structure of a long-focus on-axis TMA space camera optical system. The front mirror-body structure was then optimized by finite element analysis (FEA). The performance of the front mirror-body structure was verified by mechanical and vacuum experiments, confirming the validity of the engineering design.
NASA Astrophysics Data System (ADS)
Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.
2017-08-01
The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
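The quoted optical figures are internally consistent: dividing each field of view by the 1600-pixel image width reproduces the stated IFOVs, as this quick arithmetic check shows:

    import math

    # Cross-check: IFOV ~ FOV / pixel count for the two Mastcams.
    for name, fov_deg in (("M-34", 20.0), ("M-100", 6.8)):
        ifov_urad = math.radians(fov_deg) / 1600 * 1e6
        print(name, round(ifov_urad), "urad")  # ~218 and ~74 urad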
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g. different brands, resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
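A usage sketch of the described workflow; every name here (CameraClient, select_cameras, run_analysis) is invented for illustration and is not the system's actual API, and the per-frame analysis assumes OpenCV:

    import cv2  # assumed dependency for the user's single-frame analysis

    def analyze_frame(frame):
        """User-supplied single-frame analysis, e.g. edge density."""
        edges = cv2.Canny(frame, 100, 200)  # frame: 8-bit grayscale image
        return float(edges.mean())

    # Hypothetical calls mirroring the paper's description:
    # client = CameraClient(api_key="...")
    # cameras = client.select_cameras(min_resolution=(640, 480))
    # results = client.run_analysis(cameras, analyze_frame)  # cloud-managed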
Application of infrared camera to bituminous concrete pavements: measuring vehicle
NASA Astrophysics Data System (ADS)
Janků, Michal; Stryk, Josef
2017-09-01
Infrared thermography (IR) has been used for decades in certain fields, but the technological level of measuring devices has not been sufficient for some applications. Over recent years, good quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has opened infrared thermography to new fields and a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, digital camera, and GPS sensor, was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.
The sensory power of cameras and noise meters for protest surveillance in South Korea.
Kim, Eun-Sung
2016-06-01
This article analyzes sensory aspects of material politics in social movements, focusing on two police tools: evidence-collecting cameras and noise meters for protest surveillance. Through interviews with Korean political activists, this article examines the relationship between power and the senses in the material culture of Korean protests and asks why cameras and noise meters appeared in order to control contemporary peaceful protests in the 2000s. The use of cameras and noise meters in contemporary peaceful protests evidences the exercise of what Michel Foucault calls 'micro-power'. Building on material culture studies, this article also compares the visual power of cameras with the sonic power of noise meters, in terms of a wide variety of issues: the control of things versus words, impacts on protest size, differential effects on organizers and participants, and differences in timing regarding surveillance and punishment.
NASA Astrophysics Data System (ADS)
Simon, Eric; Craen, Pierre; Gaton, Hilario; Jacques-Sermet, Olivier; Laune, Frédéric; Legrand, Julien; Maillard, Mathieu; Tallaron, Nicolas; Verplanck, Nicolas; Berge, Bruno
2010-05-01
A new generation of liquid lenses based on electrowetting has been developed, using a multi-electrode design that enables optical tilt and focus corrections to be induced in the same component. The basic principle is to rely on a conical shape for supporting the liquid interface, the cone providing a restoring force that returns the liquid-liquid interface to the center position. The multi-electrode design makes it possible to induce an average tilt of the liquid-liquid interface when a bias voltage is applied across the different electrodes. This tilt is reversible, vanishing when the voltage bias is cancelled. A possible application of this new lens component is the realization of a miniature camera featuring auto-focus and optical image stabilization (OIS) without any moving mechanical parts. Experimental measurements of the actual performance of the liquid lens component are presented: focus and tilt amplitude, residual optical wavefront error, and response time.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
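In the ray-optics view the tutorial describes, refocusing reduces to shifting each angular sample of the light field in proportion to its aperture offset and averaging; a minimal shift-and-add sketch (integer-pixel shifts for brevity, where a real implementation would interpolate):

    import numpy as np

    # Shift-and-add refocusing of a 4-D light field L(u, v, x, y) (sketch).
    def refocus(lightfield, alpha):
        """lightfield: (U, V, H, W) array; alpha: relative focal depth."""
        U, V, H, W = lightfield.shape
        out = np.zeros((H, W))
        shift = 1.0 - 1.0 / alpha
        for u in range(U):
            for v in range(V):
                du = int(round((u - U // 2) * shift))
                dv = int(round((v - V // 2) * shift))
                out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)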
Miniaturized unified imaging system using bio-inspired fluidic lens
NASA Astrophysics Data System (ADS)
Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa
2008-08-01
Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems have not been significantly different from conventional cameras. The only established method to achieve focusing is by varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of the optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between the central vision and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design of the world's smallest surgical camera with 3X optical zoom capabilities is also demonstrated using the approach of hybrid lenses.
Optical correlator method and apparatus for particle image velocimetry processing
NASA Technical Reports Server (NTRS)
Farrell, Patrick V. (Inventor)
1991-01-01
Young's fringes are produced from a double exposure image of particles in a flowing fluid by passing laser light through the film and projecting the light onto a screen. A video camera receives the image from the screen and controls a spatial light modulator. The spatial modulator has a two dimensional array of cells whose transmissiveness is controlled in relation to the brightness of the corresponding pixel of the video camera image of the screen. A collimated beam of laser light is passed through the spatial light modulator to produce a diffraction pattern which is focused onto another video camera, with the output of the camera being digitized and provided to a microcomputer. The diffraction pattern formed when the laser light is passed through the spatial light modulator and focused to a point corresponds to the two dimensional Fourier transform of the Young's fringe pattern projected onto the screen. This invention was made with U.S. Government support awarded by the Department of the Army (DOD) and NASA grant number(s): DOD #DAAL03-86-K0174 and NASA #NAG3-718. The U.S. Government has certain rights in this invention.
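The correlator's output, the 2-D Fourier transform of the fringe pattern, peaks at a spatial frequency inversely proportional to the fringe spacing and hence proportional to the particle displacement. A digital counterpart of that measurement, as a sketch:

    import numpy as np

    # Dominant spatial frequency of a Young's fringe image (sketch).
    def fringe_peak(fringes_img):
        """Return (fy, fx) in cycles/pixel for the strongest fringe component."""
        spec = np.abs(np.fft.fftshift(np.fft.fft2(fringes_img)))
        cy, cx = np.array(spec.shape) // 2
        spec[cy - 2:cy + 3, cx - 2:cx + 3] = 0  # suppress the DC term
        py, px = np.unravel_index(np.argmax(spec), spec.shape)
        return (py - cy) / spec.shape[0], (px - cx) / spec.shape[1]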
An affordable wearable video system for emergency response training
NASA Astrophysics Data System (ADS)
King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.
2009-02-01
Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.
One-Meter Telescope in Kolonica Saddle - 4 Years of Operation
NASA Astrophysics Data System (ADS)
Kudzej, I.; Dubovsky, P. A.
2010-12-01
The current technical status of the 1 meter Vihorlat National Telescope (VNT) at the Astronomical Observatory at Kolonica Saddle is presented: the Cassegrain and Nasmyth foci, the autoguiding system, computer-controlled focusing and fine movements, and other recently achieved improvements. For the two-channel photoelectric photometer, a system of channel calibration based on an artificial light source is described. For the FLI PL1001E CCD camera currently installed at the Cassegrain focus, we present transformation coefficients from our instrumental photometric system to the international BVRI system. The measurements were made during regular observations when good photometry of constant field stars was available. Before the FLI camera was acquired, we used an SBIG ST9 camera; transformation coefficients for this instrument are presented as well. In the second part of the paper we present results of variable star observations with the 1 meter telescope over the recent four years. The first experimental electronic measurements were made in 2006, both with CCD cameras and with the two-channel photoelectric photometer. The regular observing program has been in operation since 2007. There are only a few stars suitable for observation with the two-channel photoelectric photometer; generally, the photometer is preferable when fast brightness changes (on a time scale of seconds) must be recorded, so the majority of observations are done with CCD detectors. We present a brief overview of the most important observing programs: long-term monitoring of selected intermediate polars and eclipse observations of SW Sex stars. Occasional observing campaigns were performed on several interesting objects: OT J071126.0+440405, V603 Aql, V471 Tau eclipse timings, and Z And in outburst.
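Transformation coefficients of the kind reported are conventionally obtained by a least-squares fit of standard-minus-instrumental magnitude against a color index; a generic sketch of that fit (the data shapes are placeholders, not the paper's actual procedure or values):

    import numpy as np

    # Fit V = v + T*(B-V) + Z from constant field stars (generic sketch).
    def fit_transformation(v_instr, V_std, color_std):
        """v_instr: instrumental magnitudes; V_std, color_std: catalog values."""
        A = np.column_stack([color_std, np.ones_like(color_std)])
        (T, Z), *_ = np.linalg.lstsq(A, np.asarray(V_std) - np.asarray(v_instr),
                                     rcond=None)
        return T, Z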
Review: comparison of PET rubidium-82 with conventional SPECT myocardial perfusion imaging
Ghotbi, Adam A; Kjær, Andreas; Hasbak, Philip
2014-01-01
Nuclear cardiology has for many years been focused on gamma camera technology. With ever improving cameras and software applications, this modality has developed into an important assessment tool for ischaemic heart disease. However, the development of new perfusion tracers has been scarce. While cardiac positron emission tomography (PET) so far largely has been limited to centres with on-site cyclotron, recent developments with generator produced perfusion tracers such as rubidium-82, as well as an increasing number of PET scanners installed, may enable a larger patient flow that may supersede that of gamma camera myocardial perfusion imaging. PMID:24028171
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
Temporal Coding of Volumetric Imagery
NASA Astrophysics Data System (ADS)
Llull, Patrick Ryan
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
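The temporal coding idea behind CACTI can be summarized by its forward model: each of T video frames is modulated by a shifted binary mask and the results are summed into one coded snapshot. A minimal sketch of that model (the one-row-per-frame mask translation is assumed for illustration):

    import numpy as np

    # CACTI-style coded snapshot: y = sum_t C_t * x_t (forward-model sketch).
    def cacti_snapshot(video, mask):
        """video: (T, H, W) frames; mask: (H, W) binary code, translated
        by one row per frame to emulate the moving coded aperture."""
        T = video.shape[0]
        return sum(np.roll(mask, t, axis=0) * video[t] for t in range(T))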
Numerical analysis of wavefront measurement characteristics by using plenoptic camera
NASA Astrophysics Data System (ADS)
Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun
2016-01-01
To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the atmospheric turbulence in an astronomical observation. To recover the wave-front phase tomographically, a method for simultaneous, large field-of-view (FOV), multi-perspective wave-front detection is needed, and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. We built a theoretical model and simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance with the types of wave-front aberration corresponding to the intended applications. Finally, we performed multi-perspective wave-front sensing with the plenoptic camera in simulation. This study of wave-front measurement characteristics is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to address the problem of large-FOV wave-front detection and can be used for adaptive optics in giant telescopes.
Science, conservation, and camera traps
Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas
2011-01-01
Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.
Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system
NASA Technical Reports Server (NTRS)
Stramler, J. H., Jr.; Woolford, B. J.
1983-01-01
The basic Selective Spot Recognition (SELSPOT) system is essentially a system which uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and some A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.
Curiosity's Mars Hand Lens Imager (MAHLI) Investigation
Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter
2012-01-01
The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts, a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.
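Focus merging of the kind MAHLI performs onboard can be illustrated by a generic per-pixel sharpness-selection scheme (Laplacian energy here; this is a textbook method, not the flight algorithm):

    import numpy as np
    from scipy.ndimage import laplace, maximum_filter

    # Merge a focus stack by keeping each pixel's sharpest slice (sketch).
    def focus_merge(stack):
        """stack: (N, H, W) grayscale focus stack -> (H, W) merged image."""
        stack = np.asarray(stack, dtype=float)
        sharpness = np.array([maximum_filter(np.abs(laplace(img)), size=5)
                              for img in stack])
        best = np.argmax(sharpness, axis=0)
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]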
Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera
NASA Astrophysics Data System (ADS)
Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund
2016-03-01
We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney, and thyroid specimens, each several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field, and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used to simulate a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
Impact of multi-focused images on recognition of soft biometric traits
NASA Astrophysics Data System (ADS)
Chiesa, V.; Dugelay, J. L.
2016-09-01
In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light and pose variations have been studied extensively, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras makes it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras record not only RGB values but also information related to the direction of light rays: the additional data make it possible to render the image at different focal planes after acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocus on gender recognition and age estimation. Evaluations are computed with up-to-date, competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.
Image quality testing of assembled IR camera modules
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik
2013-10-01
Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirement for imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to the more traditional test methods like minimum resolvable temperature difference (MRTD) which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis and optical parameters like e.g. effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors, electrical interfaces and last but not least the suitability for fully automated measurements in mass production.
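The MTF measurement at the heart of such test stations is, in simplified form, the normalized Fourier magnitude of the line spread function, which is itself the derivative of an edge profile; a stripped-down sketch of that chain (real slanted-edge procedures add oversampling and windowing):

    import numpy as np

    # MTF from a 1-D edge trace: edge -> LSF -> |FFT| (simplified sketch).
    def mtf_from_edge(edge_profile):
        lsf = np.gradient(np.asarray(edge_profile, dtype=float))
        mtf = np.abs(np.fft.rfft(lsf))
        return mtf / mtf[0]  # normalize so MTF(0) = 1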
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
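Because each projected line defines a known plane in space, the 3-D surface point follows from intersecting a camera pixel's viewing ray with that plane; a minimal triangulation sketch assuming pre-calibrated intrinsics and plane parameters (a generic formulation, not the patent's exact procedure):

    import numpy as np

    # Ray-plane triangulation for structured light (sketch).
    def triangulate(pixel, K_inv, plane_n, plane_d):
        """pixel: (u, v); K_inv: inverse 3x3 intrinsics; plane n.X = d in
        camera coordinates. Returns the 3-D point on the lit surface."""
        ray = K_inv @ np.array([pixel[0], pixel[1], 1.0])  # viewing direction
        t = plane_d / (plane_n @ ray)
        return t * ray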
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
Our aim was to reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee being operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the cameras. Without the laser pointer, the camera had to be moved a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to a mean of 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System
NASA Astrophysics Data System (ADS)
Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.
2017-12-01
The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6-2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter-1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and for aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the Sun onto a plate covered with a phosphor coating that absorbs EUV photons and then fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis mounted camera with 600-line resolution NTSC video for extremely low light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which sets the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extreme low-light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps towards implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.
Key, Douglas J
2014-07-01
This study incorporates concurrent thermal camera imaging as a means both of safely extending the length of each treatment session within skin surface temperature tolerances and of demonstrating not only the homogeneous nature of skin surface heating but also the distribution of that heating pattern as a reflection of the localization of subcutaneous fat. Five subjects were selected because of a desire to reduce abdomen and flank fullness. Full treatment field thermal camera imaging was captured at 15-minute intervals, specifically at 15, 30, and 45 minutes into active treatment, with the purpose of monitoring skin temperature and avoiding any patterns of skin temperature excess. Peak areas of heating corresponded anatomically to the patients' areas of greatest fat excess, i.e., visible "pinchable" fat. Preliminary observation of high-resolution thermal camera imaging used concurrently with focused field RF therapy shows peak skin heating patterns overlying the areas of greatest fat excess.
Accuracy Assessment of GoPro Hero 3 (Black) Camera in Underwater Environment
NASA Astrophysics Data System (ADS)
Helmholz, P.; Long, J.; Munsie, T.; Belton, D.
2016-06-01
Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, often costing less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium from air to water can in turn counteract the lens distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 Black). The test sets included controlled handling, where the camera was simply dunked into the water tank, at 7 MP and 12 MP resolution, and rough handling, where the camera was shaken and removed from its waterproof case, at 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The check point residuals for the 7 MP test series showed a largest RMS value of only 0.450 mm and a largest maximum residual of only 2.5 mm. For the 12 MP test series the maximum RMS value was 0.653 mm.
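The underlying calibration step can be illustrated with a standard target-based routine; the sketch below uses OpenCV's checkerboard calibration and assumes a folder of underwater target images. The folder name, board geometry and square size are placeholders, and the study's own calibration frame, software and parameter set differ.

```python
import glob
import cv2
import numpy as np

ROWS, COLS, SQUARE_MM = 6, 9, 25.0   # inner corners and square size (placeholders)

# 3-D object points for one view of the flat target, in millimetres.
objp = np.zeros((ROWS * COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("tank_images/*.jpg"):  # placeholder folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (COLS, ROWS))
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]

# Camera matrix K, distortion coefficients, and per-view poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("reprojection RMS (px):", rms)
```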
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
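A toy sketch of this byte-coded dispatch scheme: incoming bytes are compared against a stored command table and the matching switch is opened or closed. All codes and mappings here are invented for illustration; the actual ROM contents are not given in the abstract.

```python
# Hypothetical byte-coded command table, mimicking the ROM lookup the
# abstract describes: each received byte maps to (switch index, close?).
COMMAND_TABLE = {
    0x10: (0, True),   # pan left   -> close switch 0
    0x11: (0, False),  # stop pan   -> open switch 0
    0x20: (5, True),   # zoom in    -> close switch 5
    0x21: (5, False),  # stop zoom  -> open switch 5
}

switches = [False] * 48  # six 8-switch output ports

def handle_byte(code: int) -> None:
    """Compare an incoming byte against the stored table and
    close or open the corresponding switch."""
    if code in COMMAND_TABLE:
        idx, close = COMMAND_TABLE[code]
        switches[idx] = close
    # Unknown codes are ignored, leaving all switches unchanged.

for b in (0x10, 0x11, 0x20):   # bytes as they might arrive over RS-232
    handle_byte(b)
print(switches[0], switches[5])  # False True
```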
Quantifying external focus of attention in sailing by means of action sport cameras.
Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Van Beek, Morris W; Böcker, Kaj; Savelsbergh, Geert J P
2016-08-01
The aim of the current study was twofold: (1) to validate the use of action sport cameras for quantifying focus of visual attention in sailing and (2) to apply this method to examine whether an external focus of attention is associated with better performance in upwind sailing. To test the validity of this novel quantification method, we first calculated the agreement between gaze location measures and head orientation measures in 13 sailors sailing upwind during training regattas using a head-mounted eye tracker. The results confirmed that for measuring visual focus of attention in upwind sailing, the agreement between the two measures was high (intraclass correlation coefficient (ICC) = 0.97) and the 95% limits of agreement were acceptable (between -8.0% and 14.6%). In a next step, we quantified the focus of visual attention while sailing upwind as fast as possible by means of an action sport camera. We captured sailing performance, operationalised as boat speed in the direction of the wind, and environmental conditions using a GPS, compass and wind meter. Four trials, each lasting 1 min, were analysed for each of 15 sailors, resulting in a total of 30 upwind speed trials on port tack and 30 upwind speed trials on starboard tack. The results revealed that in sailing - within constantly changing environments - the focus of attention is not a significant predictor of upwind sailing performance. This implies that neither an external nor an internal focus of attention was per se correlated with better performance. Rather, relatively large interindividual differences seem to indicate that different visual attention strategies can lead to similar performance outcomes.
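The 95% limits of agreement quoted above come from a standard Bland-Altman calculation, sketched here with toy paired data; the ICC is a separate mixed-model statistic not reproduced in this sketch.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between paired measures."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias - half_width, bias + half_width

# Toy paired percentages (gaze-based vs head-orientation-based measures).
gaze = np.array([62.0, 55.0, 71.0, 48.0, 66.0])
head = np.array([60.0, 57.0, 69.0, 50.0, 63.0])
print(limits_of_agreement(gaze, head))
```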
Comparison of 10 digital SLR cameras for orthodontic photography.
Bister, D; Mordarai, F; Aveling, R M
2006-09-01
Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour-reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in factory default setting or 'automatic' mode: this mode was not present for some cameras (Nikon, Fujifilm); led to overexposure (Olympus) or poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images, once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction, size and brightness of the viewfinders. The Nikon D100 and Fujifilm S 3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to strain errors of more than 50 μɛ, which significantly affect the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
Uav Cameras: Overview and Geometric Calibration Benchmark
NASA Astrophysics Data System (ADS)
Cramer, M.; Przybilla, H.-J.; Zurhorst, A.
2017-08-01
Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
Three dimensional measurement with an electrically tunable focused plenoptic camera
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2017-03-01
We present a liquid crystal microlens array (LCMLA) with an arrayed microhole pattern electrode, based on nematic liquid crystal materials and fabricated using traditional UV photolithography and wet etching. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental outcome shows that the focal length of the LCMLA can be tuned easily by changing only the root mean square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters such as three-dimensional (3D) depth, positioning, and motion expression are given. The depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of the chosen objects.
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
A high-speed imaging facility is important and necessary for the formation of a time-resolved measurement system with multi-framing capability. A framing camera which satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each of 1024×1024 pixels, can be captured simultaneously with the developed camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.
Utilizing Light-field Imaging Technology in Neurosurgery.
Chen, Brian R; Buchanan, Ian A; Kellis, Spencer; Kramer, Daniel; Ohiorhenuan, Ifije; Blumenfeld, Zack; Grisafe Ii, Dominic J; Barbaro, Michael F; Gogia, Angad S; Lu, James Y; Chen, Beverly B; Lee, Brian
2018-04-10
Traditional still cameras can only focus on a single plane for each image while rendering everything outside of that plane out of focus. However, new light-field imaging technology makes it possible to adjust the focus plane after an image has already been captured. This technology allows the viewer to interactively explore an image with objects and anatomy at varying depths and clearly focus on any feature of interest by selecting that location during post-capture viewing. These images with adjustable focus can serve as valuable educational tools for neurosurgical residents. We explore the utility of light-field cameras and review their strengths and limitations compared to other conventional types of imaging. The strength of light-field images is the adjustable focus, as opposed to the fixed-focus of traditional photography and video. A light-field image also is interactive by nature, as it requires the viewer to select the plane of focus and helps with visualizing the three-dimensional anatomy of an image. Limitations include the relatively low resolution of light-field images compared to traditional photography and video. Although light-field imaging is still in its infancy, there are several potential uses for the technology to complement traditional still photography and videography in neurosurgical education.
Malin, Michal C; Ravine, Michael A; Caplinger, Michael A; Tony Ghaemi, F; Schaffner, Jacob A; Maki, Justin N; Bell, James F; Cameron, James F; Dietrich, William E; Edgett, Kenneth S; Edwards, Laurence J; Garvin, James B; Hallet, Bernard; Herkenhoff, Kenneth E; Heydari, Ezat; Kah, Linda C; Lemmon, Mark T; Minitti, Michelle E; Olson, Timothy S; Parker, Timothy J; Rowland, Scott K; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J; Sumner, Dawn Y; Aileen Yingst, R; Duston, Brian M; McNair, Sean; Jensen, Elsa H
2017-08-01
The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from ~1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the ~2 m tall Remote Sensing Mast, have a 360° azimuth and ~180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at ~66 cm above the surface. Its fixed focus lens is in focus from ~2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of ~70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
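The quoted IFOV values are consistent with the detector pixel pitch divided by the focal length. Taking the 7.4 μm pitch of the shared focal plane design (stated for MAHLI elsewhere in this collection, and assumed here to apply to the Mastcams):

\[ \mathrm{IFOV} = \frac{p}{f}, \qquad \frac{7.4\ \mu\mathrm{m}}{34\ \mathrm{mm}} \approx 218\ \mu\mathrm{rad}, \qquad \frac{7.4\ \mu\mathrm{m}}{100\ \mathrm{mm}} = 74\ \mu\mathrm{rad}. \]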
Matrix Determination of Reflectance of Hidden Object via Indirect Photography
2012-03-01
…the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and… Bidirectional Reflectance Distribution Function (BRDF). Figure 1: Indirect photography with camera field of view dictated by point of illumination. …would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind…
Metrology camera system of prime focus spectrograph for Subaru telescope
NASA Astrophysics Data System (ADS)
Wang, Shiang-Yu; Chou, Richard C. Y.; Huang, Pin-Jie; Ling, Hung-Hsu; Karr, Jennifer; Chang, Yin-Chang; Hu, Yen-Sang; Hsu, Shu-Fu; Chen, Hsin-Yo; Gunn, James E.; Reiley, Dan J.; Tamura, Naoyuki; Takato, Naruhisa; Shimono, Atsushi
2016-08-01
The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2 m Subaru telescope. PFS will cover a 1.3 degree diameter field with 2394 fibers to complement the imaging capabilities of Hyper Suprime-Cam. To retain high throughput, the final positioning accuracy between the fibers and the observing targets of PFS is required to be less than 10 microns. The metrology camera system (MCS) serves as the optical encoder of the fiber motors for the configuring of fibers. MCS provides the fiber positions with less than 5 microns of error over the 45 cm focal plane. The information from MCS will be fed into the fiber positioner control system for closed-loop control. MCS will be located at the Cassegrain focus of the Subaru telescope in order to cover the whole focal plane with a single 50-megapixel Canon CMOS camera. It is a 380 mm Schmidt-type telescope which generates a uniform spot size with a 10 micron FWHM across the field for reasonable sampling of the point spread function. Carbon fiber tubes are used to provide a stable structure over the operating conditions without focus adjustments. The CMOS sensor can be read in 0.8 s to reduce the overhead for the fiber configuration. The positions of all fibers can be obtained within 0.5 s after the readout of the frame. This enables the overall fiber configuration to take less than 2 minutes. MCS will be installed inside a standard Subaru Cassegrain box. All components that generate heat are located inside a glycol-cooled cabinet to reduce possible image motion due to heat. The optics and camera for MCS have been delivered and tested. The mechanical parts and supporting structure are ready as of spring 2016. The integration of MCS will start in the summer of 2016. In this report, the performance of the MCS components, the alignment and testing procedure, as well as the status of the PFS MCS will be presented.
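Turning spot images into fiber positions at this speed comes down to fast centroiding. Below is a minimal sketch of intensity-weighted centroiding over thresholded spots using scipy; names and the threshold are illustrative, and the actual PFS pipeline is more elaborate.

```python
import numpy as np
from scipy import ndimage

def spot_centroids(img, threshold):
    """Intensity-weighted centroids of all spots above a threshold.

    img: 2-D array from the metrology camera.
    Returns an (N, 2) array of (row, col) centroids in pixels.
    """
    labels, n = ndimage.label(img > threshold)
    # center_of_mass weights each labelled blob by pixel intensity.
    return np.array(ndimage.center_of_mass(img, labels, range(1, n + 1)))

# Toy frame with two Gaussian-like spots.
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((yy - 20.3)**2 + (xx - 15.7)**2) / 4.0)
       + np.exp(-((yy - 40.0)**2 + (xx - 50.2)**2) / 4.0))
print(spot_centroids(img, 0.1))  # close to (20.3, 15.7) and (40.0, 50.2)
```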
NASA Astrophysics Data System (ADS)
Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David
2012-06-01
Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating the different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
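A four-step phase-shifting decode is one common instance of such display coding; the sketch below recovers the wrapped phase per pixel from four 90°-shifted sinusoidal patterns. The paper's exact coding scheme is not specified, so this is an assumed variant.

```python
import numpy as np

def decode_four_step(i0, i1, i2, i3):
    """Wrapped phase from four sinusoidal patterns shifted by 90 degrees.

    With I_k = A + B*cos(phi + k*pi/2):
    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), so
    phi = atan2(I3 - I1, I0 - I2), independent of A and B per pixel.
    """
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)

# Check on a synthetic fringe field.
phi_true = np.linspace(-np.pi, np.pi, 256).reshape(1, -1)
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = decode_four_step(*frames)
print(np.allclose(phi, phi_true))  # True (up to wrapping)
```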
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques directly process and analyze MPEG-1 motion vectors, without the need for video decompression.
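The second task can be illustrated with a least-squares fit of a global motion model to block motion vectors. The sketch below assumes a simple pan-plus-zoom model; the paper's exact parameterization, including tilt, is not given in the abstract.

```python
import numpy as np

def fit_pan_zoom(xs, ys, dxs, dys, cx, cy):
    """Least-squares fit of (pan_x, pan_y, zoom) to block motion vectors.

    Model: dx = pan_x + zoom*(x - cx), dy = pan_y + zoom*(y - cy).
    """
    n = xs.size
    a = np.zeros((2 * n, 3))
    a[:n, 0] = 1.0
    a[:n, 2] = xs - cx          # dx rows
    a[n:, 1] = 1.0
    a[n:, 2] = ys - cy          # dy rows
    b = np.concatenate([dxs, dys])
    params, *_ = np.linalg.lstsq(a, b, rcond=None)
    return params               # (pan_x, pan_y, zoom)

# Synthetic vectors: pan (3, -1) px plus 1% zoom about the image center.
xs, ys = np.meshgrid(np.arange(0, 352, 16), np.arange(0, 288, 16))
xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
dxs = 3.0 + 0.01 * (xs - 176)
dys = -1.0 + 0.01 * (ys - 144)
print(fit_pan_zoom(xs, ys, dxs, dys, 176, 144))  # ~[ 3. -1.  0.01]
```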
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rilling, M; Centre de Recherche sur le Cancer, Hôtel-Dieu de Québec, Quebec City, QC; Département de radio-oncologie, CHU de Québec, Quebec City, QC
2015-06-15
Purpose: The purpose of this work is to simulate a multi-focus plenoptic camera used as the measuring device in a real-time three-dimensional scintillation dosimeter. Simulating and optimizing this realistic optical system will bridge the technological gap between concept validation and a clinically viable tool that can provide highly efficient, accurate and precise measurements for dynamic radiotherapy techniques. Methods: The experimental prototype, previously developed for proof-of-concept purposes, uses an off-the-shelf multi-focus plenoptic camera. With an array of interleaved microlenses of different focal lengths, this camera records spatial and angular information of light emitted by a plastic scintillator volume. The three distinct microlens focal lengths were determined experimentally for use as baseline parameters by measuring image-to-object magnification for different distances in object space. A simulated plenoptic system was implemented using the non-sequential ray tracing software Zemax: this tool allows complete simulation of multiple optical paths by modeling interactions at interfaces such as scatter, diffraction, reflection and refraction. The active sensor was modeled based on the camera manufacturer specifications as a 2048×2048 sensor with 5 µm pixel pitch. Planar light sources, simulating the plastic scintillator volume, were employed for ray tracing simulations. Results: The microlens focal lengths were determined to be 384, 327 and 290 µm. A realistic multi-focus plenoptic system, with independently defined and optimizable specifications, was fully simulated. An f/2.9, 54 mm focal length Double Gauss objective was modeled as the system's main lens. A three-focal-length hexagonal microlens array of 250 µm thickness was designed, acting as an image-relay system between the main lens and sensor. Conclusion: Simulation of a fully modeled multi-focus plenoptic camera enables the decoupled optimization of the main lens and microlens specifications. This work leads the way to improving the 3D dosimeter's achievable resolution, efficiency and build, providing a quality assurance tool fully meeting clinical needs. M.R. is financially supported by a Master's Canada Graduate Scholarship from the NSERC. This research is also supported by the NSERC Industrial Research Chair in Optical Design.
PRIMAS: a real-time 3D motion-analysis system
NASA Astrophysics Data System (ADS)
Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans
1994-03-01
The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
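The 3D reconstruction step for one matched marker can be illustrated with standard linear (DLT) triangulation from two calibrated views; the sketch below is generic textbook material, not PRIMAS's own algorithm.

```python
import numpy as np

def triangulate(p1, p2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two views.

    p1, p2: 3x4 camera projection matrices.
    uv1, uv2: (u, v) image coordinates of the matched marker.
    Returns the 3-D point in world coordinates.
    """
    a = np.vstack([
        uv1[0] * p1[2] - p1[0],
        uv1[1] * p1[2] - p1[1],
        uv2[0] * p2[2] - p2[0],
        uv2[1] * p2[2] - p2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(a)
    x = vt[-1]
    return x[:3] / x[3]

# Two toy cameras a baseline apart, both looking down +Z.
p1 = np.hstack([np.eye(3), np.zeros((3, 1))])
p2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
xw = np.array([0.2, -0.1, 4.0, 1.0])
uv1 = (p1 @ xw)[:2] / (p1 @ xw)[2]
uv2 = (p2 @ xw)[:2] / (p2 @ xw)[2]
print(triangulate(p1, p2, uv1, uv2))  # ~[ 0.2 -0.1  4. ]
```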
Impact of New Camera Technologies on Discoveries in Cell Biology.
Stuurman, Nico; Vale, Ronald D
2016-08-01
New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy.
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by measurement of subjective quality, or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
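A minimal sketch of building such a photospace distribution and weighting per-cell quality by it, with toy data standing in for the per-image illumination and distance estimates a tool like ImagePhi would produce; bin edges and quality scores are illustrative.

```python
import numpy as np

# Per-image estimates (toy data standing in for real usage statistics).
rng = np.random.default_rng(0)
lux = rng.lognormal(mean=4.0, sigma=1.5, size=5000)      # scene illumination
dist_m = rng.lognormal(mean=0.5, sigma=0.8, size=5000)   # subject distance

# Photospace: frequency of capture conditions over (illumination, distance).
lux_edges = np.array([1, 10, 100, 1000, 10000, 100000])
dist_edges = np.array([0.2, 0.5, 1, 2, 5, 50])
counts, _, _ = np.histogram2d(lux, dist_m, bins=[lux_edges, dist_edges])
photospace = counts / counts.sum()

# Photospace-weighted quality: weight a per-cell quality score (toy values,
# e.g. mean opinion scores) by how often users actually shoot in that cell.
quality = rng.uniform(2.0, 4.5, size=photospace.shape)
print("user-experienced quality:", (photospace * quality).sum())
```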
Nonholonomic camera-space manipulation using cameras mounted on a mobile base
NASA Astrophysics Data System (ADS)
Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun
1998-10-01
The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engaging a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path dependent' nature to nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
Electrowetting-based liquid lenses for endoscopy
NASA Astrophysics Data System (ADS)
Kuiper, S.
2011-03-01
In endoscopy there is a need for cameras with adjustable focus. In flexible and capsule endoscopes conventional focus systems are not suitable, because of restrictions in diameter and lens displacement range. In this paper it is shown that electrowetting-based variable-focus liquid lenses can provide a solution. A theoretical comparison is made between displacing and deforming lenses, and a demonstrator was built to prove the optical feasibility of focusing with liquid lenses in endoscopes.
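As background (not stated in the abstract): electrowetting lenses change focal length by modulating the contact angle of a conductive liquid on a dielectric-coated electrode, commonly described by the Young-Lippmann relation

\[ \cos\theta(V) = \cos\theta_Y + \frac{\varepsilon_0\varepsilon_r}{2\gamma d}\,V^2 \]

where θ_Y is the zero-voltage contact angle, ε_r and d are the dielectric's relative permittivity and thickness, γ is the liquid-liquid interfacial tension, and V is the applied voltage. Changing the contact angle changes the curvature of the liquid-liquid meniscus and hence the lens power with no mechanical displacement, which is what makes the approach attractive within endoscope diameter constraints.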
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the true scene luminance more faithfully. This overcomes the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
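A minimal sketch of the kind of per-camera radiometric correction such calibration enables - dark-frame subtraction plus flat-field (vignetting) normalization - with synthetic frames standing in for real calibration data:

```python
import numpy as np

def radiometric_correct(raw, dark, flat):
    """Correct one camera's frame using its calibration frames.

    raw:  frame to correct.
    dark: dark-current frame (same exposure, lens capped).
    flat: image of a uniform source, capturing the vignetting falloff.
    """
    flat_corr = flat.astype(float) - dark
    flat_norm = flat_corr / flat_corr.mean()          # unity-mean gain map
    return (raw.astype(float) - dark) / flat_norm

# Toy frames: a vignetted view of a uniform 100-count scene.
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
vignette = 1.0 - 0.4 * (xx**2 + yy**2)                # radial falloff
dark = np.full((256, 256), 8.0)
flat = 200.0 * vignette + dark
raw = 100.0 * vignette + dark
print(radiometric_correct(raw, dark, flat).std())     # ~0: falloff removed
```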
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well-corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
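The spatial/angular decomposition described above maps directly onto an array reshape for an idealized sensor; the sketch below assumes each lenslet covers an exact integer patch of pixels with no rotation, cropping, or vignetting (real plenoptic decoding must handle all three).

```python
import numpy as np

# Idealized plenoptic raw frame: a grid of S x T lenslets, each imaging
# the exit pupil onto a U x V patch of sensor pixels.
S, T = 30, 20        # lenslet grid  -> spatial samples (s, t)
U, V = 9, 9          # pixels under each lenslet -> angular samples (u, v)
raw = np.random.rand(T * V, S * U)   # placeholder sensor data

# Reshape so lf[s, t, u, v] = sample at lenslet (s, t), pupil position (u, v).
lf = (raw.reshape(T, V, S, U)        # split rows/cols into (lenslet, sub-pixel)
         .transpose(2, 0, 3, 1))     # -> (s, t, u, v)
print(lf.shape)                      # (30, 20, 9, 9)

# A fixed (u, v) slice is a sub-aperture image: the scene seen through one
# pupil position, the basic ingredient for refocusing and depth estimation.
sub_aperture = lf[:, :, 4, 4]
```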
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Center for Vocational Education.
This package of camera-ready masters is one of a set of twelve documents describing the Career Planning Support System (CPSS) and its use. (CPSS is a comprehensive guidance program management system which (1) provides techniques to improve a high school's career guidance program, (2) focuses on the skills students need to make decisions about and…
NASA Astrophysics Data System (ADS)
Raghavan, Ajay; Saha, Bhaskar
2013-03-01
Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus in wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are applied is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
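Two classic no-reference checks of the kind such a pipeline might chain together are sketched below: exposure via histogram extremes and focus via variance of the Laplacian. Thresholds, the file path, and the test ordering are illustrative; the paper's actual algorithms and sequencing logic are not specified in the abstract.

```python
import cv2
import numpy as np

def check_frame(gray, blur_thresh=100.0, clip_frac=0.25):
    """Flag common camera faults in one grayscale frame (toy heuristics)."""
    faults = []
    # Exposure: too many pixels piled up at the histogram extremes.
    if np.mean(gray <= 5) > clip_frac:
        faults.append("underexposed")
    if np.mean(gray >= 250) > clip_frac:
        faults.append("overexposed")
    # Focus: low variance of the Laplacian indicates a soft image.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
        faults.append("out_of_focus_or_blurred")
    return faults

frame = cv2.imread("site_check.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
if frame is not None:
    print(check_frame(frame))
```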
Automatic focusing system of BSST in Antarctic
NASA Astrophysics Data System (ADS)
Tang, Peng-Yi; Liu, Jia-Jing; Zhang, Guang-yu; Wang, Jian
2015-10-01
Automatic focusing (AF) technology plays an important role in modern astronomical telescopes. Based on the focusing requirements of BSST (Bright Star Survey Telescope) in Antarctica, an AF system is set up. In this design, OpenCV functions are used to find stars, and a selectable focus metric based on star area, half-flux diameter (HFD), or FWHM is computed. A curve-fitting method is then used to find the best focus position as the camera moves. This design is suitable for an unattended small telescope.
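The curve-fitting step can be illustrated by fitting a parabola to (focuser position, star size) samples and taking its vertex as best focus; the sketch below assumes the metric (HFD or FWHM, in pixels) has already been measured at each sampled position, and all numbers are synthetic.

```python
import numpy as np

def best_focus(positions, star_size):
    """Best focuser position from a V/U-shaped focus curve.

    positions: focuser positions sampled while stepping through focus.
    star_size: focus metric at each position (HFD or FWHM in pixels).
    """
    a, b, c = np.polyfit(positions, star_size, deg=2)
    return -b / (2.0 * a)          # vertex of the fitted parabola

# Toy focus run: true best focus at 1520 steps, with measurement noise.
pos = np.arange(1400, 1641, 20, dtype=float)
size = 3.0 + 1e-4 * (pos - 1520.0) ** 2
size += np.random.default_rng(1).normal(0, 0.05, size.size)
print(round(best_focus(pos, size)))   # ~1520
```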
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Even minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing and a tumbling oblique camera, was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique dataset ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Frascati, Joe; Driggers, Ronald
2018-04-01
Panoramic imaging is inherently wide field of view. High-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged on the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated commercially available cameras and lenses.
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating coherent conditioned optical radiation pattern, which maintains its sharpness over multiple axial distances, allowing an increased DFD working distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern-day digital cameras.
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks, and the possibility of such actions in the future, have forced the development of security systems for critical infrastructures that embrace sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now - based on the construction of a ring with two-zone fencing and visual cameras with illumination - is being displaced by multisensor systems consisting of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - cheap bolometric cameras recording the thermal contrast of a scene; and active ground radars - at microwave and millimetre wavelengths - that detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the installation conditions and the sensor parameters. This procedure enables us to construct a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatically guiding cameras to an object detected by the radar, tracking of the object and localization of the object on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, this system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
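MAHLI's onboard focus stack merging (z-stacking) can be illustrated with a per-pixel sharpest-source merge; the sketch below assumes a stack of aligned grayscale frames and uses a smoothed Laplacian as the sharpness cue (the flight implementation is MSSS's own and not described here).

```python
import numpy as np
from scipy import ndimage

def zstack_merge(stack):
    """Merge a focus stack by picking, per pixel, the sharpest frame.

    stack: array of shape (n_frames, h, w), aligned grayscale images.
    """
    stack = stack.astype(float)
    # Local sharpness: magnitude of the Laplacian, smoothed to reduce noise.
    sharp = np.stack([
        ndimage.gaussian_filter(np.abs(ndimage.laplace(frame)), sigma=2)
        for frame in stack
    ])
    best = np.argmax(sharp, axis=0)            # sharpest frame index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy stack: two frames, each sharp in a different half (illustration only).
a = np.random.rand(64, 64)
b = ndimage.gaussian_filter(a, 3)
frame0 = np.where(np.arange(64) < 32, a, b)    # sharp in the left columns
frame1 = np.where(np.arange(64) < 32, b, a)    # sharp in the right columns
merged = zstack_merge(np.stack([frame0, frame1]))
```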
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enable this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with cameras of greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that the automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
A design of a high speed dual spectrometer by single line scan camera
NASA Astrophysics Data System (ADS)
Palawong, Kunakorn; Meemon, Panomsak
2018-03-01
A spectrometer that can capture two orthogonal polarization components of a light beam is in demand for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high speed spectrometer for simultaneous capturing of two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blaze grating. The alignment of the two beam paths was designed to be symmetrically incident on the blaze side and reverse blaze side of the reflection grating, respectively. The two diffracted beams were passed through the same focusing lens and focused on the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarization were imaged on 1000 pixels per spectrum. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer will be presented and discussed. The two orthogonal polarization spectra can be simultaneously captured at a speed of 70,000 spectra per second. The high speed dual spectrometer can simultaneously detect two orthogonal polarizations, which is an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.
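As a rough illustration of how such a line-scan spectrometer maps pixel position to wavelength, the sketch below applies the first-order grating equation through a focusing lens. All parameters (groove density, incidence angle, focal length, pixel pitch, center wavelength) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical spectrometer parameters -- not from the paper.
GROOVE_SPACING = 1e3 / 1200   # grating period in um (1200 lines/mm)
INCIDENCE_DEG = 20.0          # incidence angle on the grating (degrees)
FOCAL_LENGTH = 100e3          # focusing-lens focal length in um
PIXEL_PITCH = 10.0            # line-scan pixel pitch in um
N_PIXELS = 1000               # pixels allotted to one spectrum

def pixel_to_wavelength(pixels, center_wavelength=0.84):
    """Map pixel indices to wavelength (um) via the grating equation
    m*lambda = d*(sin(theta_i) + sin(theta_d)), first order (m = 1)."""
    theta_i = np.radians(INCIDENCE_DEG)
    # The diffraction angle of the center wavelength defines the optical axis.
    theta_c = np.arcsin(center_wavelength / GROOVE_SPACING - np.sin(theta_i))
    # Each pixel subtends a small angle about that axis on the focal plane.
    x = (pixels - N_PIXELS / 2) * PIXEL_PITCH
    theta_d = theta_c + np.arctan2(x, FOCAL_LENGTH)
    return GROOVE_SPACING * (np.sin(theta_i) + np.sin(theta_d))

wavelengths = pixel_to_wavelength(np.arange(N_PIXELS))
print(wavelengths[0], wavelengths[-1])  # spectral span across the 1000 pixels
```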
A new adaptive light beam focusing principle for scanning light stimulation systems.
Bitzer, L A; Meseth, M; Benson, N; Schmechel, R
2013-02-01
In this article a novel principle to achieve optimal focusing conditions, i.e., the smallest possible beam diameter, for scanning light stimulation systems is presented. It is based on the following methodology: First, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted, and the distance between the light-focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following, the principle is implemented using an optical beam induced current system, but it can basically be applied to any other scanning light stimulation system. Measurements to demonstrate its operation are shown, using a polycrystalline silicon solar cell.
Focus determination for the James Webb Space Telescope Science Instruments: A Survey of Methods
NASA Technical Reports Server (NTRS)
Davila, Pamela S.; Bolcar, Matthew R.; Boss, B.; Dean, B.; Hagopian, J.; Howard, J.; Unger, B.; Wilson, M.
2006-01-01
The James Webb Space Telescope (JWST) is a segmented deployable telescope that will require on-orbit alignment using the Near Infrared Camera as a wavefront sensor. The telescope will be aligned by adjusting seven degrees of freedom on each of 18 primary mirror segments and five degrees of freedom on the secondary mirror to optimize the performance of the telescope and camera at a wavelength of 2 microns. With the completion of these adjustments, the telescope focus is set and the optical performance of each of the other science instruments should then be optimal without making further telescope focus adjustments for each individual instrument. This alignment approach requires confocality of the instruments after integration and alignment to the composite metering structure, which will be verified during instrument level testing at Goddard Space Flight Center with a telescope optical simulator. In this paper, we present the results from a study of several analytical approaches to determine the focus for each instrument. The goal of the study is to compare the accuracies obtained for each method, and to select the most feasible for use during optical testing.
Automatic Focus Adjustment of a Microscope
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2005-01-01
AUTOFOCUS is a computer program for use in a control system that automatically adjusts the position of an instrument arm that carries a microscope equipped with an electronic camera. In the original intended application of AUTOFOCUS, the imaging microscope would be carried by an exploratory robotic vehicle on a remote planet, but AUTOFOCUS could also be adapted to similar applications on Earth. Initially control software other than AUTOFOCUS brings the microscope to a position above a target to be imaged. Then the instrument arm is moved to lower the microscope toward the target: nominally, the target is approached from a starting distance of 3 cm in 10 steps of 3 mm each. After each step, the image in the camera is subjected to a wavelet transform, which is used to evaluate the texture in the image at multiple scales to determine whether and by how much the microscope is approaching focus. A focus measure is derived from the transform and used to guide the arm to bring the microscope to the focal height. When the analysis reveals that the microscope is in focus, image data are recorded and transmitted.
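A minimal sketch of the wavelet-based focus measure and descent loop described above, assuming PyWavelets is available; `grab_frame` and `move_arm_down` are hypothetical stand-ins for the real instrument-arm control interfaces, not part of AUTOFOCUS.

```python
import numpy as np
import pywt  # PyWavelets; assumed available

def wavelet_focus_measure(image, wavelet="db2", levels=3):
    """Multi-scale focus measure: energy of the wavelet detail coefficients.
    A sharply focused image has more high-frequency texture, so the
    detail-band energy rises as the microscope approaches focus."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # coeffs[0] is the approximation; the rest are (cH, cV, cD) tuples.
    return sum(float(np.sum(band ** 2)) for detail in coeffs[1:] for band in detail)

def autofocus(grab_frame, move_arm_down, steps=10, step_mm=3.0):
    """Descent loop per the abstract: approach from 3 cm in 10 steps of
    3 mm, scoring each frame and remembering where the measure peaked."""
    best = (-np.inf, 0)
    for i in range(steps):
        score = wavelet_focus_measure(grab_frame())
        best = max(best, (score, i))
        move_arm_down(step_mm)
    return best  # (peak focus score, step index at which it occurred)
```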
High-speed optical 3D sensing and its applications
NASA Astrophysics Data System (ADS)
Watanabe, Yoshihiro
2016-12-01
This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speed ranges from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Also, three example applications of this type of sensing technology are introduced, including surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
VizieR Online Data Catalog: Young stellar objects in NGC 6823 (Riaz+, 2012)
NASA Astrophysics Data System (ADS)
Riaz, B.; Martin, E. L.; Tata, R.; Monin, J.-L.; Phan-Bao, N.; Bouy, H.
2016-10-01
The optical V-, R- and I-band images were obtained using the Prime Focus camera [William Herschel Telescope (WHT)/Wide Field Camera (WFC) detector] mounted on the 4-m WHT in La Palma, Canary Islands, Spain. Observations were performed in 2005 May. The NIR J-, H- and Ks-band images were obtained using the Infrared Side Port Imager (ISPI) mounted on the Cerro Tololo Inter-American Observatory (CTIO) 4-m Blanco Telescope in Cerro Tololo, Chile. Observations were performed in 2007 March. (3 data files).
In-Home Exposure Therapy for Veterans with PTSD
2017-10-01
telehealth (HBT; Veterans stay at home and meet with the therapist using the computer and video cameras), and (3) PE delivered in home, in person (IHIP; the therapist comes to the Veterans' homes for treatment). We will be checking to see...when providing treatment in homes and through home based video technology. BODY: Our focus in the past year (30 Sept 2016 – 10 Oct 2017) has been to
System Architecture of the Dark Energy Survey Camera Readout Electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Theresa; Ballester, Otger
2010-05-27
The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment, and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.
The Atacama Cosmology Telescope: Instrument
NASA Astrophysics Data System (ADS)
Thornton, Robert J.; Atacama Cosmology Telescope Team
2010-01-01
The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.
NASA Astrophysics Data System (ADS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-02-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
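A minimal sketch of a TPS-based warp correction in the spirit of the algorithm described above, assuming the comb-pulse centroids and their ideal (linear-sweep) positions have already been extracted. It uses SciPy's thin-plate-spline RBF interpolator rather than the production NIF code.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp_correction(image, comb_xy, ideal_xy):
    """Undistort a streak-camera image given matched control points:
    comb_xy  -- (N, 2) centroids of the comb pulses as recorded,
    ideal_xy -- (N, 2) where those pulses should fall on a linear sweep.
    A thin-plate-spline RBF models the smooth nonlinear distortion."""
    # Learn the mapping from ideal (output) coordinates back to the
    # distorted (input) coordinates, as required for inverse warping.
    tps = RBFInterpolator(ideal_xy, comb_xy, kernel="thin_plate_spline")
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    src = tps(grid)  # (h*w, 2) source (x, y) positions in the raw image
    coords = np.stack([src[:, 1].reshape(h, w), src[:, 0].reshape(h, w)])
    return map_coordinates(image, coords, order=1, mode="nearest")
```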
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for estimating the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and built, and several topographic measurements were performed in order to obtain a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported).
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
Edgett, Kenneth S.; Caplinger, Michael A.; Maki, Justin N.; Ravine, Michael A.; Ghaemi, F. Tony; McNair, Sean; Herkenhoff, Kenneth E.; Duston, Brian M.; Wilson, Reg G.; Yingst, R. Aileen; Kennedy, Megan R.; Minitti, Michelle E.; Sengstacken, Aaron J.; Supulver, Kimberley D.; Lipkaman, Leslie J.; Krezoski, Gillian M.; McBride, Marie J.; Jones, Tessa L.; Nixon, Brian E.; Van Beek, Jason K.; Krysak, Daniel J.; Kirk, Randolph L.
2015-01-01
MAHLI (Mars Hand Lens Imager) is a 2-megapixel, Bayer pattern color CCD camera with a macro lens mounted on a rotatable turret at the end of the 2-meter-long robotic arm aboard the Mars Science Laboratory rover, Curiosity. The camera includes white and longwave ultraviolet LEDs to illuminate targets at night. Onboard data processing services include focus stack merging and data compression. Here we report on the results and status of MAHLI characterization and calibration, covering the pre-launch period from August 2008 through the early months of the extended surface mission through February 2015. Since landing in Gale crater in August 2012, MAHLI has been used for a wide range of science and engineering applications, including distinction among a variety of mafic, siliciclastic sedimentary rocks; investigation of grain-scale rock, regolith, and eolian sediment textures and structures; imaging of the landscape; inspection and monitoring of rover and science instrument hardware concerns; and supporting geologic sample selection, extraction, analysis, delivery, and documentation. The camera has a dust cover and focus mechanism actuated by a single stepper motor. The transparent cover was coated with a thin film of dust during landing, thus MAHLI is usually operated with the cover open. The camera focuses over a range from a working distance of 2.04 cm to infinity; the highest resolution images are at 13.9 µm per pixel; images acquired from 6.9 cm show features at the same scale as the Mars Exploration Rover Microscopic Imagers at 31 µm/pixel; and 100 µm/pixel is achieved at a working distance of ~26.5 cm. The very highest resolution images returned from Mars permit distinction of high contrast silt grains in the 30–40 µm size range. MAHLI has performed well; the images need no calibration in order to achieve most of the investigation's science and engineering goals. The positioning and repeatability of robotic arm placement of the MAHLI camera head have been excellent on Mars, often with the hardware arriving within millimeters of expectation. Stability while imaging is usually such that the images are sharply focused; some exceptions, thought to result from motion induced by wind, have occurred during longer exposure LED-illuminated night imaging. Image calibration includes relative radiometric correction by removal of dark current and application of a flat field. Dark current is negligible to minor for typical daytime exposure durations and temperatures at the Gale field site. A pre-launch flat field product is usually applied to the data but new products created from images acquired by MAHLI of the Martian sky are superior and can provide a relative radiometric accuracy of ~6%. The camera lens imparts negligible distortion to its images; camera models derived from pre-launch data, with CAHV and CAHVOR parameters captured in their archived labels, can be applied to the images for analysis. MAHLI data and derived products, including pre-launch images, are archived with the NASA Planetary Data System (PDS). This report includes supplementary calibration and characterization data that are not available in the PDS archive (see supplement file MAHLITechRept0001_Supplement.zip).
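The abstract quotes pixel scales at three working distances, which allows a quick rule-of-thumb estimate of scale at intermediate distances. The sketch below fits a straight line to those quoted points; this is a convenience approximation for orientation only, not MAHLI's actual camera model.

```python
import numpy as np

# Working distance (cm) vs. pixel scale (um/pixel) quoted in the abstract.
working_cm = np.array([2.04, 6.9, 26.5])
scale_um = np.array([13.9, 31.0, 100.0])

# A first-order polynomial passes close to all three calibration points.
a, b = np.polyfit(working_cm, scale_um, 1)

def approx_pixel_scale(distance_cm):
    """Rough pixel scale (um/pixel) at a given working distance (cm)."""
    return a * distance_cm + b

print(approx_pixel_scale(10.0))  # roughly 42 um/pixel at 10 cm
```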
Apparatus and method for laser beam diagnosis
Salmon, Jr., Joseph T.
1991-01-01
An apparatus and method are disclosed for accurate, real time monitoring of the wavefront curvature of a coherent laser beam. Knowing the curvature, it can be quickly determined whether the laser beam is collimated, or focusing (converging), or de-focusing (diverging). The apparatus includes a lateral interferometer for forming an interference pattern of the laser beam to be diagnosed. The interference pattern is imaged to a spatial light modulator (SLM), whose output is a coherent laser beam having an image of the interference pattern impressed on it. The SLM output is focused to obtain the far-field diffraction pattern. A video camera, such as a CCD, monitors the far-field diffraction pattern, and provides an electrical output indicative of the shape of the far-field pattern. Specifically, the far-field pattern comprises a central lobe and side lobes, whose relative positions are indicative of the radius of curvature of the beam. The video camera's electrical output may be provided to a computer which analyzes the data to determine the wavefront curvature of the laser beam.
Apparatus and method for laser beam diagnosis
Salmon, J.T. Jr.
1991-08-27
An apparatus and method are disclosed for accurate, real time monitoring of the wavefront curvature of a coherent laser beam. Knowing the curvature, it can be quickly determined whether the laser beam is collimated, or focusing (converging), or de-focusing (diverging). The apparatus includes a lateral interferometer for forming an interference pattern of the laser beam to be diagnosed. The interference pattern is imaged to a spatial light modulator (SLM), whose output is a coherent laser beam having an image of the interference pattern impressed on it. The SLM output is focused to obtain the far-field diffraction pattern. A video camera, such as a CCD, monitors the far-field diffraction pattern, and provides an electrical output indicative of the shape of the far-field pattern. Specifically, the far-field pattern comprises a central lobe and side lobes, whose relative positions are indicative of the radius of curvature of the beam. The video camera's electrical output may be provided to a computer which analyzes the data to determine the wavefront curvature of the laser beam. 11 figures.
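Numerically, the far-field pattern these patent records describe can be approximated by the Fourier transform of the recorded interference pattern (Fraunhofer approximation). The sketch below is a toy analogue of the lobe analysis, not the patented apparatus: it locates the central lobe and the strongest side lobe, whose offset is the quantity the computer would track.

```python
import numpy as np

def far_field_lobes(pattern):
    """Approximate the far-field diffraction pattern of a real-valued
    interference pattern via the 2-D FFT (the Fraunhofer far field is the
    Fourier transform of the aperture field), then report the central-lobe
    position and the strongest side-lobe position."""
    far = np.abs(np.fft.fftshift(np.fft.fft2(pattern))) ** 2
    cy, cx = np.unravel_index(np.argmax(far), far.shape)
    # Suppress a small window around the central lobe, then find the
    # strongest remaining (side) lobe; its offset tracks the fringe period.
    masked = far.copy()
    masked[cy - 5:cy + 6, cx - 5:cx + 6] = 0.0
    sy, sx = np.unravel_index(np.argmax(masked), masked.shape)
    return (cy, cx), (sy, sx)

# Toy input: a fringe pattern whose spatial frequency sets the side-lobe offset.
y, x = np.mgrid[0:256, 0:256]
fringes = 1.0 + np.cos(2 * np.pi * x / 16)
print(far_field_lobes(fringes))  # side lobe 16 bins from the central lobe
```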
Evaluate depth of field limits of fixed focus lens arrangements in thermal infrared
NASA Astrophysics Data System (ADS)
Schuster, Norbert
2016-05-01
More and more modern thermal imaging systems use uncooled detectors. High volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This reduces the usefulness of modern image treatment procedures such as wave front coding. On the other hand, uncooled detectors demand lenses with fast f-numbers, near f/1.0, which reduces the expected Depth of Field (DoF). What are the limits on resolution if the target changes distance to the camera system? The desire to implement lens arrangements without a focusing mechanism demands a deeper quantification of the DoF problem. A new approach avoids the classic "accepted image blur circle" and quantifies the expected DoF by the through-focus MTF of the lens. This function is defined for a certain spatial frequency that provides a straightforward relation to the pixel pitch of the imaging device. A certain minimum MTF level is necessary so that the complete thermal imaging system can realize its basic functions, such as recognition or detection of specified targets. Very often, this technical tradeoff is approved with a certain lens. But what is the impact of changing the lens for one with a different focal length? Narrow-field lenses, which give more details of targets at longer distances, tighten the DoF problem. A first orientation is given by the hyperfocal distance, which depends quadratically on the focal length and linearly on the through-focus MTF of the lens. The analysis of these relations shows the contradicting requirements between higher thermal and spatial resolution, faster f-number, and desired DoF. Furthermore, the hyperfocal distance defines the DoF borders, whose relation follows the first-order imaging formulas. A calculation methodology will be presented to transfer DoF results from an approved combination of lens and camera to another lens in combination with the initial camera. Necessary input for this prediction is the accepted DoF of the initial combination and the through-focus MTFs of both lenses. The accepted DoF of the initial combination defines an application- and camera-related MTF level, which must also be provided by the new lens. Examples are provided. The formula of the Diffraction-Limited Through-Focus MTF (DLTF) quantifies the physical limit and works without any ray trace. This relation respects the pixel pitch, the waveband, and the aperture-based f-number, but is independent of detector size. The DLTF has a steeper slope than the ray-traced through-focus MTF; its maximum is the diffraction limit. The DLTF predicts the DoF relations quite precisely. Differences to ray-trace results are discussed. Finally, calculations with modern detectors show that a statically chosen MTF level does not reflect the reality of the DoF problem. The MTF level to be respected depends on the application, pixel pitch, IR camera, and image treatment. A value of 0.250 at the detector Nyquist frequency seems to be a reasonable starting point for uncooled FPAs with 17 μm pixel pitch.
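For orientation, the classic first-order formulas behind the hyperfocal distance and DoF borders mentioned above can be sketched as follows. The accepted blur circle c stands in for the paper's through-focus-MTF acceptance level, and the f/1.0, 50 mm, 17 μm numbers are illustrative, not the paper's.

```python
def hyperfocal_mm(f_mm, f_number, blur_um):
    """Classic hyperfocal distance H = f^2 / (N * c) + f (all in mm),
    showing the quadratic dependence on focal length noted above.
    c is a stand-in for the paper's through-focus-MTF criterion."""
    c_mm = blur_um / 1000.0
    return f_mm ** 2 / (f_number * c_mm) + f_mm

def dof_borders_mm(f_mm, f_number, blur_um, subject_mm):
    """First-order near/far limits of acceptable focus for a subject
    at distance s: near = H*s / (H + (s - f)), far = H*s / (H - (s - f))."""
    H = hyperfocal_mm(f_mm, f_number, blur_um)
    s = subject_mm
    near = H * s / (H + (s - f_mm))
    far = H * s / (H - (s - f_mm)) if s < H else float("inf")
    return near, far

print(hyperfocal_mm(50, 1.0, 17))           # ~1.47e5 mm, i.e. about 147 m
print(dof_borders_mm(50, 1.0, 17, 50_000))  # ~ (37 m, 76 m), in mm
```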
Intraocular camera for retinal prostheses: Refractive and diffractive lens systems
NASA Astrophysics Data System (ADS)
Hauer, Michelle Christine
The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
NASA Technical Reports Server (NTRS)
Holland, S. Douglas (Inventor)
1992-01-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
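A minimal sketch of the kind of network-data generation the methodology above automates: scenes are linked when their estimated capture positions fall within a threshold. The scene names, coordinates, and threshold are hypothetical, and the positions are assumed to come from an image-based (GNSS-free) estimation step.

```python
import itertools
import numpy as np

def build_scene_network(positions, max_link_m=5.0):
    """Link panoramic VR scenes whose capture positions are mutually close,
    yielding the adjacency data an image-based VR viewer needs.
    positions: dict scene_id -> (x, y, z) in a local indoor frame."""
    edges = []
    for a, b in itertools.combinations(positions, 2):
        dist = float(np.linalg.norm(np.subtract(positions[a], positions[b])))
        if dist <= max_link_m:
            edges.append((a, b, dist))
    return edges

scenes = {"corridor_1": (0, 0, 0), "corridor_2": (3, 0, 0), "room": (3, 7, 0)}
print(build_scene_network(scenes))  # only corridor_1 <-> corridor_2 is linked
```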
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
Light field geometry of a Standard Plenoptic Camera.
Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome
2014-11-03
The Standard Plenoptic Camera (SPC) is an innovation in photography, allowing two-dimensional images focused at different depths to be acquired from a single exposure. Contrary to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of virtual lenses which correspond to an equivalent camera array are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented using Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
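The paraxial ray-tracing model with linear equations described above can be sketched with standard 2x2 ray-transfer (ABCD) matrices. The focal lengths and spacings below are placeholders for illustration, not the SPC design values from the paper.

```python
import numpy as np

def thin_lens(f_mm):
    """Paraxial thin-lens ray-transfer matrix."""
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

def free_space(d_mm):
    """Paraxial propagation over a distance d."""
    return np.array([[1.0, d_mm], [0.0, 1.0]])

# Illustrative SPC-like train: main lens, gap to the MLA, one microlens,
# then the short gap to the sensor.  Numbers are placeholders only.
system = (free_space(0.5)      # MLA-to-sensor spacing
          @ thin_lens(0.5)     # microlens focal length
          @ free_space(50.0)   # main-lens-to-MLA distance
          @ thin_lens(50.0))   # main lens focal length

# Trace a ray leaving an object point 3.5 m away at a small angle;
# the ray state is [height (mm), angle (rad)], rightmost matrix applied first.
ray_in = np.array([0.0, 0.01])
ray_at_sensor = system @ free_space(3500.0) @ ray_in
print(ray_at_sensor)
```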
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Improved CPAS Photogrammetric Capabilities for Engineering Development Unit (EDU) Testing
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Bretz, David R.
2013-01-01
This paper focuses on two key improvements to the photogrammetric analysis capabilities of the Capsule Parachute Assembly System (CPAS) for the Orion vehicle. The Engineering Development Unit (EDU) system deploys Drogue and Pilot parachutes via mortar, where an important metric is the muzzle velocity. This can be estimated using a high speed camera pointed along the mortar trajectory. The distance to the camera is computed from the apparent size of features of known dimension. This method was validated with a ground test and compares favorably with simulations. The second major photogrammetric product is measuring the geometry of the Main parachute cluster during steady-state descent using onboard cameras. This is challenging as the current test vehicles are suspended by a single-point attachment unlike earlier stable platforms suspended under a confluence fitting. The mathematical modeling of fly-out angles and projected areas has undergone significant revision. As the test program continues, several lessons were learned about optimizing the camera usage, installation, and settings to obtain the highest quality imagery possible.
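The mortar-velocity measurement rests on the pinhole relation between the apparent and true size of a feature of known dimension. A minimal sketch with illustrative (non-CPAS) numbers:

```python
def range_from_feature(focal_px, true_size_m, apparent_size_px):
    """Pinhole-camera range estimate: a feature of known physical size W
    imaged at w pixels by a camera of focal length f (in pixels) lies at
    Z = f * W / w along the optical axis."""
    return focal_px * true_size_m / apparent_size_px

# Illustrative numbers: a 0.5 m feature seen at 40 px by a camera with a
# 2000 px focal length is 25 m away.
print(range_from_feature(2000.0, 0.5, 40.0))  # 25.0
```

Differencing successive range estimates over the known inter-frame time of the high speed camera then yields the muzzle velocity.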
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need for active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. In addition, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
Using remote underwater video to estimate freshwater fish species richness.
Ebner, B C; Morgan, D L
2013-05-01
Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained from using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVSs and UBRUVSs provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto
2014-11-01
The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, so an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to determine the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K. K. C9721S) for fast image acquisition [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than the 4096×4096 pixels of the SS-CCD camera; however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image yields a sufficiently low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between different defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correction of the image position took one second, for a total correction time of seven seconds, an order of magnitude shorter than with the SS-CCD camera. When the SS-CCD camera was used for final image capture, recording one tilt image took 30 seconds, so a tilt series of 61 images can be obtained within 30 minutes. Accuracy and repeatability were good enough for practical use (Fig. 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. Fig. 1: Objective lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte; thickness: 2 μm; magnification: 4k; acc. voltage: 2 MV). The tilt angle range is ±60 degrees with a 2-degree step; two series acquired in the same area were almost identical, with a deviation smaller than the minimum manual step, so the auto-focus worked well. We also developed computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize 3D data semi-automatically [5,6]. If this auto-acquisition system is used with the IMOD reconstruction software [7] and HawkC, on-line UHVEM tomography will be possible. The system could help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
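A minimal sketch of the AFS best-focus estimate: sharpness values from five defocused images are fitted to a quasi-Gaussian and the peak position is taken as the in-focus setting. The functional form, starting values, and synthetic data here are illustrative, using SciPy's curve_fit rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def quasi_gaussian(defocus, amp, center, width, offset):
    """Bell-shaped model of sharpness vs. defocus; the fitted peak
    position 'center' is taken as the best-focus value."""
    return amp * np.exp(-((defocus - center) / width) ** 2) + offset

def best_focus(defocus_values, sharpness_values):
    """Fit the five sharpness samples of the AFS method and return the
    estimated in-focus defocus value."""
    p0 = (sharpness_values.max() - sharpness_values.min(),
          defocus_values[np.argmax(sharpness_values)],
          np.ptp(defocus_values) / 2.0,
          sharpness_values.min())
    popt, _ = curve_fit(quasi_gaussian, defocus_values, sharpness_values, p0=p0)
    return popt[1]

# Synthetic five-point series around a true focus at +1.2 (arbitrary units).
rng = np.random.default_rng(1)
d = np.linspace(-10, 10, 5)
s = quasi_gaussian(d, 1.0, 1.2, 6.0, 0.1) + rng.normal(0, 0.005, 5)
print(best_focus(d, s))  # ~1.2
```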
Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J
2012-04-01
This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.
Plenoptic background oriented schlieren imaging
NASA Astrophysics Data System (ADS)
Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.
2017-09-01
The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.
Upgrading the Arecibo Potassium Lidar Receiver for Meridional Wind Measurements
NASA Astrophysics Data System (ADS)
Piccone, A. N.; Lautenbach, J.
2017-12-01
Lidar can be used to measure a plethora of variables: temperature, density of metals, and wind. This REU project is focused on the setup of a semi-steerable telescope that will allow the measurement of meridional wind in the mesosphere (80-105 km) with Arecibo Observatory's potassium resonance lidar. This includes the basic design concept of a steering system that is able to turn the telescope to a maximum of 40°, alignment of the mirror with the telescope frame to find the correct focusing, and the triggering and programming of a CCD camera. The CCD camera's purpose is twofold: looking through the telescope and matching the stars in the field of view with a star map to accurately calibrate the steering system, and determining the laser beam properties and position. Using LabVIEW, the frames from the CCD camera can be analyzed to identify the most intense pixel in the image (and therefore the brightest point in the laser beam or stars) by plotting average pixel values per row and column and locating the peaks of these plots. The location of this pixel can then be plotted, determining the jitter in the laser and its position within the field of view of the telescope.
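The LabVIEW peak-finding step described above has a simple numerical analogue: average the frame along rows and along columns, then take the peak of each 1-D profile. A sketch with a synthetic beam spot:

```python
import numpy as np

def beam_position(frame):
    """Locate the brightest point as the abstract describes: average pixel
    values per row and per column, then take the peak of each profile as
    the (row, col) of the beam or star image."""
    row_profile = frame.mean(axis=1)  # one value per row
    col_profile = frame.mean(axis=0)  # one value per column
    return int(np.argmax(row_profile)), int(np.argmax(col_profile))

# Toy frame with a Gaussian bright spot centered at (120, 200).
yy, xx = np.mgrid[0:480, 0:640]
frame = np.exp(-(((yy - 120) ** 2 + (xx - 200) ** 2) / 50.0))
print(beam_position(frame))  # (120, 200)
```

Tracking this position frame by frame gives the laser jitter estimate mentioned in the abstract.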
Robust and efficient modulation transfer function measurement with CMOS color sensors
NASA Astrophysics Data System (ADS)
Farsani, Raziyeh A.; Sure, Thomas; Apel, Uwe
2017-06-01
This paper discusses the industry's increasing challenges in improving camera performance through control and testing of the alignment process. The major difficulties, such as special CFAs that have white/clear pixels instead of a Bayer pattern and non-homogeneous backlight illumination of the test targets, will be outlined, and strategies for handling them will be presented. The proposed algorithms are applied to synthetically generated edges, as well as to experimental images taken from ADAS cameras in standard illumination conditions, to validate the approach. In addition, to consider the influence of the chromatic aberration of the lens and the CFA's influence on the total system MTF, the on-axis focus behavior of the camera module will be presented for each pixel class separately. It will be shown that the repeatability of the system MTF measurement results is improved as a result of more accurate and robust edge-angle detection, elimination of systematic errors, an improved lateral shift of the pixels, and analytical modeling of the edge transition. Results also show the necessity of separate contrast measurements in the different pixel classes to ensure a precise focus position.
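For context, a bare-bones slanted-edge MTF estimate in the ISO 12233 spirit is sketched below; it could be run per pixel class after separating the CFA planes. It assumes a near-vertical edge crossing the ROI and omits the paper's refinements (robust edge-angle detection, systematic-error elimination, analytical edge modeling).

```python
import numpy as np

def slanted_edge_mtf(roi, oversample=4):
    """Bare-bones slanted-edge MTF estimate:
    1. find the edge position in each row from the gradient centroid,
    2. fit a line to those positions to get the edge angle,
    3. bin every pixel by its distance to the edge into an oversampled
       edge-spread function (ESF),
    4. differentiate to the line-spread function (LSF), window, and FFT."""
    rows, cols = roi.shape
    grad = np.abs(np.diff(roi, axis=1))
    xc = np.arange(cols - 1) + 0.5
    edge_x = (grad * xc).sum(axis=1) / grad.sum(axis=1)
    slope, intercept = np.polyfit(np.arange(rows), edge_x, 1)
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = xx - (slope * yy + intercept)        # signed distance to the edge
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=roi.ravel().astype(float))
    esf = sums[counts > 0] / counts[counts > 0]  # oversampled ESF
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf * np.hanning(lsf.size)))
    return mtf / mtf[0]  # normalized; bins span 0..oversample/2 cycles/pixel

# Synthetic slightly slanted, slightly blurred edge as a smoke test.
y, x = np.mgrid[0:100, 0:100]
edge = 1.0 / (1.0 + np.exp(-(x - 45 - 0.05 * y) / 1.5))
print(slanted_edge_mtf(edge)[:5])
```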
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
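The down-range error behavior follows from the triangulation geometry: range goes as Z = fB/d, so a disparity error sigma_d maps to a range error that grows with Z squared. A sketch using the 0.32 pixel disparity sigma quoted above and otherwise illustrative (non-JPL) numbers:

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Triangulated range for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def downrange_error(focal_px, baseline_m, range_m, disparity_sigma_px):
    """First-order propagation of disparity noise into range error:
    |dZ/dd| = f*B/d^2 = Z^2/(f*B), so sigma_Z = Z^2 * sigma_d / (f*B)."""
    return range_m ** 2 * disparity_sigma_px / (focal_px * baseline_m)

# Illustrative rover-like parameters: 1000 px focal length, 0.3 m baseline,
# and the 0.32 px disparity sigma reported in the analysis.
for z in (2.0, 5.0, 10.0):
    print(z, downrange_error(1000.0, 0.3, z, 0.32))  # ~4 mm, ~27 mm, ~107 mm
```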
Earth's Radiation Belts: The View from Juno's Cameras
NASA Astrophysics Data System (ADS)
Becker, H. N.; Joergensen, J. L.; Hansen, C. J.; Caplinger, M. A.; Ravine, M. A.; Gladstone, R.; Versteeg, M. H.; Mauk, B.; Paranicas, C.; Haggerty, D. K.; Thorne, R. M.; Connerney, J. E.; Kang, S. S.
2013-12-01
Juno's cameras, particle instruments, and ultraviolet imaging spectrograph have been heavily shielded for operation within Jupiter's high radiation environment. However, varying quantities of >1-MeV electrons and >10-MeV protons will be energetic enough to penetrate instrument shielding and be detected as transient background signatures by the instruments. The differing shielding profiles of Juno's instruments lead to differing spectral sensitivities to penetrating electrons and protons within these regimes. This presentation will discuss radiation data collected by Juno in the Earth's magnetosphere during Juno's October 9, 2013 Earth flyby (559 km altitude at closest approach). The focus will be data from Juno's Stellar Reference Unit, Advanced Stellar Compass star cameras, and JunoCam imager acquired during coordinated proton measurements within the inner zone and during the spacecraft's inbound and outbound passages through the outer zone (L ~3-5). The background radiation signatures from these cameras will be correlated with dark count background data collected at these geometries by Juno's Ultraviolet Spectrograph (UVS) and Jupiter Energetic Particle Detector Instrument (JEDI). Further comparison will be made to Van Allen Probe data to calibrate Juno's camera results and contribute an additional view of the Earth's radiation environment during this unique event.
Medium-sized aperture camera for Earth observation
NASA Astrophysics Data System (ADS)
Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin
2017-11-01
Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) for an Earth observation payload on a small satellite. Developed as a push-broom type high-resolution camera, the camera has one panchromatic and four multispectral channels. The panchromatic channel has 2.5m, and multispectral channels have 5m of ground sampling distances at a nominal altitude of 685km. The 300mm-aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. With a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT to be launched in 2005. RazakSAT is a 180kg satellite including MAC, designed to provide high-resolution imagery of 20km swath width on a near equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system on a near equatorial orbit. This paper gives an overview of the MAC and RazakSAT programmes, and presents the current development status of MAC, focusing on key optical aspects of the Qualification Model.
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and in organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing the much-needed flexibility at various teaching locations.
Use of wildlife webcams - Literature review and annotated bibliography
Ratz, Joan M.; Conk, Shannon J.
2010-01-01
The U.S. Fish and Wildlife Service National Conservation Training Center requested a literature review product that would serve as a resource to natural resource professionals interested in using webcams to connect people with nature. The literature review focused on the effects on the public of viewing wildlife through webcams and on information regarding installation and use of webcams. We searched the peer reviewed, published literature for three topics: wildlife cameras, virtual tourism, and technological nature. Very few publications directly addressed the effect of viewing wildlife webcams. The review of information on installation and use of cameras yielded information about many aspects of the use of remote photography, but not much specifically regarding webcams. Aspects of wildlife camera use covered in the literature review include: camera options, image retrieval, system maintenance and monitoring, time to assemble, power source, light source, camera mount, frequency of image recording, consequences for animals, and equipment security. Webcam technology is relatively new and more publication regarding the use of the technology is needed. Future research should specifically study the effect that viewing wildlife through webcams has on the viewers' conservation attitudes, behaviors, and sense of connectedness to nature.
Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera
Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel
2010-01-01
In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can meet real-time constraints by using extremely high-performance signal processing through parallelism and by accessing several memories simultaneously. Quantified results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. PMID:22163404
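As a rough illustration of the algorithm family named in this abstract (and not the CAFADIS FPGA/VHDL design itself), the following Python sketch runs a few synchronous min-sum belief propagation sweeps over a toy cost volume; the grid size, label count, message normalization and linear smoothness model are all assumptions chosen for demonstration.

import numpy as np

def dt_linear(m, smooth):
    # distance transform over the label axis for a linear smoothness cost
    L = m.shape[-1]
    out = m.copy()
    for l in range(1, L):
        out[..., l] = np.minimum(out[..., l], out[..., l - 1] + smooth)
    for l in range(L - 2, -1, -1):
        out[..., l] = np.minimum(out[..., l], out[..., l + 1] + smooth)
    return out

def bp_sweep(cost, msgs, smooth=1.0):
    # one synchronous min-sum sweep on a 4-connected grid; msgs[d] holds the
    # messages arriving from direction d (up, down, left, right)
    belief = cost + msgs.sum(axis=0)
    new_msgs = np.empty_like(msgs)
    for d, (dy, dx) in enumerate([(-1, 0), (1, 0), (0, -1), (0, 1)]):
        h = dt_linear(belief - msgs[d], smooth)  # exclude receiver's own message
        h -= h.min(axis=2, keepdims=True)        # normalize to avoid drift
        new_msgs[d] = np.roll(h, (dy, dx), axis=(0, 1))  # toy shortcut: wraps at borders
    return new_msgs

rng = np.random.default_rng(0)
cost = rng.random((32, 32, 8))                    # unary cost per pixel and depth label
msgs = np.zeros((4, 32, 32, 8))
for _ in range(5):
    msgs = bp_sweep(cost, msgs)
depth = (cost + msgs.sum(axis=0)).argmin(axis=2)  # winner-take-all depth map

The hardware version in the paper replaces this image-wide synchronous sweep with a parallel, pipelined datapath and on-chip BRAM instead of external memory; the message arithmetic is the part the sketch is meant to convey.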
Real Time Measures of Effectiveness
DOT National Transportation Integrated Search
2003-06-01
This report describes research that is focused on identifying and determining methods for automatically computing measures of effectiveness (MOEs) when supplied with real time information. The MOEs, along with detection devices such as cameras, roadw...
Study of magnetic perturbations on SEC vidicon tubes. [large space telescope
NASA Technical Reports Server (NTRS)
Long, D. C.; Zucchino, P.; Lowrance, J.
1973-01-01
A laboratory measurements program was conducted to determine the tolerances that must be imposed to achieve optimum performance from SEC-vidicon data sensors in the LST mission. These measurements, along with other data, were used to formulate recommendations regarding the necessary telemetry and remote control for the television data sensors when in orbit. The study encompassed the following tasks: (1) Conducted laboratory measurements of the perturbations which an external magnetic field produces on a magnetically focused SEC vidicon, and evaluated shielding approaches. (2) Experimentally evaluated the effects produced on overall performance by variations of the tube electrode potentials and the focus, deflection and alignment fields. (3) Recommended the extent of ground control of camera parameters and camera parameter telemetry required for optimizing the performance of the television system in orbit. The experimental data are summarized in a set of graphs.
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators, each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is to image small objects such as a mouse brain, with potential applications in molecular imaging.
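For a feel of the analytical sensitivity estimates mentioned above, the textbook knife-edge pinhole approximation g ≈ d²·cos³θ / (16·h²) can be evaluated across the field of view; the aperture diameter and distances below are illustrative values, not the SiliSPECT design parameters.

import numpy as np

def pinhole_sensitivity(d_eff_mm, h_mm, theta_rad=0.0):
    # geometric efficiency of one pinhole for a point source: effective
    # diameter d_eff, perpendicular distance h, off-axis angle theta
    return d_eff_mm ** 2 * np.cos(theta_rad) ** 3 / (16.0 * h_mm ** 2)

offsets = np.linspace(-15.0, 15.0, 7)         # lateral source offsets, mm
theta = np.arctan(offsets / 25.0)             # source-collimator distance 25 mm
print(pinhole_sensitivity(0.5, 25.0, theta))  # narrow profile, peaked on axis

The cos³θ roll-off is what makes the focused multi-pinhole profiles narrow across the field of view.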
Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2016-01-01
Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, combined with decreasing cost, is opening them up to three-dimensional (3D) motion analysis for the quantitative study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike in traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
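A minimal sketch of the rigid-bar accuracy check described above: given reconstructed 3D positions of the two bar markers over many frames, report the error on each reconstructed inter-marker distance. The marker coordinates below are synthetic stand-ins, not the study's data.

import numpy as np

def inter_marker_errors(p1, p2, known_dist):
    # p1, p2: (N, 3) reconstructed marker positions over N frames, in mm
    d = np.linalg.norm(p1 - p2, axis=1)
    return np.abs(d - known_dist)

rng = np.random.default_rng(1)
bar = np.array([250.0, 0.0, 0.0])               # a 250 mm bar (assumed length)
p1 = rng.uniform(0.0, 1000.0, (100, 3))         # marker 1 across the volume
p2 = p1 + bar + rng.normal(0.0, 1.0, (100, 3))  # marker 2 with ~1 mm noise
err = inter_marker_errors(p1, p2, 250.0)
print(f"mean {err.mean():.2f} mm, max {err.max():.2f} mm")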
Comparison Between RGB and RGB-D Cameras for Supporting Low-Cost GNSS Urban Navigation
NASA Astrophysics Data System (ADS)
Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.
2018-05-01
Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, which prevent correct reception of the satellite signal. Bridging GNSS outages, as well as reconstructing the vehicle attitude, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw data accessibility. The latter was selected for the high quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different, because RGB-D cameras acquire both RGB and depth data, allowing the scale problem to be solved, which is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with decimeter accuracy, 15% better than the one obtained when using the Canon EOS M camera.
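As a toy illustration of the fusion idea (a linear simplification, not the authors' filter: the state vector, the noise levels, and the use of vision-derived velocity as a pseudo-control input are all assumptions), one Kalman step fusing a GNSS position fix with visual motion might look like this:

import numpy as np

dt = 1.0
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # state: [position, velocity]
Q = 0.01 * np.eye(6)                           # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # GNSS observes position only
R = 4.0 * np.eye(3)                            # GNSS noise, ~2 m std (assumed)

def kf_step(x, P, z_gnss, v_visual):
    # predict, taking the vision-derived velocity as a pseudo-control input
    x = F @ x
    x[3:] = v_visual
    P = F @ P @ F.T + Q
    # update with the GNSS position fix
    y = z_gnss - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(6) - K @ H) @ P

x, P = np.zeros(6), np.eye(6)
x, P = kf_step(x, P, np.array([1.0, 0.5, 0.0]), np.array([1.0, 0.4, 0.0]))

With an RGB-only camera the visual velocity is known only up to scale, which is why the paper's RGB variant needs extra handling that this sketch omits.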
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.
2015-01-12
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
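The thin-plate-spline idea can be sketched as follows: fit a smooth 2D map from measured (distorted) comb positions to their ideal grid positions, then evaluate that map wherever a correction is needed. This is a hedged, generic illustration; SciPy's RBFInterpolator stands in for the production algorithm, and the grid and distortion below are synthetic.

import numpy as np
from scipy.interpolate import RBFInterpolator

gx, gy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
ideal = np.column_stack([gx.ravel(), gy.ravel()])             # ideal comb positions
measured = ideal + 0.02 * np.sin(3 * np.pi * ideal[:, ::-1])  # smooth distortion

# TPS map from distorted coordinates back to ideal coordinates
unwarp = RBFInterpolator(measured, ideal, kernel='thin_plate_spline')

corrected = unwarp(measured)            # ~ ideal, up to the fit residual
print(np.abs(corrected - ideal).max())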
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation factor of 200 on the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA-mode with Walsh-design CAOS pixel codes of up to 4096 bits length at a maximum 10 KHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright-light, spectrally diverse targets.
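To make the CDMA principle concrete, here is a minimal sketch (toy code length and pixel values, not the demonstrated 4096-bit system): each pixel's irradiance is modulated by its own ±1 Walsh sequence, a single point detector records the sum over time, and correlation recovers each pixel.

import numpy as np
from scipy.linalg import hadamard

N = 64                                 # Walsh/Hadamard code length (toy value)
codes = hadamard(N)                    # rows are mutually orthogonal +/-1 codes
pixels = np.array([3.0, 7.5, 1.2])     # unknown pixel irradiances (toy values)
assigned = codes[1:4]                  # one code per pixel; skip the all-ones row

detector = assigned.T @ pixels         # time-modulated sum at the point detector
recovered = assigned @ detector / N    # matched-filter (correlation) decode
print(recovered)                       # -> [3.   7.5  1.2]

Orthogonality of the Walsh rows is what lets one photodetector serve thousands of CAOS pixels without crosstalk.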
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to wide temperature swings and outgassing requirements in the space environment. The lenses should be designed with exceptional stray-light performance and minimum lens flare, given intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the stray-light performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet the space environmental requirements.
NASA Technical Reports Server (NTRS)
Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.
2011-01-01
We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.
A Simple Model of the Accommodating Lens of the Human Eye
ERIC Educational Resources Information Center
Oommen, Vinay; Kanthakumar, Praghalathan
2014-01-01
The human eye is often discussed as optically equivalent to a photographic camera. The iris is compared with the shutter, the pupil to the aperture, and the retina to the film, and both have lens systems to focus rays of light. Although many similarities exist, a major difference between the two systems is the mechanism involved in focusing an…
Large holographic 3D display for real-time computer-generated holography
NASA Astrophysics Data System (ADS)
Häussler, R.; Leister, N.; Stolle, H.
2017-06-01
SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to significant reduction of pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays were built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display and that modulates amplitude and phase of light. Further components are based on holographic optical elements for light collimation and focusing which are exposed in photopolymer films. Camera photographs show that only the depth region on which the focus of the camera lens is set is in focus while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display is a solution to overcome the accommodation-convergence conflict that is inherent for stereoscopic 3D displays. The main components, progress and results of the holographic display with 300 mm x 200 mm active area are described. Furthermore, photographs of holographic reconstructed 3D scenes are shown.
Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of the spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera device and the CIEXYZ color space. This study also provides a basis for further studies related to spectral color reproduction on digital devices. In this paper, methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location, one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method was used for the camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices such as displays and transmission.
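The polynomial fitting step can be sketched as a least-squares fit of a second-order polynomial from camera RGB to CIEXYZ; the training samples and the stand-in linear "true" mapping below are placeholders, not the paper's measured data.

import numpy as np

def poly_features(rgb):
    # second-order polynomial expansion of RGB triplets
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r * r, g * g, b * b])

rng = np.random.default_rng(2)
rgb = rng.random((50, 3))                    # camera responses (placeholder)
xyz = rgb @ np.array([[0.41, 0.21, 0.02],    # stand-in "true" mapping
                      [0.36, 0.72, 0.12],
                      [0.18, 0.07, 0.95]])

A = poly_features(rgb)
M, *_ = np.linalg.lstsq(A, xyz, rcond=None)  # (10, 3) coefficient matrix
print(np.abs(A @ M - xyz).max())             # near-zero training residual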
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
Compton camera study for high efficiency SPECT and benchmark with Anger system
NASA Astrophysics Data System (ADS)
Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.
2017-12-01
Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application for SPECT if compared to present commercial Anger systems, with particular focus on dose delivered to the patient, examination time, and spatial uncertainties.
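As a pointer to the reconstruction step named above, here is a compact sketch of the classic MLEM multiplicative update x <- x * A^T(y / Ax) / A^T(1) on a toy binned system; the system matrix and counts are random placeholders, and this is the generic algorithm family rather than the CLaRyS list-mode implementation.

import numpy as np

rng = np.random.default_rng(3)
A = rng.random((40, 16))                 # system matrix (detector bins x image bins)
x_true = rng.random(16)
y = rng.poisson(A @ x_true * 50) / 50.0  # noisy projection data

x = np.ones(16)                          # uniform initial image
sens = A.sum(axis=0)                     # sensitivity image A^T(1)
for _ in range(100):
    proj = A @ x
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

The list-mode variant used in the paper applies the same update event by event rather than on binned projections.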
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system, with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, there still existed some disparity between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
Final Optical Design of PANIC, a Wide-Field Infrared Camera for CAHA
NASA Astrophysics Data System (ADS)
Cárdenas, M. C.; Gómez, J. Rodríguez; Lenzen, R.; Sánchez-Blanco, E.
We present the final optical design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Ritchey-Chrétien focus of the Calar Alto 2.2 m telescope. This will be the first instrument built under the German-Spanish consortium that manages the Calar Alto observatory. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. The optical design produces a well defined internal pupil, available for reducing the thermal background by means of a cryogenic pupil stop. A mosaic of four 2k × 2k Hawaii-2RG detectors, made by Teledyne, will give a field of view of 31.9 arcmin × 31.9 arcmin.
Multi-channel automotive night vision system
NASA Astrophysics Data System (ADS)
Lu, Gang; Wang, Li-jun; Zhang, Yi
2013-09-01
A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light-intensity adjustment, thus ensuring image quality. The composition of the system is described in detail; on this basis, the beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, car parking assist and car alarm systems, day and night.
NASA Astrophysics Data System (ADS)
Lyuty, V. M.; Abdullayev, B. I.; Alekberov, I. A.; Gulmaliyev, N. I.; Mikayilov, Kh. M.; Rustamov, B. N.
2009-12-01
A short description of the optical and electrical scheme of the CCD photometer with the U-47 camera installed at the Cassegrain focus of the ZEISS-600 telescope of the ShAO NAS Azerbaijan is provided. A focal reducer with a reduction factor of 1.7 is applied. Equivalent focal distances of the telescope with the focal reducer are calculated. General calculations of the optimum distance from the focal plane and of the sizes of the photometer's optical filters are presented.
Study of Permanent Magnet Focusing for Astronomical Camera Tubes
NASA Technical Reports Server (NTRS)
Long, D. C.; Lowrance, J. L.
1975-01-01
A design is developed for a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and the structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume where the magnetic field uniformity is better than 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and ground-based applications.
Face Liveness Detection Using Defocus
Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun
2015-01-01
In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend against these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been developed recently. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through a feature-level fusion approach. For reliable performance verification, we developed two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
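One of the three features above, focus, can be illustrated with a standard sharpness measure. The sketch below uses Laplacian variance and a toy comparison rule; this simplification is an assumption for illustration, not the paper's feature-level fusion.

import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    # variance of the Laplacian: higher means sharper
    return laplace(img.astype(float)).var()

def defocus_cue(shot_near_focus, shot_far_focus):
    # a real 3D face changes sharpness between the two focus settings more
    # than a flat spoofing photo does (assumed simplification)
    return focus_measure(shot_near_focus) - focus_measure(shot_far_focus)

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
print(defocus_cue(sharp, blurred) > 0)  # True: sharpness drops with defocus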
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore a radial mapping/modeling cannot be used in this case.
iPhone 4s and iPhone 5s Imaging of the Eye.
Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L
2017-01-01
To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using a RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.
Listen; There's a Hell of a Good Universe Next Door; Let's Go
NASA Technical Reports Server (NTRS)
Rigby, Jane R.
2012-01-01
Scientific research is key to our nation's technological and economic development. One can attempt to focus research toward specific applications, but science has a way of surprising us. Think, for example, of the "charge-coupled device", which was originally invented for memory storage but became the modern digital camera that is used everywhere from camera phones to the Hubble Space Telescope. Using digital cameras, Hubble has taken pictures that reach back 12 billion light-years into the past, when the Universe was only 1-2 billion years old. Such results would never have been possible with the film cameras Hubble was originally supposed to use. Over the past two decades, Hubble and other telescopes have shown us much about the Universe -- many of these results are shocking. Our galaxy is swarming with planets; most of the mass in the Universe is invisible; and our Universe is accelerating ever faster and faster for unknown reasons. Thus, we live in a "hell of a good universe", to quote e.e. cummings, that we fundamentally don't understand. This means that you, as young scientists, have many worlds to discover.
Voyager spacecraft images of Jupiter and Saturn
NASA Technical Reports Server (NTRS)
Birnbaum, M. M.
1982-01-01
The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view 3.2 deg, and the focal ratio f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel on the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film with an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the areas being viewed.
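The quoted fields of view follow directly from the vidicon target size and the focal lengths via FOV = 2·arctan(d / 2f) with d = 11.14 mm; the quick check below reproduces the quoted figures to within rounding.

import math

def fov_deg(target_mm, focal_mm):
    # full angle subtended by a target of width target_mm at focal length focal_mm
    return math.degrees(2 * math.atan(target_mm / (2 * focal_mm)))

print(round(fov_deg(11.14, 1500), 2))  # narrow-angle: ~0.43 deg (quoted 0.42)
print(round(fov_deg(11.14, 200), 2))   # wide-angle:   ~3.19 deg (quoted 3.2)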
Motor vehicle injuries in Qatar: time trends in a rapidly developing Middle Eastern nation.
Mamtani, Ravinder; Al-Thani, Mohammed H; Al-Thani, Al-Anoud Mohammed; Sheikh, Javaid I; Lowenfels, Albert B
2012-04-01
Despite their wealth and modern road systems, traffic injury rates in Middle Eastern countries are generally higher than those in Western countries. The authors examined traffic injuries in Qatar during 2000-2010, a period of rapid population growth, focusing on the impact of speed control cameras installed in 2007 on overall injury rates and mortality. During the period 2000-2006, prior to camera installation, the mean (SD) vehicular injury death rate per 100,000 was 19.9±4.1. From 2007 to 2010, the mean (SD) vehicular death rates were significantly lower: 14.7±1.5 (p=0.028). Non-fatal severe injury rates also declined, but mild injury rates increased, perhaps because of increased traffic congestion and improved notification. It is possible that speed cameras decreased speeding enough to affect the death rate, without affecting overall injury rates. These data suggest that in a rapidly growing Middle Eastern country, photo enforcement (speed) cameras can be an important component of traffic control, but other measures will be required for maximum impact.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave
2009-01-01
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images, and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This current paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
Khokhlova, Vera A.; Shmeleva, Svetlana M.; Gavrilov, Leonid R.; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam
2013-01-01
Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm2 to at least 50 W/cm2. Significantly higher intensities could be measured simply by reducing the duty cycle. PMID:23927199
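The calibration idea in this abstract reduces to a single scale factor fitted between co-located temperature-rise and hydrophone intensity values, then applied pixel-wise to IR frames; all numbers below are synthetic placeholders, not the measured calibration data.

import numpy as np

rng = np.random.default_rng(6)
intensity = np.linspace(0.1, 50.0, 20)                  # W/cm^2, hydrophone scan
temp_rise = 0.12 * intensity + rng.normal(0, 0.05, 20)  # K, from the IR camera

k = (temp_rise @ intensity) / (intensity @ intensity)   # least-squares slope
ir_frame = np.abs(rng.normal(1.0, 0.5, (8, 8)))         # temperature-rise map, K
intensity_map = ir_frame / k                            # converted to W/cm^2
print(f"calibration factor: {k:.3f} K per (W/cm^2)")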
NASA Astrophysics Data System (ADS)
Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.
2014-06-01
This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
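The comparison step can be sketched as a nearest-neighbour cloud-to-cloud distance, which is essentially what the CloudCompare assessment computes; the two clouds below are synthetic placeholders for the TLS reference and a camera-derived reconstruction.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
tls = rng.random((5000, 3))                           # reference cloud (m)
photo = tls[:2000] + rng.normal(0, 0.002, (2000, 3))  # noisy reconstruction

d, _ = cKDTree(tls).query(photo)                      # per-point deviation
print(f"mean {d.mean() * 1000:.2f} mm, "
      f"95th pct {np.percentile(d, 95) * 1000:.2f} mm")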
Camera Installation on a Beech AT-11
1950-02-21
Researchers at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory conducted an extensive investigation into the composition of clouds and their effect on aircraft icing. The researcher in this photograph is installing cameras on a Beech AT-11 Kansan in order to photograph water droplets during flights through clouds. The twin-engine AT-11 was the primary training aircraft for World War II bomber crews. The NACA acquired this aircraft in January 1946, shortly after the end of the war. The NACA Lewis' icing research during the war focused on the resolution of icing problems for specific military aircraft. In 1947 the laboratory broadened its program and began systematically measuring and categorizing clouds and water droplets. The three main thrusts of the Lewis icing flight research were the development of better instrumentation, the accumulation of data on ice buildup during flight, and the measurement of droplet sizes in clouds. The NACA researchers developed several types of measurement devices for the icing flights, including modified cameras. The National Research Council of Canada experimented with high-speed cameras with a large magnification lens to photograph the droplets suspended in the air. In 1951 NACA Lewis developed and flight tested their own camera with a magnification of 32. The camera, mounted to an external strut, could be used every five seconds as the aircraft reached speeds up to 150 miles per hour. The initial flight tests through cumulus clouds demonstrated that droplet size distribution could be studied.
Murine fundus fluorescein angiography: An alternative approach using a handheld camera.
Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra
2016-07-01
In today's modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the Kowa Genesis Df Camera to perform Fundus Fluorescein Angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters to allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady Indirect Lens Holder that firmly attaches to the main unit and securely holds a 90-diopter lens in position, to facilitate appropriate focus and stability for photographing the delicate central murine fundus. This easily portable fundus fluorescein camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can randomly occur while the mouse is optimally anesthetized. This movable image recording device, with efficiencies of space, time, cost, energy and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's Membrane, in the experimental murine model of exudative macular degeneration.
Characterization and performance of PAUCam filters
NASA Astrophysics Data System (ADS)
Casas, R.; Cardiel-Sas, L.; Castander, F. J.; Díaz, C.; Gaweda, J.; Jiménez Rojas, J.; Jiménez, S.; Lamensans, M.; Padilla, C.; Rodriguez, F. J.; Sanchez, E.; Sevilla Noarbe, I.
2016-08-01
PAUCam is a large field-of-view camera designed to exploit the field delivered by the prime focus corrector of the William Herschel Telescope, at the Observatorio del Roque de los Muchachos. One of the new features of this camera is its filter system, placed within a few millimeters of the focal plane, using eleven trays containing 40 narrow-band and 6 broad-band filters, working in vacuum at an operational temperature of 250 K and in a focused beam. In this contribution, we describe the performance of these filters in characterization tests at the laboratory.
Full color natural light holographic camera.
Kim, Myung K
2013-04-22
Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.
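Numerical focusing of a recorded hologram at an arbitrary distance, as mentioned above, is commonly done by angular-spectrum propagation. The sketch below is a generic, hedged illustration; the grid size, pixel pitch, and wavelength are placeholder values, and this is not necessarily the author's exact reconstruction code.

import numpy as np

def refocus(field, wavelength, pitch, z):
    # propagate a complex field by distance z via the angular spectrum method
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent part dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

hologram = np.ones((256, 256), dtype=complex)          # placeholder field
image = np.abs(refocus(hologram, 0.5e-6, 5e-6, 0.01))  # refocus at z = 10 mm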
Development of Digital SLR Camera: PENTAX K-7
NASA Astrophysics Data System (ADS)
Kawauchi, Hiraku
The DSLR "PENTAX K-7" comes in an easy-to-carry, minimal yet functional small form factor, a long-inherited identity of the PENTAX brand. Nevertheless, for its compact body, this camera has enhanced, up-to-date fundamental features such as a high-quality viewfinder, an enhanced shutter mechanism, extended continuous shooting capabilities, reliable exposure control, and fine-tuned AF systems, as well as a string of new technologies such as movie recording capability and an automatic leveling function. The main focus of this article is to reveal the ideas behind the concept of this product and its distinguishing features.
Combined approach to the Hubble Space Telescope wave-front distortion analysis
NASA Astrophysics Data System (ADS)
Roddier, Claude; Roddier, Francois
1993-06-01
Stellar images taken by the HST at various focus positions have been analyzed to estimate wave-front distortion. Rather than using a single algorithm, we found that better results were obtained by combining the advantages of various algorithms. For the planetary camera, the most accurate algorithms consistently gave a spherical aberration of -0.290 micron rms with a maximum deviation of 0.005 micron. Evidence was found that the spherical aberration is essentially produced by the primary mirror. The illumination in the telescope pupil plane was reconstructed, and evidence was found for a slight camera misalignment.
Experiment on Uav Photogrammetry and Terrestrial Laser Scanning for Ict-Integrated Construction
NASA Astrophysics Data System (ADS)
Takahashi, N.; Wakutsu, R.; Kato, T.; Wakaizumi, T.; Ooishi, T.; Matsuoka, R.
2017-08-01
In the 2016 fiscal year, the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program, named "i-Construction" and focusing on productivity improvement, adopts new technologies such as UAV photogrammetry and TLS. We report a field experiment to investigate whether the procedures of UAV photogrammetry and TLS following the standards for "i-Construction" are feasible. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork was completed on it. We used two UAV-camera combinations in the experiment. One is a larger UAV, the enRoute Zion QC730, with its onboard camera, a Sony α6000. The other is a smaller UAV, the DJI Phantom 4, with its dedicated onboard camera. Moreover, we used a terrestrial laser scanner, the FARO Focus3D X330, based on the phase-shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an α6000 and TLS using a Focus3D X330 following the standards for "i-Construction" would be feasible. Furthermore, the results show that UAV photogrammetry using the lower-priced UAV Phantom 4 was unable to satisfy the accuracy requirement for "i-Construction." The cause of the low accuracy of the Phantom 4 is under investigation. We also found that differences in image resolution on the ground would not have a great influence on the measurement accuracy of UAV photogrammetry.
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
NASA Astrophysics Data System (ADS)
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; Ballester, Otger; Baltay, Charles; Besuner, Robert; Buckley-Geer, Elizabeth; Butler, Karen; Cardiel, Laia; Dey, Arjun; Duan, Yutong; Elliott, Ann; Emmet, William; Gershkovich, Irena; Honscheid, Klaus; Illa, Jose M.; Jimenez, Jorge; Joyce, Richard; Karcher, Armin; Kent, Stephen; Lambert, Andrew; Lampton, Michael; Levi, Michael; Manser, Christopher; Marshall, Robert; Martini, Paul; Paat, Anthony; Probst, Ronald; Rabinowitz, David; Reil, Kevin; Robertson, Amy; Rockosi, Connie; Schlegel, David; Schubnell, Michael; Serrano, Santiago; Silber, Joseph; Soto, Christian; Sprayberry, David; Summers, David; Tarlé, Greg; Weaver, Benjamin A.
2018-02-01
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
ERIC Educational Resources Information Center
Walker, Jearl
1983-01-01
Discusses the construction of lenses made out of ice, including the arrangement for mounting an ice lens on a camera. Also discusses brewing coffee in an ibrik (long-handled container tapering slightly toward the top), focusing on the physics of the brewing. (JN)
Han, Woong Kyu; Tan, Yung K; Olweny, Ephrem O; Yin, Gang; Liu, Zhuo-Wei; Faddegon, Stephen; Scott, Daniel J; Cadeddu, Jeffrey A
2013-04-01
To compare surgeon-assessed ergonomic and workload demands of magnetic anchoring and guidance system (MAGS) laparoendoscopic single-site surgery (LESS) nephrectomy with conventional LESS nephrectomy in a porcine model. Participants included two expert and five novice surgeons who each performed bilateral LESS nephrectomy in two nonsurvival animals using either the MAGS camera or a conventional laparoscope. Task difficulty and workload demands of the surgeon and camera driver were assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Surgeons were also asked to score six parameters on a Likert scale (range 1=low/easy to 5=high/hard): procedure-associated workload, ergonomics, technical challenge, visualization, accidental events, and instrument handling. Each step of the nephrectomy was also timed, and instrument clashing was quantified. Scores for each parameter on the Likert scale were significantly lower for MAGS-LESS nephrectomy. Mean numbers of internal and external clashes were significantly lower for the MAGS camera (p<0.001). Mean task times for each procedure were shorter for experts than for novices, but the difference was not statistically significant. NASA-TLX workload ratings by the surgeon and camera driver showed that MAGS resulted in a significantly lower workload than the conventional laparoscope during LESS nephrectomy (p<0.05). The use of the MAGS camera during LESS nephrectomy lowers the task workload for both the surgeon and the camera driver compared with conventional laparoscope use. Subjectively, it also appears to improve surgeons' impressions of ergonomics and technical challenge. Pending approval for clinical use, further evaluation in the clinical setting is warranted.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
Keall, M D; Fildes, B; Newstead, S
2017-02-01
Backover injuries to pedestrians are a significant road safety issue, but their prevalence is underestimated because the majority of such injuries fall outside the scope of official road injury recording systems, which focus only on public roads. Based on experimental evidence, reversing cameras have been found to be effective in reducing the rate of collisions when reversing; the evidence for the effectiveness of reverse parking sensors has been mixed. The wide availability of these technologies in recent model vehicles provides impetus for real-world evaluations using crash data. A logistic model was fitted to data on 3172 pedestrian injuries from crashes on public roads in New Zealand and four Australian states to estimate the odds of backover injury (compared with other sorts of pedestrian injury crashes) for the different technology combinations fitted as standard equipment (both reversing cameras and sensors; just reversing cameras; just sensors; neither cameras nor sensors), controlling for vehicle type, jurisdiction, speed limit area, and year of manufacture restricted to the range 2007-2013. Compared with vehicles without any of these technologies, reduced odds of backover injury were estimated for all three technology configurations: 0.59 (95% CI 0.39-0.88) for reversing cameras by themselves; 0.70 (95% CI 0.49-1.01) for both reversing cameras and sensors; 0.69 (95% CI 0.47-1.03) for reverse parking sensors by themselves. These findings are important as they are, to our knowledge, the first assessment of the real-world safety effectiveness of these technologies. Copyright © 2016 Elsevier Ltd. All rights reserved.
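As a rough illustration of the study's statistical approach (not its actual data set or variable names), a logistic model of this kind could be fitted as follows; the file name and column names below are hypothetical assumptions.

```python
# Hedged sketch of a backover-odds logistic model with covariate control.
# 'pedestrian_injuries.csv' and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pedestrian_injuries.csv")  # one row per pedestrian injury

# tech: 'none', 'camera', 'sensors', or 'camera+sensors' (standard fitment);
# backover: 1 if the injury was a backover, 0 for other pedestrian injuries.
model = smf.logit(
    "backover ~ C(tech, Treatment('none')) + C(vehicle_type)"
    " + C(jurisdiction) + C(speed_zone) + year_of_manufacture",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, the form reported in the abstract.
odds = np.exp(model.params).rename("OR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds, ci], axis=1))
```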
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karakaya, Mahmut; Qi, Hairong
This paper addresses communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a lightweight and energy-efficient solution in which people are localized by distributed camera nodes integrating the so-called certainty map generated at each node, which records target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration in which each sensor node transmits only a very limited amount of data and only a limited number of camera nodes is involved. We then perform a comprehensive analytical study to evaluate the communication and energy efficiency of different integration schemes, i.e., centralized and distributed integration. Based on results from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.
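A minimal sketch of the distributed integration idea described above, assuming each node reduces its view to a boolean grid of cells it can certify as target-free; the grid size, node interface, and fusion rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch: fuse per-camera "target non-existence" maps along an itinerary,
# so only the small integrated map (not raw imagery) travels between nodes.
import numpy as np

GRID = (64, 64)  # discretized ground plane (assumed size)

def local_certainty_map(node_observation: np.ndarray) -> np.ndarray:
    """Cells one camera can certify contain no target (True = certified empty)."""
    return node_observation.astype(bool)

def integrate_along_itinerary(itinerary) -> np.ndarray:
    """Each node in turn ORs its map into the token and forwards it."""
    fused = np.zeros(GRID, dtype=bool)
    for node_observation in itinerary:
        fused |= local_certainty_map(node_observation)
    # Cells never certified empty remain candidate target locations.
    return ~fused
```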
Characterization of lens based photoacoustic imaging system.
Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund
2017-12-01
Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio, and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.
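The paper's closed-form PSF is not reproduced here, but the general linear-systems picture it builds on can be sketched: if the PA camera is treated as (locally) linear and shift-invariant, a C-scan image is the absorber map convolved with the depth-dependent PSF, and a point-target profile yields a simple PSF width estimate. The functions below are a generic sketch under that assumption, not the paper's model.

```python
# Generic LSI imaging-system sketch: forward blur and a crude FWHM estimate.
import numpy as np
from scipy.signal import fftconvolve

def image_from_object(obj_plane: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Forward model: blur an absorber map with the depth-dependent PSF."""
    return fftconvolve(obj_plane, psf, mode="same")

def fwhm(profile: np.ndarray, dx: float) -> float:
    """Full width at half maximum of a point-target line profile,
    one simple way to characterize the PSF across the imaged volume."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx
```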
A highly sensitive underwater video system for use in turbid aquaculture ponds.
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C
2016-08-24
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
A highly sensitive underwater video system for use in turbid aquaculture ponds
NASA Astrophysics Data System (ADS)
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.
2016-08-01
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
Stratified charge rotary engine - Internal flow studies at the MSU engine research laboratory
NASA Technical Reports Server (NTRS)
Hamady, F.; Kosterman, J.; Chouinard, E.; Somerton, C.; Schock, H.; Chun, K.; Hicks, Y.
1989-01-01
High-speed visualization and laser Doppler velocimetry (LDV) systems, consisting of a 40-watt copper vapor laser, mirrors, cylindrical lenses, a high-speed camera, a synchronization timing system, and a particle generator, were developed for the study of the fuel spray-air mixing flow characteristics within the combustion chamber of a motored rotary engine. The laser beam is focused down to a sheet approximately 1 mm thick that passes through the combustion chamber and illuminates smoke particles entrained in the intake air. The light scattered off the particles is recorded by a high-speed rotating-prism camera. Movies are made showing the air flow within the combustion chamber. The results of a movie showing the development of a high-speed (100 Hz) high-pressure (68.94 MPa, 10,000 psi) fuel jet are also discussed. The visualization system is synchronized so that a pulse generated by the camera triggers the laser's thyratron.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.
Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi
2017-05-01
When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency-domain aliasing analyses, we carry out a spatial-domain analysis to reveal whether angular aliasing would occur and where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing and non-aliasing regions and removal of the angular aliasing. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.
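For context, the refocusing stage at which this work intervenes is commonly implemented as shift-and-add over the angular samples; a minimal version is sketched below (the paper's aliasing detection and removal are omitted). The 4D indexing convention is an assumption.

```python
# Textbook shift-and-add refocusing of a 4D light field L[s, t, u, v]
# (angular indices s, t; spatial indices u, v). With few angular samples,
# the rounded shifts are exactly where angular aliasing (ghosting) appears.
import numpy as np

def refocus(lf: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetic refocus: shift each view in proportion to its angular
    offset from the centre, then average. alpha sets the refocus depth
    (alpha = 1 keeps the captured focal plane)."""
    S, T, U, V = lf.shape
    sc, tc = (S - 1) / 2.0, (T - 1) / 2.0
    shift = 1.0 - 1.0 / alpha
    out = np.zeros((U, V), dtype=np.float64)
    for s in range(S):
        for t in range(T):
            du = int(round((s - sc) * shift))
            dv = int(round((t - tc) * shift))
            out += np.roll(lf[s, t], (du, dv), axis=(0, 1))
    return out / (S * T)
```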
A highly sensitive underwater video system for use in turbid aquaculture ponds
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.
2016-01-01
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health. PMID:27554201
The application of retinal fundus camera imaging in dementia: A systematic review.
McGrory, Sarah; Cameron, James R; Pellegrini, Enrico; Warren, Claire; Doubal, Fergus N; Deary, Ian J; Dhillon, Baljean; Wardlaw, Joanna M; Trucco, Emanuele; MacGillivray, Thomas J
2017-01-01
The ease of imaging the retinal vasculature, and the evolving evidence suggesting this microvascular bed might reflect the cerebral microvasculature, presents an opportunity to investigate cerebrovascular disease and the contribution of microvascular disease to dementia with fundus camera imaging. A systematic review and meta-analysis was carried out to assess the measurement of retinal properties in dementia using fundus imaging. Ten studies assessing retinal properties in dementia were included. Quantitative measurement revealed significant yet inconsistent pathologic changes in vessel caliber, tortuosity, and fractal dimension. Retinopathy was more prevalent in dementia. No association of age-related macular degeneration with dementia was reported. Inconsistent findings across studies provide tentative support for the application of fundus camera imaging as a means of identifying changes associated with dementia. The potential of fundus image analysis in differentiating between dementia subtypes should be investigated using larger well-characterized samples. Future work should focus on refining and standardizing methods and measurements.
Cryogenic solid Schmidt camera as a base for future wide-field IR systems
NASA Astrophysics Data System (ADS)
Yudin, Alexey N.
2011-11-01
This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 μm spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60- and 90-degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in 8-14 μm spectral range wide-field systems. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by hydrogen to improve bulk transmission.
NASA Astrophysics Data System (ADS)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-11-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning on the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper is exclusively focused on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters, and (b) design of strategies for controlling adverse illumination effects.
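To make the extrinsic/intrinsic coupling concrete, a back-of-envelope sketch: the on-ground size of a pixel follows from focal length and pixel pitch (intrinsic) together with mounting height and tilt (extrinsic). The numbers in the example are illustrative, not values from the paper.

```python
# Pinhole-geometry sketch of ground sample distance for a tilted camera.
import math

def ground_sample_distance(focal_mm: float, pixel_um: float,
                           height_m: float, tilt_deg: float) -> float:
    """Approximate cross-track ground size of one pixel at the image centre
    for a camera mounted height_m above ground, tilted tilt_deg below
    horizontal."""
    slant_range = height_m / math.sin(math.radians(tilt_deg))
    return (pixel_um * 1e-6) * slant_range / (focal_mm * 1e-3)

# e.g. a 10 mm lens, 6 um pixels, 2 m mast, 30 deg tilt -> ~2.4 mm per pixel
print(ground_sample_distance(10.0, 6.0, 2.0, 30.0))
```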
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
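A 2D analog of the hyperfan idea can be sketched directly: in an epipolar-plane image, a Lambertian scene within a bounded depth range concentrates its spectrum in a fan of orientations through DC, so a fan-shaped passband rejects much of the noise. The full 4D hyperfan additionally intersects this dual-fan with a hypercone; the angle bounds below are assumptions for illustration.

```python
# 2D fan filter on an epipolar-plane image I(x, u), the 2D analog of the
# paper's 4D hyperfan passband.
import numpy as np

def fan_filter(epi: np.ndarray, th_min: float, th_max: float) -> np.ndarray:
    """Keep only spectral components whose orientation lies in the fan
    [th_min, th_max] radians (folded to [0, pi) so Hermitian pairs match)."""
    X, U = epi.shape
    fx = np.fft.fftfreq(X)[:, None]
    fu = np.fft.fftfreq(U)[None, :]
    theta = np.mod(np.arctan2(fu, fx), np.pi)   # orientation of each frequency
    band = (theta >= th_min) & (theta <= th_max)
    band[0, 0] = True                           # always pass the DC component
    return np.real(np.fft.ifft2(np.fft.fft2(epi) * band))
```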
NASA Astrophysics Data System (ADS)
Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime
2017-07-01
An artificial impact experiment is scheduled for 2018-2019 in which an impactor will collide with asteroid 162137 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected subsequent ejecta curtain on a 100-m scale, on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of the impact phenomenon, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with a wide angle and high focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying functions, performance, and environmental tolerance. Results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and a time schedule of the impact experiment, and pre-programmed into the control unit before the launch. In this paper, we describe details of the system design concept, specifications, and the operating plan of the DCAM3-D system, focusing on the feasibility of scientific observations.
Real-time image restoration for iris recognition systems.
Kang, Byung Jun; Park, Kang Ryoung
2007-12-01
In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
An Acoustic Charge Transport Imager for High Definition Television
NASA Technical Reports Server (NTRS)
Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard
1999-01-01
This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element), and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and, at 60 frames per second, double the normal video frame rate.
NASA Astrophysics Data System (ADS)
Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.
2016-08-01
SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design, and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing the camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more in task-engaged scenes than in camera-engaged scenes; however, the differences in their responses to these scenes were not as pronounced as those observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
iPhone 4s and iPhone 5s Imaging of the Eye
Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.
2017-01-01
Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604
NASA Astrophysics Data System (ADS)
Smith, H. J.; Barnes, T. G., III; Tull, R. G.; Nather, R. E.; Angel, R.; Meinel, A.; Macfarlane, M.; Brault, J.; Neugebauer, G.; Gillett, F.; Richardson, E. H.
Contents: Introductions (H. J. Smith). History of the project (H. J. Smith). Project constraints (T. G. Barnes III).Project constraints (R. G. Tull). Telescope concept (R. E. Nather). Auxiliary instruments (R. E. Nather). Paul-Baker prime focus (R. Angel). Prime focus and Nasmyth cameras (A. Meinel). Nasmyth focal reducers (M. MacFarlane). Spectrometry (R. Angel, R. G. Tull, J. Brault). Infrared sites (G. Neugebauer). IR instrumentation (F. Gillett). Prime focus imaging (E. H. Richardson). Primary mirror figure control (R. G. Tull).
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondence. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. The complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
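The final linear triangulation step has a standard direct linear transform (DLT) form; a minimal version is sketched below, assuming projection matrices recovered as described and one matched point pair in pixel coordinates.

```python
# Minimal DLT triangulation: given 3x4 projection matrices P1, P2 and one
# matched image point pair, solve the homogeneous system by SVD.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Return the inhomogeneous 3D point minimizing the algebraic error."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]
```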
Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system
NASA Astrophysics Data System (ADS)
Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng
2009-02-01
This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow-H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess-board calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
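The calibration-by-interpolation idea can be sketched simply: intrinsics are calibrated at a few discrete zoom positions and interpolated elsewhere. The calibration values below are placeholders, not measurements from the system.

```python
# Sketch: interpolate pre-calibrated intrinsics at an arbitrary zoom setting.
import numpy as np

zoom_steps = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # normalized zoom
fx_cal = np.array([850., 1210., 1680., 2310., 3120.])  # focal length (px), placeholder
cx_cal = np.array([322., 324., 327., 331., 336.])      # principal point x (px), placeholder

def intrinsics_at(zoom: float):
    """Linearly interpolate calibrated parameters between zoom positions."""
    fx = np.interp(zoom, zoom_steps, fx_cal)
    cx = np.interp(zoom, zoom_steps, cx_cal)
    return fx, cx
```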
The purpose of the field demonstration program is to gather technically reliable cost and performance information on selected condition assessment technologies under defined field conditions. The selected technologies include zoom camera, focused electrode leak location (FELL), ...
Instruments for Imaging from Far to Near
NASA Technical Reports Server (NTRS)
Mungas, Greg; Boynton, John; Sepulveda, Cesar
2009-01-01
The acronym CHAMP (signifying camera, hand lens, and microscope) denotes any of several proposed optoelectronic instruments that would be capable of color imaging at working distances that could be varied continuously through a range from infinity down to several millimeters. As in any optical instrument, the magnification, depth of field, and spatial resolution would vary with the working distance. For example, in one CHAMP version, at a working distance of 2.5 m, the instrument would function as an electronic camera with a magnification of 1/100, whereas at a working distance of 7 mm, the instrument would function as a microscope/electronic camera with a magnification of 4.4. Moreover, as described below, when operating at or near the shortest-working-distance/highest-magnification combination, a CHAMP could be made to perform one or more spectral imaging functions. CHAMPs were originally intended to be used in robotic geological exploration of the Moon and Mars. The CHAMP concept also has potential for diverse terrestrial applications that could include remotely controlled or robotic geological exploration, prospecting, field microbiology, environmental surveying, and assembly-line inspection. A CHAMP (see figure) would include two lens cells: (1) a distal cell corresponding to the objective lens assembly of a conventional telescope or microscope and (2) a proximal cell that would contain the focusing camera lens assembly and the camera electronic image-detector chip, which would be of the active-pixel-sensor (APS) type. The distal lens cell would face outward from a housing, while the proximal lens cell would lie in a clean environment inside the housing. The proximal lens cell would contain a beam splitter that would enable simultaneous use of the imaging optics (that is, proximal and distal lens assemblies) for imaging and illumination of the field of view. The APS chip would be mounted on a focal plane on a side face of the beam splitter, while light for illuminating the field of view would enter the imaging optics via the end face of the beam splitter. The proximal lens cell would be mounted on a sled that could be translated along the optical axis for focus adjustment. The position of the CHAMP would initially be chosen at the desired working distance of the distal lens from (corresponding to an approximate desired magnification of) an object to be examined. During subsequent operation, the working distance would ordinarily remain fixed at the chosen value and the position of the proximal lens cell within the instrument would be adjusted for focus as needed.
Scintillator-fiber charged particle track-imaging detector
NASA Technical Reports Server (NTRS)
Binns, W. R.; Israel, M. H.; Klarmann, J.
1983-01-01
A scintillator-fiber charged-particle track-imaging detector was developed using a bundle of square-cross-section plastic scintillator fiber optics, proximity-focused onto an image-intensified charge injection device (CID) camera. The tracks of charged particles penetrating into the scintillator fiber bundle are projected onto the CID camera and the imaging information is read out in video format. The detector was exposed to beams of 15 MeV protons and relativistic Neon, Manganese, and Gold nuclei, and images of their tracks were obtained. Details of the detector technique, properties of the tracks obtained, and preliminary range measurements of 15 MeV protons stopping in the fiber bundle are presented.
Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Jones, Brandon M.
2005-01-01
Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
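One plausible realization of the frame-to-frame registration described above, sketched with standard OpenCV calls (feature matching plus a RANSAC-fitted affine transform); this illustrates the general approach, not the project's actual algorithm.

```python
# Sketch: estimate an affine transform between consecutive descent frames.
import cv2
import numpy as np

def register_frames(prev_img: np.ndarray, next_img: np.ndarray) -> np.ndarray:
    """Estimate a 2x3 affine transform mapping prev_img coordinates into
    next_img coordinates, from matched ORB features."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_img, None)
    k2, d2 = orb.detectAndCompute(next_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)  # robust fit
    return M
```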
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
STS-61 art concept of astronauts during HST servicing
1993-11-12
S93-48826 (November 1993) --- This artist's rendition of the 1993 Hubble Space Telescope (HST) servicing mission shows astronauts installing the new Wide Field/Planetary Camera (WF/PC 2). The instrument is to replace the original camera and contains corrective optics that compensate for the telescope's flawed primary mirror. During the 11-plus day mission, astronauts are also scheduled to install the Corrective Optics Space Telescope Axial Replacement (COSTAR) -- an optics package that focuses and routes light to the other three instruments aboard the observatory -- as well as a new set of solar array panels and other hardware and components. The artwork was done for JPL by Paul Hudson.
Modification of a Kowa RC-2 fundus camera for self-photography without the use of mydriatics.
Philpott, D E; Bailey, P F; Harrison, G; Turnbill, C
1979-01-01
Research on retinal circulation during space flight required the development of a simple technique to provide self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications in clinical medicine.
Pulsed x-ray sources for characterization of gated framing cameras
NASA Astrophysics Data System (ADS)
Filip, Catalin V.; Koch, Jeffrey A.; Freeman, Richard R.; King, James A.
2017-08-01
Gated X-ray framing cameras are used to measure important characteristics of inertial confinement fusion (ICF) implosions, such as size and symmetry, with 50 ps time resolution in two dimensions. A pulsed source of hard (>8 keV) X-rays would be a valuable calibration device, for example for gain-droop measurements of the variation in sensitivity of the gated strips. We have explored the requirements for such a source and a variety of options that could meet these requirements. We find that a small-size dense plasma focus machine could be a practical single-shot X-ray source for this application if timing uncertainties can be overcome.
X-ray emission from high temperature plasmas
NASA Technical Reports Server (NTRS)
Harries, W. L.
1974-01-01
X-rays from a 25-kJ plasma focus apparatus were observed with pinhole cameras. The cameras consist of 0.4-mm-diameter pinholes in a 2-cm-thick lead housing enclosing an X-ray intensifying screen at the image plane. Pictures recorded through thin aluminum foils or plastic sheets for X-ray energies Eγ < 15 keV show distributed X-ray emission from the focused plasma and from the anode surface. However, when thick absorbers are used, radial filamentary structure in the X-ray emission from the anode surface is revealed. Occasionally larger structures are observed in addition to the filaments. Possible mechanisms for the filamentary structure are discussed.
2012-01-01
Applications of piezoelectric actuators include high-speed camera shutters, auto-focusing mechanisms, energy harvesting, and pico air vehicle design. Their advantages include nanometer positioning resolution and broadband frequency response.
NASA Astrophysics Data System (ADS)
Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.
2001-08-01
The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.
Family Of Calibrated Stereometric Cameras For Direct Intraoral Use
NASA Astrophysics Data System (ADS)
Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon
1983-07-01
In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. It is generally known that such a camera captures the 4D light field (angular and spatial information of light) on a limited 2D sensor, reducing 2D spatial resolution because of the unavoidable 2D angular data. That is why a refocus image has low spatial resolution compared with the 2D sensor. However, it has recently been shown that angular data contain sub-pixel spatial information, so that the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that this sub-pixel spatial information differs according to the depth of objects from the camera. So, from the selection of refocused regions (corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared with the existing method.
Camera for Quasars in the Early Universe (CQUEAN)
NASA Astrophysics Data System (ADS)
Kim, Eunbin; Park, W.; Lim, J.; Jeong, H.; Kim, J.; Oh, H.; Pak, S.; Im, M.; Kuehne, J.
2010-05-01
The early universe at z ≳ 7 is where the first stars, galaxies, and quasars formed, starting the re-ionization of the universe. The discovery and study of quasars in the early universe allow us to witness the beginning of the history of astronomical objects. In order to perform a medium-deep, medium-wide imaging survey of quasars, we are developing an optical CCD camera, CQUEAN (Camera for QUasars in EArly uNiverse), which uses a 1024×1024 pixel deep-depletion CCD. It has enhanced QE compared with conventional CCDs in the wavelength band around 1 μm, and thus will be an efficient tool for observing quasars at z > 7. It will be attached to the 2.1m telescope at McDonald Observatory, USA. A focal reducer is designed to secure a larger field of view at the Cassegrain focus of the 2.1m telescope. For long stable exposures, an auto-guiding system will be implemented using another CCD camera viewing an off-axis field. All these instruments will be controlled by software written in Python on a Linux platform. CQUEAN is expected to see first light during the summer of 2010.
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; ...
2018-01-15
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
The speckle polarimeter of the 2.5-m telescope: Design and calibration
NASA Astrophysics Data System (ADS)
Safonov, B. S.; Lysenko, P. A.; Dodin, A. V.
2017-05-01
The speckle polarimeter is a facility instrument of the 2.5-m SAI MSU telescope that combines the features of a speckle interferometer and a polarimeter. The speckle polarimeter is designed for observations in several visible bands in the following modes: speckle interferometry, polarimetry, speckle polarimetry, and polaroastrometry. In this paper we describe the instrument design and the procedures for determining the angular scale of the camera and the position angles of the camera and the polarimeter. Our measurements of the parameters for the binary star HD 9165 are used as an example to demonstrate the technique of speckle interferometry. For bright objects the accuracy of astrometry is limited by the error of the correction for the distortion caused by the atmospheric dispersion compensator. At zenith distances less than 45° the additional relative measurement error of the separation is 0.7%, while the additional error of the position angle is 0.3°. In the absence of a dispersion compensator the accuracy of astrometry is limited by the uncertainty in the scale and position angle of the camera, which are 0.15% and 0.06°, respectively. We have performed polarimetric measurements of unpolarized stars and polarization standards. The instrumental polarization at the Cassegrain focus in the V band does not exceed 0.01%. The instrumental polarization at the Nasmyth focus varies between 2 and 4% within the visible range; we have constructed a model of it and give a method for its elimination from the measurements. For stars with an intrinsic polarization of less than 0.2% observed at the Cassegrain focus, the error is determined mainly by photon and readout noise and can reach 5 × 10⁻⁵.
NASA Astrophysics Data System (ADS)
Brewer, I. D.; Werner, C. A.; Nadeau, P. A.
2010-12-01
UV camera systems are gaining popularity worldwide for quantifying SO2 column abundances and emission rates from volcanoes, which serve as primary measures of volcanic hazard and aid in eruption forecasting. To date, most investigations have focused on fairly active and routinely monitored volcanoes under optimal conditions. Some recent studies have begun to recommend protocols and procedures for data collection, but additional questions still need to be addressed. In this study we attempt to answer these questions and also present results from volcanoes that are rarely monitored. Conditions at these volcanoes are typically sub-optimal for UV camera measurements. Discussion of such data is essential in assessing the wider applicability of UV camera measurements for SO2 monitoring purposes. The data discussed herein consist of plume images from volcanoes with relatively low emission rates, under varying weather conditions and from various distances (2-12 km). These include Karangetang Volcano (Indonesia), Mount St. Helens (Washington, USA), and Augustine and Redoubt Volcanoes (Alaska, USA). High emission rate data were also collected at Kilauea Volcano (Hawaii, USA), and blue-sky test images with no plume were collected at Mammoth Mountain (California, USA). All data were collected between 2008 and 2010 using both single-filter (307 nm) and dual-filter (307 nm/326 nm) systems and were accompanied by FLYSPEC measurements. With the dual-filter systems, both a filter-wheel setup and a synchronous-imaging dual-camera setup were employed. Data collection and processing questions included (1) what is the detection limit of the camera, (2) how large is the variability in raw camera output, (3) how do camera optics affect the measurements and how can this be corrected, (4) how much variability is observed in calibration under various conditions, (5) what is the optimal workflow for image collection and processing, and (6) what is the range of camera operating conditions? Besides emission rates from these infrequently monitored volcanoes, the results of this study include a recommended workflow and procedure for image collection and calibration, and a MATLAB-based algorithm for batch processing, enabling accurate emission rates at 1 Hz when a synchronous-imaging dual-camera setup is used.
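As a concrete illustration of the final step of such processing, the sketch below computes an emission rate from one calibrated column-density frame by integrating a cross-plume transect and multiplying by the plume speed. This is a hedged example under stated assumptions, not the MATLAB algorithm from the study; the function names, array shapes, and values are hypothetical.

    # Illustrative sketch: SO2 emission rate from a single calibrated
    # UV-camera frame, assuming the image has already been converted to
    # column densities (molecules/cm^2).
    import numpy as np

    def emission_rate_kg_s(column_density, row, pixel_size_m, plume_speed_m_s):
        """Integrate SO2 columns along one cross-plume image row.

        column_density : 2-D array, molecules/cm^2 (hypothetical calibrated frame)
        row            : index of the transect perpendicular to transport
        pixel_size_m   : ground size of one pixel at the plume distance
        plume_speed_m_s: plume transport speed, e.g. from feature tracking
        """
        MOLAR_MASS_SO2 = 0.064   # kg/mol
        AVOGADRO = 6.022e23      # molecules/mol
        transect = column_density[row, :]             # molecules/cm^2
        columns_m2 = transect * 1e4                   # molecules/m^2
        integrated = columns_m2.sum() * pixel_size_m  # molecules per meter of plume
        return integrated * plume_speed_m_s * MOLAR_MASS_SO2 / AVOGADRO

    # Example: a synthetic frame with a uniform plume band.
    frame = np.zeros((480, 640))
    frame[200:240, :] = 5e17                          # molecules/cm^2
    print(emission_rate_kg_s(frame, 220, 2.0, 8.0))   # ~5 kg/s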
ERIC Educational Resources Information Center
DeVoogd, Glenn, Ed.
This document contains the following papers focusing on contexts and activities in which teachers can use technology to promote learning with young children: (1) "Read, Write and Click: Using Digital Camera Technology in a Language Arts and Literacy K-5 Classroom" (Judith F. Robbins and Jacqueline Bedell); (2) "Technology for the…
Performance of the dark energy camera liquid nitrogen cooling system
NASA Astrophysics Data System (ADS)
Cease, H.; Alvarez, M.; Alvarez, R.; Bonati, M.; Derylo, G.; Estrada, J.; Flaugher, B.; Flores, R.; Lathrop, A.; Munoz, F.; Schmidt, R.; Schmitt, R. L.; Schultz, K.; Kuhlmann, S.; Zhao, A.
2014-01-01
The Dark Energy Camera imager and its cooling system were installed on the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory in Chile in September 2012. The imager cooling system is an LN2 two-phase closed-loop cryogenic cooling system. The cryogenic circulation equipment is located off the telescope. Liquid nitrogen vacuum-jacketed transfer lines run up the outside of the telescope truss tubes to the imager inside the prime focus cage. The design of the cooling system, along with commissioning experiences and initial cooling system performance, is described. The LN2 cooling system with the DES imager was initially operated at Fermilab for testing, then shipped and tested in the Blanco Coudé room. The imager now operates inside the prime focus cage. It is shown that the cooling system sufficiently cools the imager in closed-loop mode and can operate for extended periods without maintenance or LN2 fills.
Development of a high spatial resolution neutron imaging system and performance evaluation
NASA Astrophysics Data System (ADS)
Cao, Lei
The combination of a scintillation screen and a charge-coupled device (CCD) camera is a digitized neutron imaging technology that has been widely employed for research and industrial applications. The spatial resolution of scintillation screens is limited to roughly 100 μm, which creates a bottleneck for further improvement of the overall system resolution. In this investigation, a neutron-sensitive micro-channel plate (MCP) detector with a pore pitch of 11.4 μm is combined with a cooled CCD camera with a pixel size of 6.8 μm to provide a high-spatial-resolution neutron imaging system. The optical path includes a high-reflection front-surface mirror to keep the camera out of the neutron beam and a macro lens to achieve the maximum attainable magnification. All components are assembled into an aluminum light-tight box with heavy radiation shielding to protect the camera and to provide a dark working environment. In addition, a remote-controlled stepper motor is integrated into the system to provide on-line focusing; best focus is ensured by an algorithm rather than perceptual observation. An evaluation routine not previously utilized in the field of neutron radiography is developed in this study; such routines were not previously required because of the lower resolution of earlier systems. Use of the angulation technique to obtain the presampled MTF addresses the problem of aliasing associated with digital sampling. The determined MTF agrees well with visual inspection of images of a test target. Other detector/camera combinations may be integrated into the system, and their performances are also compared. The best resolution achieved by the system at the TRIGA Mark II reactor at the University of Texas at Austin is 16.2 lp/mm, equivalent to a minimum resolvable spacing of 30 μm. The noise performance of the device is evaluated in terms of the noise power spectrum (NPS), and the detective quantum efficiency (DQE) is calculated from the above-determined MTF and NPS.
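For reference, the three detector metrics named above combine in the standard relation DQE(f) = MTF(f)^2 / (q · NNPS(f)), where q is the incident fluence (quanta per unit area) and NNPS is the normalized noise power spectrum. The following minimal sketch shows the computation with placeholder arrays; it is illustrative only, not the dissertation's own code or data.

    # Standard DQE chain with hypothetical MTF/NNPS models.
    import numpy as np

    freqs = np.linspace(0.1, 16.2, 50)     # spatial frequency, lp/mm
    mtf = np.exp(-freqs / 10.0)            # presampled MTF (placeholder model)
    nnps = np.full_like(freqs, 2.0e-6)     # NNPS, mm^2 (placeholder white noise)
    q = 1.0e6                              # incident fluence, quanta/mm^2 (assumed)

    dqe = mtf**2 / (q * nnps)              # dimensionless, < 1 for a real detector
    print(dqe[:5])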
MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halama, J.
2016-06-15
This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements.
In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2017-01-25
Action sport cameras (ASC) have gained wide acceptance for recreational purposes due to decreasing cost, increasing image resolution and frame rate, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution, and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, and characterizing and optimizing such a configuration requires assessing the instrumental errors of both volumes. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (deviation from the true distance between the two testing markers) was less than 3 mm, and the error related to the working volume diagonal was in the range of 1:2000 (3 × 1.3 × 1.5 m³) to 1:7000 (4.5 × 2.2 × 1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than in the underwater environment, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% in-air and 2.9% underwater). These results encourage the use of ASC technology for quantitative reconstruction in both in-air and underwater environments.
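The accuracy test described above reduces to triangulating two markers with a calibrated stereo pair and comparing the reconstructed separation with the known one. A minimal OpenCV sketch of that test is shown below; the projection matrices and pixel coordinates are synthetic placeholders (normalized coordinates, 0.2 m baseline), not the authors' calibration.

    # Triangulate two markers from a stereo pair and report the length error.
    import numpy as np
    import cv2

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at origin
    P2 = np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])   # camera 2, 0.2 m baseline

    pts1 = np.array([[0.10, 0.05], [0.15, 0.05]], dtype=float).T  # markers in cam 1
    pts2 = np.array([[0.00, 0.05], [0.05, 0.05]], dtype=float).T  # markers in cam 2

    X = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous points
    X = (X[:3] / X[3]).T                            # N x 3 Euclidean points

    reconstructed = np.linalg.norm(X[0] - X[1])
    true_distance = 0.1                             # known inter-marker distance, m
    print("error [mm]:", abs(reconstructed - true_distance) * 1e3)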
First Results of the Athena Microscopic Imager Investigation
NASA Technical Reports Server (NTRS)
Herkenhoff, K.; Squyres, S.; Archinal, B.; Arvidson, R.; Bass, D.; Barrett, J.; Becker, K.; Becker, T.; Bell, J., III; Burr, D.
2004-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on an extendable arm, the Instrument Deployment Device (IDD). The MI acquires images at a spatial resolution of 30 microns/pixel over a broad spectral range (400 - 700 nm). The MI uses the same electronics design as the other MER cameras but its optics yield a field of view of 31 x 31 mm across a 1024 x 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 69 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (approx. 2 mm precision) is achieved by moving the IDD away from a rock target after contact is sensed. The MI optics are protected from the Martian environment by a retractable dust cover. This cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500 - 700 nm, allowing crude color information to be obtained by acquiring images with the cover open and closed. The MI science objectives, instrument design and calibration, operation, and data processing were described by Herkenhoff et al. Initial results of the MI experiment on both MER rovers ('Spirit' and 'Opportunity') are described below.
Athena Microscopic Imager investigation
NASA Astrophysics Data System (ADS)
Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Arneson, H. M.; Bertelsen, P.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliott, S. T.; Goetz, W.; Hagerott, E. C.; Hayes, A. G.; Johnson, M. J.; Kirk, R. L.; McLennan, S.; Morris, R. V.; Scherr, L. M.; Schwochert, M. A.; Shiraishi, L. R.; Smith, G. H.; Soderblom, L. A.; Sohl-Dickstein, J. N.; Wadsworth, M. V.
2003-11-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (~2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.
Apparatus and method for generating partially coherent illumination for photolithography
Sweatt, William C.
2001-01-01
The present invention introduces a novel scatter plate into the optical path of source light used for illuminating a replicated object. The scatter plate is designed to interrupt a focused, incoming light beam by introducing between about 8 and 24 diffraction zones blazed onto the surface of the scatter plate, which intercept the light and redirect it to a like number of different positions in the condenser entrance pupil, each of which is determined by the relative orientation and the spatial frequency of the diffraction grating in each of the several zones. Light falling onto the scatter plate therefore generates a plurality of unphased sources of illumination as seen by the back half of the optical system. The system comprises a high-brightness source, such as a laser, creating light which is taken up by a beam-forming optic that focuses the incoming light into a condenser which, in turn, focuses light into a field lens, creating a Köhler illumination image of the source in a camera entrance pupil. The light passing through the field lens illuminates a mask which interrupts the source light as either a positive or negative image of the object to be replicated. Light passing by the mask is focused into the entrance pupil of the lithographic camera, creating an image of the mask on a receptive medium.
Winnowing the Field: Candidates, Caucuses, and Presidential Elections.
ERIC Educational Resources Information Center
Gore, Deborah, Ed.
1991-01-01
This issue features articles and activities that concern the history of the presidential election process in the United States, with a special focus on Iowa's role in that process. The following features are included: "Lights, Camera, Action!"; "Presidential Whoopla"; "From Tree Stumps to Living Rooms"; "Wild…
ERIC Educational Resources Information Center
Bradbury, Leslie; Gross, Lisa; Goodman, Jeff; Straits, William
2010-01-01
Digital photography energizes students and focuses their attention on their environment. The personal connection to science helps students develop a habit of mind in which everything they see inside or outside of school can prompt them to wonder and investigate. This article describes how first graders explore their school grounds with cameras in…
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introducing an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied to accurate photogrammetric surveys and that little effort is sufficient to greatly improve the accuracy potential of digital cameras.
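The figure of merit quoted repeatedly above, the maximum absolute length measurement error of VDI/VDE 2634 Part 1, is simply the largest deviation between photogrammetrically measured lengths and calibrated reference lengths. A minimal sketch with made-up numbers:

    # Maximum absolute length measurement error over calibrated reference bars.
    import numpy as np

    reference_mm = np.array([250.000, 500.000, 750.000, 1000.000])  # calibrated lengths
    measured_mm  = np.array([250.012, 499.981, 750.027, 1000.043])  # hypothetical results

    lme = np.abs(measured_mm - reference_mm)
    print("max length measurement error: %.0f um" % (lme.max() * 1000))  # -> 43 um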
Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries
NASA Astrophysics Data System (ADS)
Koehl, M.; Delacourt, T.; Boutry, C.
2016-06-01
This paper presents a project on recording and modelling tunnels, traffic circles, and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. These models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths, and lengths of bridges and tunnels, the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile, and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras was set up and used for tests in static and mobile acquisitions. Various configurations were thus tested using multiple synchronized cameras, and these configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses were estimated. Reference measurements were also taken with a 3D TLS (Faro Focus 3D) to allow accuracy assessment.
A wireless sensor network deployment for rural and forest fire detection and verification.
Lloret, Jaime; Garcia, Miguel; Bri, Diana; Sendra, Sandra
2009-01-01
Forest and rural fires are one of the main causes of environmental degradation in Mediterranean countries. Existing fire detection systems focus only on detection, not on verification of the fire; moreover, almost all of them are just simulations, very few implementations can be found, and the systems in the literature lack scalability. In this paper we show all the steps followed to design, research, and develop a wireless multisensor network which mixes sensors with IP cameras in a wireless network in order to detect and verify fire in rural and forest areas of Spain. We have studied how many cameras, sensors, and access points are needed to cover a rural or forest area, and the scalability of the system. We have developed a multisensor node that, when it detects a fire, sends an alarm through the wireless network to a central server. Based on a software application, the central server selects the wireless cameras closest to the multisensor, rotates them toward the sensor that raised the alarm, and sends them a message in order to receive real-time images from the zone. The cameras let firefighters corroborate the existence of a fire and avoid false alarms. In this paper, we show the performance of a test bench formed by four wireless IP cameras in several situations and the energy consumed when they are transmitting. Moreover, we study the energy consumed by each device when the system is set up. The wireless sensor network can be connected to the Internet through a gateway, and the images from the cameras can be seen from any part of the world.
NASA Astrophysics Data System (ADS)
Wolfe, C. A.; Lemmon, M. T.
2015-12-01
Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar-powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera equipped with a neutral density filter. Direct images of the Sun not only provide the ability to measure extinction by dust and ice in the atmosphere, but also provide a variety of constraints on the Martian dust and water cycles. These observations have been used to characterize dust storms, to provide ground-truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability for determining atmospheric opacity from sky images taken with cameras that were not designed for solar imaging, such as the engineering cameras onboard Opportunity and the Mars Hand Lens Imager (MAHLI) on Curiosity. Our investigation focuses primarily on the accuracy of a method that determines optical depth from scattering models applied to the ratio of sky radiance measurements at different elevation angles but the same scattering angle. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validation of orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on comparison with direct extinction measurements from archival Navcam, Hazcam, and MAHLI camera data.
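As a rough illustration of the elevation-ratio idea (the study itself uses full scattering models), a single-scattering, plane-parallel approximation gives the downwelling sky radiance toward view cosine mu with sun cosine mu0 as L proportional to mu0/(mu0 - mu) * (exp(-tau/mu0) - exp(-tau/mu)). The phase-function and solar terms cancel in a ratio taken at equal scattering angle, so tau can be recovered by root finding. The sketch below is a toy model under these simplifying assumptions, not the authors' retrieval.

    # Toy single-scattering retrieval of optical depth from a radiance ratio.
    import numpy as np
    from scipy.optimize import brentq

    def radiance(tau, mu, mu0):
        # Single-scattering, plane-parallel sky radiance (common factors dropped).
        return mu0 / (mu0 - mu) * (np.exp(-tau / mu0) - np.exp(-tau / mu))

    def retrieve_tau(ratio_obs, mu1, mu2, mu0):
        f = lambda tau: radiance(tau, mu1, mu0) / radiance(tau, mu2, mu0) - ratio_obs
        return brentq(f, 1e-3, 10.0)

    mu0 = 0.8                    # solar elevation cosine (hypothetical geometry)
    mu1, mu2 = 0.5, 0.3          # two view elevations at the same scattering angle
    true_tau = 0.9
    ratio = radiance(true_tau, mu1, mu0) / radiance(true_tau, mu2, mu0)
    print(retrieve_tau(ratio, mu1, mu2, mu0))   # recovers ~0.9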
Alonso, Robert S.; McClintock, Brett T.; Lyren, Lisa M.; Boydston, Erin E.; Crooks, Kevin R.
2015-01-01
Abundance estimation of carnivore populations is difficult and has prompted the use of non-invasive detection methods, such as remotely triggered cameras, to collect data. To analyze photo data, studies focusing on carnivores with unique pelage patterns have utilized a mark-recapture framework, while studies of carnivores without unique pelage patterns have used a mark-resight framework. We compared mark-resight and mark-recapture estimation methods for estimating bobcat (Lynx rufus) population sizes, which motivated the development of a new "hybrid" mark-resight model as an alternative to traditional methods. We deployed a sampling grid of 30 cameras throughout the urban southern California study area. Additionally, we physically captured and marked a subset of the bobcat population with GPS telemetry collars. Since we could identify individual bobcats in photos by their unique pelage patterns and a subset of the population was physically marked, we were able to use traditional mark-recapture and mark-resight methods, as well as the new "hybrid" mark-resight model we developed, to estimate bobcat abundance. We recorded 109 bobcat photos during 4,669 camera nights and physically marked 27 bobcats with GPS telemetry collars. Abundance estimates produced by the traditional mark-recapture, traditional mark-resight, and "hybrid" mark-resight methods were similar; however, precision differed depending on the models used. Traditional mark-recapture and mark-resight estimates were relatively imprecise, with percent confidence interval lengths exceeding 100% of point estimates. Hybrid mark-resight models produced better precision, with percent confidence intervals not exceeding 57%. The increased precision of the hybrid mark-resight method stems from utilizing the complete encounter histories of physically marked individuals (including those never detected by a camera trap) and the encounter histories of naturally marked individuals detected at camera traps. This new estimator may be particularly useful for estimating abundance of uniquely identifiable species that are difficult to sample using camera traps alone.
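For context, the classical Lincoln-Petersen logic underlying mark-resight estimation can be sketched in a few lines; the hybrid model developed in the study is considerably more sophisticated, and the counts below are hypothetical, not the paper's data.

    # Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    def chapman_estimate(n_marked, n_sighted, n_marked_sighted):
        """n_marked         : animals physically marked (e.g. GPS-collared)
           n_sighted        : animals detected in the resight survey (camera traps)
           n_marked_sighted : marked animals among those detected"""
        return (n_marked + 1) * (n_sighted + 1) / (n_marked_sighted + 1) - 1

    print(chapman_estimate(27, 40, 15))  # ~71 individuals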
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2015-12-01
The ability to detect and monitor precursory events, thermal signatures, and ongoing volcanic activity in near-realtime is an invaluable tool. Volcanic hazards range from low-level lava effusion to large explosive eruptions easily capable of ejecting ash to aircraft cruise altitudes. Ground-based remote sensing is essential for detecting and monitoring this activity, but the required equipment is often expensive and difficult to maintain, which increases the risk to public safety and the likelihood of financial impact. Our investigation explores the use of 'off the shelf' cameras, ranging from computer webcams to low-light security cameras, to monitor volcanic incandescent activity in near-realtime. These cameras are ideal as they operate in the visible and near-infrared (NIR) portions of the electromagnetic spectrum, are relatively cheap to purchase, consume little power, are easily replaced, and can provide telemetered, near-realtime data. We focus on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate each image according to pixel brightness, in order to automatically detect and identify increases in potentially hazardous activity. The cameras used here range in price from $0 to $1,000, and the script is written in Python, an open-source programming language, to reduce the overall cost to potential users and increase the accessibility of these tools, particularly in developing nations. In addition, laboratory tests to determine the spectral response of these cameras and a direct comparison of collocated low-light and thermal infrared cameras have allowed approximate eruption temperatures to be correlated with pixel brightness. Data collected from several volcanoes, (1) Stromboli, Italy; (2) Shiveluch, Russia; (3) Fuego, Guatemala; and (4) Popocatépetl, México, along with campaign data from Stromboli (June 2013) and laboratory tests, are presented here.
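A minimal sketch in the spirit of the Python scripts described above, flagging incandescence in a night-time webcam frame by pixel brightness. The file name and thresholds are hypothetical; a real deployment would fetch streaming imagery and tune thresholds per camera and site.

    # Flag frames whose bright-pixel fraction exceeds an alert threshold.
    import numpy as np
    from PIL import Image

    BRIGHT_LEVEL = 200        # 8-bit brightness threshold (assumed)
    ALERT_FRACTION = 0.001    # fraction of bright pixels that raises an alert

    def check_frame(path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
        fraction = (gray > BRIGHT_LEVEL).mean()
        return fraction > ALERT_FRACTION, fraction

    alert, fraction = check_frame("webcam_latest.jpg")  # placeholder file name
    print("alert:", alert, "bright fraction: %.5f" % fraction)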
SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, S; Rao, A; Wendt, R
Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
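The frame-to-frame motion step described in the Methods can be sketched with standard OpenCV epipolar-geometry calls, as below. This is an illustrative reconstruction under assumed inputs (consecutive grayscale frames and a calibrated intrinsic matrix K), not the authors' implementation.

    # Frame-to-frame camera motion from tracked surface points.
    import numpy as np
    import cv2

    def frame_to_frame_motion(frame0, frame1, K):
        # Detect corners in the first frame and track them into the second.
        p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=500, qualityLevel=0.01,
                                     minDistance=7)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None)
        good0 = p0[status.ravel() == 1]
        good1 = p1[status.ravel() == 1]

        # The essential matrix encodes the epipolar constraint between views.
        E, inliers = cv2.findEssentialMat(good0, good1, K,
                                          method=cv2.RANSAC, threshold=1.0)
        # Decompose into rotation and translation (direction only).
        _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
        return R, t   # translation is known only up to scale from images alone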
Cryogenic coefficient of thermal expansion measurements of type 440 and 630 stainless steel
NASA Astrophysics Data System (ADS)
Cease, H.; Alvarez, M.; Flaugher, B.; Montes, J.
2014-01-01
The Dark Energy Camera is now installed on the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory in Chile. The camera is cooled to 170K using a closed-loop two-phase liquid nitrogen system. A submerged centrifugal pump is used to circulate the liquid from the base of the telescope to the camera in the prime focus cage. As part of the pump maintenance schedule, the rotor shaft bearings are periodically replaced. Common bearing and shaft materials are type 440 and 630 (17-4 PH) stainless steel. The coefficient of thermal expansion of these materials is needed to predict the shaft and bearing housing dimensional changes at the 77K pump operating temperature. The thermal expansion of type 440 and 630 stainless steel from room temperature to 77K is presented. Measurements are performed according to the ASTM E228 standard with a quartz push-rod dilatometer test stand. Aluminum 6061-T6 is used to calibrate the test stand.
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. The micro-array structure has been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly microlens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structures of the LFIAS commonly used in light field cameras in recent years, analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)." This LFIAS is analyzed based on the knowledge of information optics; the paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.
Large, high resolution integrating TV sensor for astronomical applications
NASA Technical Reports Server (NTRS)
Spitzer, L. J.
1977-01-01
A magnetically focused SEC tube developed for photometric applications is described. Efforts to design a 70 mm version of the tube which meets the ST f/24 camera requirements of the space telescope are discussed. The photometric accuracy of the 70 mm tube is expected to equal that of the previously developed 35 mm tube. The tube meets the criterion of 50 percent response at 20 cycles/mm in the central region of the format, and, with the removal of the remaining magnetic parts, this spatial frequency is expected over almost all of the format. Since the ST f/24 camera requires sensitivity in the red as well as the ultraviolet and visible spectra, attempts were made to develop tubes with this ability. It was found that it may be necessary to choose between red and UV sensitivity and trade off red sensitivity against low background. Results of environmental tests indicate no substantive problems in utilizing the tube in a flight camera system that will meet the space shuttle launch requirements.
Automated exterior inspection of an aircraft with a pan-tilt-zoom camera mounted on a mobile robot
NASA Astrophysics Data System (ADS)
Jovančević, Igor; Larnier, Stanislas; Orteu, Jean-José; Sentenac, Thierry
2015-11-01
This paper deals with automated preflight aircraft inspection using a pan-tilt-zoom camera mounted on a mobile robot moving autonomously around the aircraft. The general topic is an image processing framework for detection and exterior inspection of different types of items, such as a closed or unlatched door, a mechanical defect on the engine, the integrity of the empennage, or damage caused by impacts or cracks. The detection step allows the system to focus on the regions of interest and point the camera toward the item to be checked. It is based on the detection of regular shapes, such as rounded-corner rectangles, circles, and ellipses. The inspection task relies on clues such as uniformity of isolated image regions, convexity of segmented shapes, and periodicity of the image intensity signal. The approach is applied to the inspection of four items of the Airbus A320: the oxygen bay handle, air-inlet vent, static ports, and fan blades. The results are promising and demonstrate the feasibility of an automated exterior inspection.
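As an illustration of the regular-shape detection such a pipeline relies on, the sketch below finds circle candidates with a Hough transform in OpenCV. The image path and parameter values are placeholders that would be tuned per inspection item; this is not the authors' code.

    # Detect circular items (e.g. vents, ports) in a grayscale inspection view.
    import cv2
    import numpy as np

    img = cv2.imread("inspection_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    assert img is not None, "image not found"
    blurred = cv2.GaussianBlur(img, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=100, param2=60, minRadius=10, maxRadius=120)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print("candidate item at (%d, %d), radius %d px" % (x, y, r))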
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage sets the benchmark for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear-array whiskbroom charge-coupled device (CCD) arrays to image the Earth yields thermal-infrared images of wide swath with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model and calibrates the temporal system parameters and whiskbroom angle parameters. Using YG-14, China's first satellite equipped with high-spatial-resolution thermal-infrared cameras, imagery of Anyang and Taiyuan, China, is used to conduct a geometric calibration experiment and a verification test, respectively. Results show that the plane positioning accuracy is better than 30 pixels without ground control points (GCPs) and better than 1 pixel with GCPs.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN, and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia data about related history, architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who is intending to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
Calibration of EFOSC2 Broadband Linear Imaging Polarimetry
NASA Astrophysics Data System (ADS)
Wiersema, K.; Higgins, A. B.; Covino, S.; Starling, R. L. C.
2018-03-01
The European Southern Observatory Faint Object Spectrograph and Camera v2 (EFOSC2) is one of the workhorse instruments on ESO's New Technology Telescope and one of the most popular instruments at the La Silla observatory. It is mounted at a Nasmyth focus and therefore exhibits strong instrumental polarisation that depends on wavelength and pointing direction. In this document, we describe our efforts to calibrate the broadband imaging polarimetry mode, and provide a calibration for broadband B, V, and R filters to a level that satisfies most use cases (i.e., a polarimetric calibration uncertainty of 0.1%). We make our calibration codes public. This calibration effort can be used to enhance the yield of future polarimetric programmes with EFOSC2 by allowing good calibration with a greatly reduced number of standard star observations. Similarly, our calibration model can be combined with archival calibration observations to post-process data taken in past years, forming an EFOSC2 legacy archive with substantial scientific potential.
Iodine filter imaging system for subtraction angiography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.
1993-11-01
A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single-energy broad-bandwidth X-ray beam produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. Synchronized with the movement of the iodine filter into and out of the X-ray beam, an oscillating mirror focuses the two output images of the image intensifier side by side on the photoconductive layer of a camera tube. Both images are read out by electron-beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety-two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
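Conceptually, the K-edge subtraction applied to each image pair is a logarithmic subtraction that suppresses attenuation common to both energies and leaves the iodine contrast, whose attenuation jumps at the K-edge. A minimal sketch with synthetic frames (not the system's actual processing chain):

    # Log-subtraction of an iodine-filtered / unfiltered image pair.
    import numpy as np

    def kedge_subtract(img_filtered, img_unfiltered, eps=1e-6):
        # Differences in log-intensity isolate the iodine signal.
        return np.log(img_unfiltered + eps) - np.log(img_filtered + eps)

    img_f = np.ones((256, 256))        # iodine-filtered frame (synthetic)
    img_u = np.ones((256, 256))
    img_u[100:110, :] *= 0.7           # simulated iodine-filled vessel band
    print(np.abs(kedge_subtract(img_f, img_u)).max())   # ~0.36 in the vessel band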
SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)
NASA Astrophysics Data System (ADS)
Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter
2016-08-01
The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61 mm × 61 mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a home-grown instrument suite. The prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92 mm × 92 mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74" telescope the available envelope is constrained by the optical footprint of the secondary if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600 mm diameter × 250 mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10 mm-thick window, and a tip/tilt mechanism for the detector into 100 mm of depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20 mm has been designed in-house, using a sliding-curtain mechanism to cover an aperture of 125 mm × 125 mm, while the filter wheel has been replaced with two peripheral filter cartridges (six filters each) and a gripper to move a filter into the beam. We intend to use through-vacuum-wall PCB technology across the cryostat vacuum interface instead of traditional hermetic connector-based wiring; this has advantages in terms of space saving and improved performance. Measures are being taken to minimise the risk of damage during an instrument change. The detector is cooled by a Stirling cooler, which can be disconnected from the cooler unit without risking damage. Each telescope has a dedicated cooler unit into which the coolant hoses of WiNCam will plug. To overcome an inherent drawback of Stirling coolers, an active vibration damper is incorporated. During an instrument change, the autoguider remains on the telescope, and the filter magazines, shutter, and detector package are removed as a single unit. The new alt-az telescope, manufactured by APM-Telescopes, is a 1-metre f/8 Ritchey-Chrétien with optics by LOMO. The field-flattening optics were designed by Darragh O'Donoghue to have high UV throughput and uniform encircled energy over the 100 mm diameter field. WiNCam will be mounted on one Nasmyth port, with the second port available for SHOC (Sutherland High-speed Optical Camera) and guest instrumentation. The telescope will be located in Sutherland, where an existing dome is being extensively renovated to accommodate it. Commissioning is planned for the second half of 2016.
Advances in Heavy Ion Beam Probe Technology and Operation on MST
NASA Astrophysics Data System (ADS)
Demers, D. R.; Connor, K. A.; Schoch, P. M.; Radke, R. J.; Anderson, J. K.; Craig, D.; den Hartog, D. J.
2003-10-01
A technique to map the magnetic field of a plasma via spectral imaging is being developed with the Heavy Ion Beam Probe on the Madison Symmetric Torus. The technique will utilize two-dimensional images of the ion beam in the plasma, acquired by two CCD cameras, to generate a three-dimensional reconstruction of the beam trajectory. This trajectory, together with the known beam ion mass, energy, and charge state, will be used to determine the magnetic field of the plasma. A suitable emission line has not yet been observed, since radiation from the MST plasma is both broadband and intense. An effort to raise the emission intensity from the ion beam by improving beam focus and current has been undertaken. Simulations of the accelerator ion optics and beam characteristics led to a technique, confirmed by experiment, that achieves a narrower beam and a marked increase in ion current near the plasma surface. The improvements arising from these simulations will be discussed. Realization of the magnetic field mapping technique is contingent upon accurate reconstruction of the beam trajectory from the camera images. Simulations of two-camera CCD images, including the interior of MST, its various landmarks, and beam trajectories, have been developed. These simulations accept user input such as camera locations, resolution via pixellization, and noise. The quality of the images simulated with these and other variables will help guide the selection of viewing-port pairs, image size, and camera specifications. The results of these simulations will be presented.
Characterization of Vegetation using the UC Davis Remote Sensing Testbed
NASA Astrophysics Data System (ADS)
Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.
2006-12-01
Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight, and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as imaging applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
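As an example of the kind of index such a red/NIR calibration enables, the NDVI is computed per pixel as (NIR - Red)/(NIR + Red). The sketch below uses synthetic reflectance arrays as placeholders for calibrated camera bands.

    # Per-pixel NDVI from calibrated red and near-infrared reflectance bands.
    import numpy as np

    red = np.random.uniform(0.05, 0.15, (480, 640))   # red reflectance (synthetic)
    nir = np.random.uniform(0.30, 0.60, (480, 640))   # NIR reflectance (synthetic)

    ndvi = (nir - red) / (nir + red + 1e-9)           # small epsilon avoids 0/0
    print("mean NDVI: %.2f" % ndvi.mean())            # dense vegetation ~0.6-0.8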
Cheap streak camera based on the LD-S-10 intensifier tube
NASA Astrophysics Data System (ADS)
Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.
1992-01-01
Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnification of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 × 500 μm in size. Registered radiation was imaged on a 5 × 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between the deflection plates. Sensitivity of the latter was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses of +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0-500 ns) circuit.
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable way to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-ChipTM, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera performance falls within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring, and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems, and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.
Intelligent viewing control for robotic and automation systems
NASA Astrophysics Data System (ADS)
Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.
1994-10-01
We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image-plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.
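As a rough illustration of the client camera's role, the sketch below implements a minimal background-differencing change detector that returns the image-plane centroid of the moving region; the thresholds and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def detect_moving_object(frame, background, diff_thresh=25, min_pixels=50):
    """Background-differencing change detection on a grayscale frame.

    Returns the image-plane centroid (x, y) of the changed region,
    or None when too few pixels changed to trust a detection."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > diff_thresh
    if mask.sum() < min_pixels:
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# The client camera would map this image-plane position through its
# calibration to map coordinates and send them to the server camera,
# which points its zoom optics at the target.
```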
Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors
NASA Astrophysics Data System (ADS)
Han, Ling
Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (<100 μm) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e., the image intensifier, was developed, which revealed the dominating factors that limit the energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously developed iQID-based single-photon emission computed tomography (SPECT) system, called FastSPECT III, was substantially advanced in terms of data acquisition software, system sensitivity, and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and a new system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology toward clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered, counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
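The photon-counting step of an intensified detector of this kind is commonly implemented as frame thresholding followed by cluster centroiding. A minimal sketch of that generic approach is below; the dissertation's actual algorithm is not given in the abstract, so the threshold and function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_photon_events(frame, thresh):
    """Return sub-pixel centroids of photon flashes in one intensified frame."""
    mask = frame > thresh                   # isolate bright scintillation flashes
    labels, n = ndimage.label(mask)         # group connected bright pixels
    # intensity-weighted centroid of each cluster -> estimated event position
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))
```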
A Digital Approach to Learning Petrology
NASA Astrophysics Data System (ADS)
Reid, M. R.
2011-12-01
In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive ($200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used to help students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual students' needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice the use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed. In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.
Realization of the ergonomics design and automatic control of the fundus cameras
NASA Astrophysics Data System (ADS)
Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye
2012-12-01
Ergonomic design in fundus cameras should extend patient comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether or not their eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.
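A common way to realize the auto-focusing step described here is a search over focus positions that maximizes an image sharpness measure. The sketch below assumes hypothetical capture() and move_to() hardware hooks and a gradient-energy metric, since the paper's exact criterion is not stated.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus measure: large when edges are crisp."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx**2 + gy**2)

def autofocus(capture, move_to, positions):
    """Step the focus stage through candidate positions and return the
    position whose captured image maximizes the sharpness measure."""
    scores = []
    for p in positions:
        move_to(p)                       # command the focus stage (hardware hook)
        scores.append(sharpness(capture()))
    return positions[int(np.argmax(scores))]
```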
A simple integrated system for electrophysiologic recordings in animals
Slater, Bernard J.; Miller, Neil R.; Bernstein, Steven L.; Flower, Robert W.
2009-01-01
This technical note describes a modification to a fundus camera that permits simultaneous recording of pattern electroretinograms (pERGs) and pattern visual evoked potentials (pVEPs). The modification consists of placing an organic light-emitting diode (OLED) in the split-viewer pathway of a fundus camera, in a plane conjugate to the subject’s pupil. In this way, a focused image of the OLED can be delivered to a precisely known location on the retina. The advantage of using an OLED is that it can achieve high luminance while maintaining high contrast, and with minimal degradation over time. This system is particularly useful for animal studies, especially when precise retinal positioning is required. PMID:19137347
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, and satellite and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a multi-camera environment that enhances the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
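For reference, the two metrics named above can be computed as in the numpy sketch below; note that this SSIM uses global image statistics, whereas standard implementations evaluate it over sliding local windows.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized frames."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Single-window SSIM computed from global statistics."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```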
Time-resolved x-ray spectra from laser-generated high-density plasmas
NASA Astrophysics Data System (ADS)
Andiel, U.; Eidmann, Klaus; Witte, Klaus-Juergen
2001-04-01
We focused frequency-doubled ultrashort laser pulses on solid C, F, Na, and Al targets; K-shell emission was systematically investigated by time-resolved spectroscopy using a sub-ps streak camera. A large number of laser shots can be accumulated when triggering the camera with an Auston switch system at very high temporal precision. The system provides an outstanding time resolution of 1.7 ps while accumulating thousands of laser shots. The duration of the He-α K-shell resonance lines was observed to be in the range of 2-4 ps and decreases with atomic number. The experimental results are well reproduced by hydro-code simulations post-processed with an atomic kinetics code.
NASA Technical Reports Server (NTRS)
Philpott, D. E.; Harrison, G.; Turnbill, C.; Bailey, P. F.
1979-01-01
Research on retinal circulation during space flight required the development of a simple technique to provide self monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark-adaption of the subject. Pictures were obtained without pupil constriction by the use of a high speed strobe light. This method also has applications for clinical medicine.
The High Energy Detector of Simbol-X
NASA Astrophysics Data System (ADS)
Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Le Mer, I.; Pinsard, F.; Cara, C.; Goetschy, A.; Martignac, J.; Tauzin, G.; Hervé, S.; Laurent, P.; Chipaux, R.; Rio, Y.; Fontignie, J.; Horeau, B.; Authier, M.; Ferrando, P.
2009-05-01
The High Energy Detector (HED) is one of the three detection units on board the Simbol-X detector spacecraft. It is placed below the Low Energy Detector so as to collect focused photons in the energy range from 8 to 80 keV. It consists of a mosaic of 64 independent cameras, divided into 8 sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full-custom front-end electronics into a single component. The status of the HED design will be reported. The promising results obtained from the first micro-camera prototypes, called Caliste 64 and Caliste 256, will be presented to illustrate the expected performance of the instrument.
FreeTure: A Free software to capTure meteors for FRIPON
NASA Astrophysics Data System (ADS)
Audureau, Yoan; Marmo, Chiara; Bouley, Sylvain; Kwon, Min-Kyung; Colas, François; Vaubaillon, Jérémie; Birlan, Mirel; Zanda, Brigitte; Vernazza, Pierre; Caminade, Stephane; Gattecceca, Jérôme
2014-02-01
The Fireball Recovery and Interplanetary Observation Network (FRIPON) is a French project started in 2014 that will monitor the sky with 100 all-sky cameras to detect meteors and to retrieve related meteorites on the ground. Several detection software packages exist; some are proprietary, and some are hardware dependent. We present here the open-source software for meteor detection to be installed on the FRIPON network's stations. The software runs on Linux with gigabit Ethernet cameras, and we plan to make it cross-platform. This paper focuses on the meteor detection method used for the pipeline development and on its present capabilities.
Multiplexed time-lapse photomicrography of cultured cells.
Heye, R R; Kiebler, E W; Arnzen, R J; Tolmach, L J
1982-01-01
A system of cinemicrography has been developed in which a single microscope and 16 mm camera are multiplexed to produce a time-lapse photographic record of many fields simultaneously. The field coordinates and focus are selected via a control console and entered into the memory of a dedicated microcomputer; they are then automatically recalled in sequence, thus permitting the photographing of additional fields in the interval between exposures of any given field. Sequential exposures of each field are isolated in separate sections of the film by means of a specially designed random-access camera that is also controlled by the microcomputer. The need to unscramble frames is thereby avoided, and the developed film can be directly analysed.
ERIC Educational Resources Information Center
Giles, Rebecca McMahon
2006-01-01
Exposure to cell phones, DVD players, video games, computers, digital cameras, and iPods has made today's young people more technologically advanced than those of any previous generation. As a result, parents are now concerned that their children are spending too much time in front of the computer. In this article, the author focuses her…
Play & Play Grounds. A Report.
ERIC Educational Resources Information Center
Stone, Jeannette Galambos
Using camera and tape recorder, a photographer and an early childhood specialist explored as a team the universe of children's outdoor play, seeking worthy and innovative ideas and stressing urban playground problems and solutions. The resulting photographs and text focus on (1) the characteristics of play, (2) the nature of playgrounds, and (3)…
NASA Astrophysics Data System (ADS)
McIntosh, Benjamin Patrick
Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.
Hyper Suprime-Cam: System design and verification of image quality
NASA Astrophysics Data System (ADS)
Miyazaki, Satoshi; Komiyama, Yutaka; Kawanomoto, Satoshi; Doi, Yoshiyuki; Furusawa, Hisanori; Hamana, Takashi; Hayashi, Yusuke; Ikeda, Hiroyuki; Kamata, Yukiko; Karoji, Hiroshi; Koike, Michitaro; Kurakami, Tomio; Miyama, Shoken; Morokuma, Tomoki; Nakata, Fumiaki; Namikawa, Kazuhito; Nakaya, Hidehiko; Nariai, Kyoji; Obuchi, Yoshiyuki; Oishi, Yukie; Okada, Norio; Okura, Yuki; Tait, Philip; Takata, Tadafumi; Tanaka, Yoko; Tanaka, Masayuki; Terai, Tsuyoshi; Tomono, Daigo; Uraguchi, Fumihiro; Usuda, Tomonori; Utsumi, Yousuke; Yamada, Yoshihiko; Yamanoi, Hitomi; Aihara, Hiroaki; Fujimori, Hiroki; Mineo, Sogo; Miyatake, Hironao; Oguri, Masamune; Uchida, Tomohisa; Tanaka, Manobu M.; Yasuda, Naoki; Takada, Masahiro; Murayama, Hitoshi; Nishizawa, Atsushi J.; Sugiyama, Naoshi; Chiba, Masashi; Futamase, Toshifumi; Wang, Shiang-Yu; Chen, Hsin-Yo; Ho, Paul T. P.; Liaw, Eric J. Y.; Chiu, Chi-Fang; Ho, Cheng-Lin; Lai, Tsang-Chih; Lee, Yao-Cheng; Jeng, Dun-Zen; Iwamura, Satoru; Armstrong, Robert; Bickerton, Steve; Bosch, James; Gunn, James E.; Lupton, Robert H.; Loomis, Craig; Price, Paul; Smith, Steward; Strauss, Michael A.; Turner, Edwin L.; Suzuki, Hisanori; Miyazaki, Yasuhito; Muramatsu, Masaharu; Yamamoto, Koei; Endo, Makoto; Ezaki, Yutaka; Ito, Noboru; Kawaguchi, Noboru; Sofuku, Satoshi; Taniike, Tomoaki; Akutsu, Kotaro; Dojo, Naoto; Kasumi, Kazuyuki; Matsuda, Toru; Imoto, Kohei; Miwa, Yoshinori; Suzuki, Masayuki; Takeshi, Kunio; Yokota, Hideo
2018-01-01
The Hyper Suprime-Cam (HSC) is an 870 megapixel prime-focus optical imaging camera for the 8.2 m Subaru telescope. The wide-field corrector delivers sharp images of 0.2" (FWHM) in the HSC-i band over the entire 1.5° diameter field of view. The collimation of the camera with respect to the optical axis of the primary mirror is done with hexapod actuators, the mechanical accuracy of which is a few microns. Analysis of the remaining wavefront error in off-focus stellar images reveals that the collimation of the optical components meets design specifications. While there is flexure of mechanical components, it too is within the design specification. As a result, the camera achieves seeing-limited imaging on Maunakea most of the time; the median seeing over several years of observing is 0.67" (FWHM) in the i band. The sensors are p-channel, fully depleted CCDs of 200 μm thickness (2048 × 4176 pixels, 15 μm square), and we employ 116 of them to pave the 50 cm diameter focal plane. The minimum interval between exposures is 34 s, including the time to read out arrays, to transfer data to the control computer, and to save them to the hard drive. HSC on Subaru uniquely features a combination of a large aperture, a wide field of view, sharp images, and high sensitivity, especially at longer wavelengths, which makes the HSC one of the most powerful observing facilities in the world.
An automated system for whole microscopic image acquisition and analysis.
Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús
2014-09-01
The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional monochrome digital camera alongside the default color camera and LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to digitize correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast, and focus. It has been validated on 150 tissue samples of brain autopsies, prostate biopsies, and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
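The abstract does not give the exact definitions of the three metrics, so the sketch below uses common stand-ins (Tenengrad gradient energy for sharpness, normalized RMS for contrast, variance of Laplacian for focus) to show how such per-tile quality scores are typically computed.

```python
import numpy as np
from scipy import ndimage

def quality_metrics(img):
    """Illustrative sharpness / contrast / focus scores for a digitized tile."""
    f = img.astype(float)
    gx = ndimage.sobel(f, axis=1)             # horizontal gradient
    gy = ndimage.sobel(f, axis=0)             # vertical gradient
    sharpness = np.mean(gx**2 + gy**2)        # Tenengrad gradient energy
    contrast = f.std() / (f.mean() + 1e-9)    # normalized RMS contrast
    focus = ndimage.laplace(f).var()          # variance of Laplacian
    return sharpness, contrast, focus
```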
Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.
Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V
2011-06-01
Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge-coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.
Yu, Ying; Shen, Guofeng; Zhou, Yufeng; Bai, Jingfeng; Chen, Yazhu
2013-11-01
With the popularity of ultrasound therapy in clinics, characterization of the acoustic field is important not only for the tolerability and efficiency of ablation, but also for treatment planning. A quantitative method was introduced to assess the intensity distribution of a focused ultrasound beam using a hydrophone and an infrared camera, with no prior knowledge of the acoustic and thermal parameters of the absorber or the configuration of the array elements. This method was evaluated in both theoretical simulations and experimental measurements. A three-layer model was developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during sonication, and the consequent temperature elevation. Experiments were carried out to measure the acoustic pressure with the hydrophone and the temperature elevation with the infrared camera. The percentage differences between the derived results and the simulation are <4.1% for on-axis intensity and <21.1% for -6-dB beam width at heating times up to 360 ms in the focal region of three phased-array ultrasound transducers using two different absorbers. The proposed method is an easy, quick and reliable approach to calibrating focused ultrasound transducers with satisfactory accuracy. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turkington, T.
This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.
Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Warren
2004-06-01
There is significant motivation to provide robotic systems with improved autonomy as a means to accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean space through a perspective (range-dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that compensates for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications, and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.
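For context, the classic image-based visual servoing law that such controllers build on maps the image-feature error to a camera velocity through the pseudo-inverse of the interaction matrix. The sketch below shows that textbook step, not the report's specific Lyapunov-based controller; feature coordinates are assumed normalized and point depths assumed known.

```python
import numpy as np

def ibvs_velocity(features, targets, depths, lam=0.5):
    """One image-based visual servoing step: camera twist [vx vy vz wx wy wz]
    from the feature-error vector via the interaction-matrix pseudo-inverse."""
    L = []
    for (x, y), Z in zip(features, depths):   # normalized image coordinates
        L.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        L.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(np.asarray(L)) @ e
```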
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2016-12-01
An effective early warning system to detect volcanic activity is an invaluable tool, but often very expensive. Detecting and monitoring precursory events, thermal signatures, and ongoing eruptions in near real-time is essential, but conventional methods are often logistically challenging, expensive, and difficult to maintain. Our investigation explores the use of 'off the shelf' webcams and low-light cameras, operating in the visible to near-infrared portions of the electromagnetic spectrum, to detect and monitor volcanic incandescent activity. Large databases of webcam imagery already exist at institutions around the world but are often extremely underutilised, and we aim to change this. We focus on the early detection of thermal signatures at volcanoes, using automated scripts to analyse individual images for changes in pixel brightness, allowing us to detect relative changes in thermally incandescent activity. Primarily, our work focuses on freely available streams of webcam images from around the world, which we can download and analyse in near real-time. When changes in activity are detected, an alert is sent to the users informing them of the changes in activity and the need for further investigation. Although relatively rudimentary, this technique provides constant monitoring for volcanoes in remote locations and developing nations, where it is not financially viable to deploy expensive equipment. We also purchased several of our own cameras, which were extensively tested in controlled laboratory settings with a black body source to determine their individual spectral response. Our aim is to deploy these cameras at active volcanoes knowing exactly how they will respond to varying levels of incandescence. They are ideal for field deployments as they are cheap ($0-1,000), consume little power, are easily replaced, and can provide telemetered near real-time data. Data from Shiveluch volcano, Russia, and our spectral response lab experiments are presented here.
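A minimal version of the kind of automated brightness-change alert script described here might look like the following; the webcam URL, polling interval, and alert threshold are placeholder assumptions.

```python
import io
import time
import numpy as np
from urllib.request import urlopen
from PIL import Image

WEBCAM_URL = "http://example.org/volcano.jpg"   # placeholder image feed

def mean_brightness(url):
    """Fetch the latest webcam frame and return its mean gray level."""
    img = Image.open(io.BytesIO(urlopen(url).read())).convert("L")
    return float(np.asarray(img).mean())

baseline = mean_brightness(WEBCAM_URL)
while True:
    time.sleep(60)                       # poll once a minute
    b = mean_brightness(WEBCAM_URL)
    if b > 1.5 * baseline:               # relative brightening: possible activity
        print(f"ALERT: brightness {b:.1f} vs baseline {baseline:.1f}")
    baseline = 0.9 * baseline + 0.1 * b  # slowly updated running baseline
```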
Athena microscopic Imager investigation
Herkenhoff, K. E.; Squyres, S. W.; Bell, J.F.; Maki, J.N.; Arneson, H.M.; Bertelsen, P.; Brown, D.I.; Collins, S.A.; Dingizian, A.; Elliott, S.T.; Goetz, W.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Kirk, R.L.; McLennan, S.; Morris, R.V.; Scherr, L.M.; Schwochert, M.A.; Shiraishi, L.R.; Smith, G.H.; Soderblom, L.A.; Sohl-Dickstein, J. N.; Wadsworth, M.V.
2003-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (±2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars. Copyright 2003 by the American Geophysical Union.
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
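The pairwise matching step described above can be prototyped in a few lines with OpenCV. This sketch is an illustration rather than the authors' code: it extracts SIFT keypoints, applies Lowe's ratio test, and estimates the inter-image homography with RANSAC.

```python
import cv2
import numpy as np

def pairwise_homography(img1, img2):
    """Match SIFT keypoints between two images of a planar surface and
    estimate their inter-image homography with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```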
A simple optical tweezers for trapping polystyrene particles
NASA Astrophysics Data System (ADS)
Shiddiq, Minarni; Nasir, Zulfa; Yogasari, Dwiyana
2013-09-01
Optical tweezers are an optical trap. For decades they have served as an optical tool that can trap and manipulate particles ranging from the very small, such as DNA, to larger ones such as bacteria. The trapping force comes from the radiation pressure of laser light focused onto a group of particles. Optical tweezers have been used in many research areas such as atomic physics, medical physics, biophysics, and chemistry. Here, a simple optical tweezers setup has been constructed using a modified Leybold laboratory optical microscope. The ocular lens of the microscope was removed for laser light and digital camera access. Light from a Coherent diode laser with wavelength λ = 830 nm and power 50 mW is sent through an oil-immersion objective lens with magnification 100× and NA 1.25 to a cell made from microscope slides containing polystyrene particles. Polystyrene particles with sizes of 3 μm and 10 μm are used. A CMOS Thorlabs camera type DCC1545M with USB interface and a 35 mm Thorlabs camera lens are connected to a desktop and used to monitor the trapping and measure the stiffness of the trap. The camera is accompanied by software that enables the user to capture and save images. The images are analyzed using ImageJ and a Scion macro. The polystyrene particles have been trapped successfully. The stiffness of the trap depends on the size of the particles and the power of the laser. The stiffness increases linearly with power and decreases as the particle size grows.
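One standard way to estimate trap stiffness from camera-tracked bead positions is the equipartition theorem, k = kB*T / var(x); the abstract does not say which method the authors used, so this is an assumption.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(x_m, T=295.0):
    """Equipartition estimate of trap stiffness (N/m) from a bead's
    tracked positions x_m (metres) along one axis."""
    return kB * T / np.var(np.asarray(x_m))
```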
The MVACS Surface Stereo Imager on Mars Polar Lander
NASA Astrophysics Data System (ADS)
Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.
2001-08-01
The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm.
Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.
2017-08-01
Multi-lens multispectral cameras (MSCs), such as the Micasense Rededge and Parrot Sequoia, record each spectral band through a separate lens. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, so band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration of multi-lens MSCs. The first step obtains the camera rig information from camera system calibration and utilizes the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step optimizes the results in order to acquire better co-registration accuracy. Because parallax causes significant band misregistration when images are taken closer to the targets, four datasets acquired with the Rededge and Sequoia, comprising aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for aerial images show that RABBIT can achieve a sub-pixel accuracy level that is suitable for the band co-registration purposes of any multi-lens MSC. The results for close-range images show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.
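RABBIT itself is described only at a high level here, so as a generic stand-in the sketch below aligns one band to a reference band with OpenCV's ECC algorithm under an affine model; inputs are assumed to be float32 grayscale arrays, and the iteration counts are illustrative.

```python
import cv2
import numpy as np

def coregister_band(ref_band, band):
    """Align one spectral band to a reference band with an affine ECC fit
    (a generic stand-in for one step of multi-lens band co-registration)."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref_band, band, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = band.shape
    return cv2.warpAffine(band, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```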
Accuracy of an optical active-marker system to track the relative motion of rigid bodies.
Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A
2007-01-01
The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary, and found to be 0.04 degrees and 0.03 mm. Incremental 10-degree rotations and 10-mm translations were made using a tool more precise than the Optotrak. Increasing camera distance decreased the precision (increasing the range of values observed for a set motion) and increased the rotation bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics; therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10-degree rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance to which the cameras are focused during calibration.
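Bias and repeatability figures of this kind can be computed from repeated trials as in the sketch below; the 95% limit here is taken as the 95th percentile of absolute deviations from the mean, which is one common convention and may differ from the paper's exact definition.

```python
import numpy as np

def bias_and_repeatability(measured, nominal):
    """Bias (mean offset from the commanded motion) and a 95% repeatability
    limit from repeated trials of the same nominal motion."""
    err = np.asarray(measured, dtype=float) - nominal
    bias = err.mean()
    limit = np.percentile(np.abs(err - bias), 95)
    return bias, limit

# Example: ten repeated 10-degree rotations (values are made up)
bias, limit = bias_and_repeatability(
    [10.02, 9.95, 10.11, 9.88, 10.05, 9.97, 10.08, 9.92, 10.03, 9.99], 10.0)
```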
Best practices to optimize intraoperative photography.
Gaujoux, Sébastien; Ceribelli, Cecilia; Goudard, Geoffrey; Khayat, Antoine; Leconte, Mahaut; Massault, Pierre-Philippe; Balagué, Julie; Dousset, Bertrand
2016-04-01
Intraoperative photography is used extensively for communication, research, and teaching. The objective of the present work was to define, using a standardized methodology and literature review, the best technical conditions for intraoperative photography. Using either a smartphone camera, a bridge camera, or a single-lens reflex (SLR) camera, photographs were taken under various standard conditions by a professional photographer. All images were independently assessed blinded to technical conditions to define the best shooting conditions and methods. For better photographs, an SLR camera with manual settings should be used. Photographs should be centered and taken vertically and orthogonal to the surgical field, with a linear scale to avoid errors in perspective. The shooting distance should be about 75 cm using an 80-100 mm focal length. Flash should be avoided and low-powered scialytic light should be used without focus. The operative field should be clean, wet surfaces should be avoided, and metal instruments should be hidden to avoid reflections. For an SLR camera, ISO speed should be as low as possible, the autofocus area selection mode should be single-point AF, shutter speed should be above 1/100 second, and the aperture should be as narrow as possible, above f/8. For a smartphone, use the high-dynamic-range setting if available; use of flash, digital filters, effect apps, and digital zoom is not recommended. If a few basic technical rules are known and applied, high-quality photographs can be taken by amateur photographers and can fit the standards accepted in clinical practice, academic communication, and publications. Copyright © 2016 Elsevier Inc. All rights reserved.
Camera Development for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Moncada, Roberto Jose
2017-01-01
With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P.; Dehn, J.
2014-12-01
Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts, that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply the full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in the complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured from cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a camera array of 16 cameras was used to reconstruct the four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
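The threshold-selection procedure described above (sweep candidate thresholds, compute the reprojection cross-correlation for each, fit a cubic, and take its maximum) can be sketched as follows; the correlation values are assumed to be computed elsewhere by comparing captured and reprojected images.

```python
import numpy as np

def optimal_threshold(thresholds, correlations):
    """Fit a cubic to correlation-vs-threshold samples and return the
    threshold at which the fitted curve is maximal."""
    coeffs = np.polyfit(thresholds, correlations, 3)
    fine = np.linspace(min(thresholds), max(thresholds), 1000)
    return fine[int(np.argmax(np.polyval(coeffs, fine)))]

# Example with made-up correlation samples from a threshold sweep
t = optimal_threshold([0.1, 0.2, 0.3, 0.4, 0.5],
                      [0.61, 0.74, 0.82, 0.79, 0.65])
```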
Portable retinal imaging for eye disease screening using a consumer-grade digital camera
NASA Astrophysics Data System (ADS)
Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter
2012-03-01
The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.
Spherical Images for Cultural Heritage: Survey and Documentation with the Nikon KM360
NASA Astrophysics Data System (ADS)
Gottardi, C.; Guerra, F.
2018-05-01
The work presented here focuses on the analysis of the potential of spherical images acquired with dedicated cameras for the documentation and three-dimensional reconstruction of Cultural Heritage. Nowadays, thanks to the introduction of cameras able to generate panoramic images automatically, without requiring stitching software to join different photos, spherical images allow the documentation of spaces in an extremely fast and efficient way. In this particular case, the Nikon Key Mission 360 spherical camera was tested on the Tolentini cloister, formerly part of the convent of the adjacent church and now the seat of the Iuav University of Venice. The aim of the research is to test the acquisition of spherical images with the KM360 and to compare the resulting photogrammetric models with data acquired from a laser scanning survey, in order to assess the metric accuracy and the level of detail achievable with this particular camera. This work is part of a wider research project that the Photogrammetry Laboratory of the Iuav University of Venice has been carrying out over the last few months; the final aim of this research project is not only the comparison between 3D models obtained from spherical images and laser scanning survey techniques, but also the examination of their reliability and accuracy with respect to previous methods of generating spherical panoramas. At the end of the research work, we aim to arrive at an operational procedure for applying spherical cameras to the metric survey and documentation of Cultural Heritage.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degree-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. The proposed method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Detection of pointing errors with CMOS-based camera in intersatellite optical communications
NASA Astrophysics Data System (ADS)
Yu, Si-yuan; Ma, Jing; Tan, Li-ying
2005-01-01
For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both the spatial acquisition and the tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can implement the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a specified portion of the array, and under certain conditions the maximum allowed frame rate increases as the size of the area of interest decreases. A commercially available CMOS camera delivering 105 fps at 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually read out. Beam angles varying across the field of view can be detected after the light passes through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors are computed with the centroid equation. Tests showed that: (1) 500 fps at 100×100 is achievable in acquisition with a 1 mrad field of view; (2) 3k fps at 10×10 is achievable in tracking with a 0.1 mrad field of view.
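The centroid equation referred to here is the standard intensity-weighted mean over the spot pixels in the area of interest. A minimal sketch, with a hypothetical spot image:

```python
import numpy as np

def centroid(roi):
    """Intensity-weighted centroid of a spot image (the 'centroid
    equation' used to derive pointing errors from the focal-plane spot)."""
    roi = roi.astype(float)
    total = roi.sum()
    ys, xs = np.indices(roi.shape)
    return (xs * roi).sum() / total, (ys * roi).sum() / total

# Hypothetical 10x10 area-of-interest readout with a bright spot.
spot = np.zeros((10, 10))
spot[4:7, 5:8] = 200.0
print(centroid(spot))  # -> (~6.0, ~5.0) in pixel coordinates
```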
ERIC Educational Resources Information Center
Martínez-Álvarez, Patricia
2017-01-01
Focusing on two bilingual children experiencing learning difficulties, I explore the scientific representations these students generate in an afterschool programme where they have opportunities to exercise agency. In the programme, children use a digital camera to document science in their lives and engage in conversations about the products they…
BodyHeat Encounter: Performing Technology in Pedagogical Spaces of Surveillance/Intimacy
ERIC Educational Resources Information Center
Fels, Lynn; Ricketts, Kathryn
2015-01-01
What occurs when videographer and performer encounter each other through the lens of a camera? This collaborative performative inquiry focuses on embodiment and emergent narrative as realized through an encounter between technology and the visceral body--a relational body that smells, touches, sees, hears and feels the emergent world through…
Video-Taping Dialogs, with Commentary to Teach Cultural Elements.
ERIC Educational Resources Information Center
Taylor, Harvey M.
Description of a project involving the use of the video-tape recorder in a beginning course in Japanese focuses on cultural implications of basic unit dialogues. Instant replay, close-up, and other camera techniques allow students to concentrate on cross-cultural phenomena which are normally not perceived without the use of media. General…
Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments
ERIC Educational Resources Information Center
Schultz, Patrick L.; Quinn, Andrew S.
2014-01-01
In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…
Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.
1983-08-15
[Extraction fragments: "...obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to..." and "...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey..."]
Observation of interaction of shock wave with gas bubble by image converter camera
NASA Astrophysics Data System (ADS)
Yoshii, M.; Tada, M.; Tsuji, T.; Isuzugawa, Kohji
1995-05-01
When a spark discharge occurs at the first focal point of a semiellipsoidal reflector located in water, a spherical shock wave is produced. Part of the wave spreads without reflecting off the reflector and is called the direct wave in this paper. Another part reflects off the semiellipsoid and converges near the second focal point; this part is named the focusing wave and locally produces a high pressure. This phenomenon is applied in kidney stone disintegrators. However, there is concern that cavitation bubbles, induced in the body by the expansion wave following the focusing wave, may injure human tissue around the kidney stone. In this paper, in order to examine what happens when shock waves strike bubbles on human tissue, the behavior of an air bubble struck by the spherical shock wave is visualized with a schlieren system and photographed using an image converter camera. In addition, the variations in pressure amplitude caused by the shock wave and the flow of water around the bubble are measured with a pressure probe.
Looking ever so much like an alien spacecraft, the Altus II remotely piloted aircraft shows off some
NASA Technical Reports Server (NTRS)
2002-01-01
Looking ever so much like an alien spacecraft, the Altus II remotely piloted aircraft shows off some of the instruments and camera lenses mounted in its nose for a lightning study over Florida flown during the summer of 2002. The Altus Cumulus Electrification Study (ACES), led by Dr. Richard Blakeslee of NASA Marshall Space Flight Center, focused on the collection of electrical, magnetic and optical measurements of thunderstorms. Data collected will help scientists understand the development and life cycles of thunderstorms, which in turn may allow meteorologists to more accurately predict when destructive storms may hit. The Altus II, built by General Atomics Aeronautical Systems, Inc., is one of several remotely operated aircraft developed and matured under NASA's Environmental Research Aircraft and Sensor Technology (ERAST) program. The program focused on developing airframe, propulsion, control system and communications technologies to allow unmanned aerial vehicles (UAVs) to operate at very high altitudes for long durations while carrying a variety of sensors, cameras or other instruments for science experiments, surveillance or telecommunications relay missions.
First photometric properties of Dome C, Antarctica
NASA Astrophysics Data System (ADS)
Chadid, M.; Vernin, J.; Jeanneaux, F.; Mekarnia, D.; Trinquet, H.
2008-07-01
Here we present the first photometric extinction measurements in the visible range performed at Dome C in Antarctica, using the PAIX photometer (Photometer AntarctIca eXtinction). It is made with "off the shelf" components: an Audine camera at the focus of the Blazhko telescope, a Meade M16 diaphragmed down to 15 cm. For an exposure time of 60 s without filter, a 10th V-magnitude star is measured with a precision of 1/100 mag. A first statistic over 16 nights in August 2007 yields an extinction of 0.5 magnitude per air mass, possibly due to high-altitude cirrus. This rather simple experiment shows that continuous observations can be performed at Dome C, allowing high frequency resolution in pulsation and asteroseismology studies. Light curves of the RR Lyrae star S Ara were established; they show the typical behaviour of an RR Lyrae star. A more sophisticated photometer, PAIX II, was installed at Dome C during the 2008 polar summer, with an ST10 XME camera, automatic guiding, autofocusing and Johnson/Bessel UBVRI filter wheels.
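The reported 0.5 magnitude per air mass follows the standard linear extinction law m(X) = m0 + kX. A short illustrative calculation (the extra-atmospheric magnitude m0 below is assumed for illustration only):

```python
# Standard atmospheric extinction law: observed magnitude grows linearly
# with air mass X, m(X) = m0 + k*X. With the reported k = 0.5 mag/airmass,
# a star seen at X = 2 appears 1.0 mag fainter than outside the atmosphere.
k = 0.5          # mag per air mass (from the abstract)
m0 = 10.0        # hypothetical extra-atmospheric magnitude
for X in (1.0, 1.5, 2.0):
    print(f"X = {X}: m = {m0 + k * X:.2f} mag")
```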
A Wide-field Camera and Fully Remote Operations at the Wyoming Infrared Observatory
NASA Astrophysics Data System (ADS)
Findlay, Joseph R.; Kobulnicky, Henry A.; Weger, James S.; Bucher, Gerald A.; Perry, Marvin C.; Myers, Adam D.; Pierce, Michael J.; Vogel, Conrad
2016-11-01
Upgrades at the 2.3 meter Wyoming Infrared Observatory telescope have provided the capability for fully remote operations by a single operator from the University of Wyoming campus. A line-of-sight 300 Mbit s⁻¹ 11 GHz radio link provides high-speed internet for data transfer and remote operations that include several realtime video feeds. Uninterruptable power is ensured by a 10 kVA battery supply for critical systems and a 55 kW autostart diesel generator capable of running the entire observatory for up to a week. The construction of a new four-element prime-focus corrector with fused-silica elements allows imaging over a 40′ field of view with a new 4096 × 4096 UV-sensitive prime-focus camera and filter wheel. A new telescope control system facilitates the remote operations model and provides 20″ rms pointing over the usable sky. Taken together, these improvements pave the way for a new generation of sky surveys supporting space-based missions and flexible-cadence observations advancing emerging astrophysical priorities such as planet detection, quasar variability, and long-term time-domain campaigns.
Recent developments with the Mars Observer Camera graphite/epoxy structure
NASA Astrophysics Data System (ADS)
Telkamp, Arthur R.
1992-09-01
The Mars Observer Camera (MOC) is one of the instruments aboard the Mars Observer Spacecraft, to be launched no later than September 1992, whose mission is to geologically and climatologically map the Martian surface and atmosphere over a period of one Martian year. This paper discusses the events in the development of MOC that took place in the past two years, with special attention given to the implementation of thermal blankets, shields, and thermal control paints to limit solar absorption while controlling stray light; vibration testing of Flight Unit No. 1; and thermal expansion testing. Results are presented of thermal-vacuum testing of Flight Unit No. 1. It was found that, although the temperature profiles were as predicted, the thermally induced focus displacements were not.
Focal plane arrays based on Type-II indium arsenide/gallium antimonide superlattices
NASA Astrophysics Data System (ADS)
Delaunay, Pierre-Yves
The goal of this work is to demonstrate that Type-II InAs/GaSb superlattices can perform high quality infrared imaging from the middle (MWIR) to the long (LWIR) wavelength infrared range. Theoretically, focal plane arrays (FPAs) based on this technology could be operated at higher temperatures, with lower dark currents, than the leading HgCdTe platform. This effort focuses on the fabrication of MWIR and LWIR FPAs with performance similar to existing infrared cameras. Some applications in the MWIR require fast, sensitive imagers able to sustain frame rates up to 100 Hz. Such speed can only be achieved with photon detectors; however, these cameras need to be operated below 170 K. Current research in this spectral band focuses on increasing the operating temperature of the FPA to a point where cooling could be performed with compact and reliable thermoelectric coolers. A Type-II superlattice was used to demonstrate a camera with performance similar to HgCdTe that could be operated up to room temperature. At 80 K, the camera could detect temperature differences as low as 10 mK for an integration time shorter than 25 ms. In the LWIR, the electrical performance of Type-II photodiodes is mainly limited by surface leakage. Aggressive processing steps such as hybridization and underfill can increase the dark current of the devices by several orders of magnitude. New cleaning and passivation techniques were used to reduce the dark current of FPA diodes by two orders of magnitude. The absorbing GaSb substrate was also removed to increase the quantum efficiency of the devices up to 90%. At 80 K, an FPA with a 9.6 μm 50% responsivity cutoff was able to detect temperature differences as low as 19 mK, limited only by the performance of the testing system. The non-uniformity in responsivity reached 3.8% for a 98.2% operability. The third generation of infrared cameras is based on multi-band imaging to improve the recognition capabilities of the imager. Preliminary detectors based on back-to-back diodes showed performance similar to single-color devices; the quantum efficiency was measured at higher than 40% for both bands. Preliminary imaging results were demonstrated in the LWIR.
NASA Astrophysics Data System (ADS)
Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.
2007-02-01
We present a design for a frequency-domain instrument that allows simultaneous gathering of magnetic resonance and diffuse optical tomographic imaging data. This small animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded-index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results that demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
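The optical-flow-with-feature-recognition step can be sketched with standard OpenCV primitives: track corners between consecutive frames and take a robust average of their displacement. This is a minimal illustration of the approach, not the flight code; all parameter values are assumptions.

```python
import cv2
import numpy as np

def frame_displacement(prev_gray, curr_gray):
    """Median image-plane displacement between two frames from sparse
    optical flow, a minimal sketch of the feature-based flow step."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    # Median is robust to a few mistracked features.
    return np.median(flow, axis=0)  # (dx, dy) in pixels

# Scaling pixel displacement to metric odometry requires camera height
# and intrinsics (assumed known), e.g. dx_m = dx_px * height / focal_px.
```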
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies the head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, a change in the content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
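The particle filter mentioned here follows the generic predict-weight-resample cycle; the sketch below illustrates that mechanism on a hypothetical one-dimensional attention variable and is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, likelihood):
    """One generic predict-weight-resample cycle: propagate particles
    through a motion model, reweight by the observation likelihood,
    and resample (a sketch of the mechanism, not the paper's model)."""
    particles = motion(particles)                      # predict
    weights = weights * likelihood(particles)          # update
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Hypothetical 1-D example: belief over the azimuth of the next focus
# of attention, perturbed by head movement and corrected by saliency.
n = 500
particles = rng.uniform(-60, 60, n)                    # degrees
weights = np.full(n, 1.0 / n)
motion = lambda p: p + rng.normal(0, 2.0, p.shape)     # head-motion noise
likelihood = lambda p: np.exp(-0.5 * ((p - 15.0) / 10.0) ** 2)  # stimulus at 15 deg
particles, weights = particle_filter_step(particles, weights, motion, likelihood)
print(np.mean(particles))  # posterior mean near the salient stimulus
```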
Taking the Observatory to the Astronomer
NASA Astrophysics Data System (ADS)
Bisque, T. M.
1997-05-01
Since 1992, Software Bisque's Remote Astronomy Software has been used by the Mt. Wilson Institute to allow interactive control of a 24" telescope and digital camera via modem. Software Bisque now introduces a comparable, relatively low-cost observatory system that allows powerful, yet "user-friendly" telescope and CCD camera control via the Internet. Utilizing software developed for the Windows 95/NT operating systems, the system offers point-and-click access to comprehensive celestial databases, extremely accurate telescope pointing, rapid download of digital CCD images by one or many users and flexible image processing software for data reduction and analysis. Our presentation will describe how the power of the personal computer has been leveraged to provide professional-level tools to the amateur astronomer, and include a description of this system's software and hardware components. The system software includes TheSky Astronomy Software™, CCDSoft CCD Astronomy Software™, TPoint Telescope Pointing Analysis System™ software, Orchestrate™ and, optionally, the RealSky CDs. The system hardware includes the Paramount GT-1100™ Robotic Telescope Mount, as well as third party CCD cameras, focusers and optical tube assemblies.
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we then check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed a "barcode reading robot" which autonomously moved in a warehouse, locating and reading barcode ID tags using a camera and a barcode reader while moving. However, motion blur caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce this blur. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of the barcoded boxes. We verified the effectiveness of our method in an experimental test.
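The relation between pan velocity, translational velocity, and target distance can be illustrated with simple geometry: for a camera translating at speed v parallel to a surface at perpendicular distance d, the pan rate that holds a target stationary in the image is ω = (v/d)·cos²θ, where θ is the pan angle measured from the perpendicular. A sketch under these simplifying assumptions (the paper's exact derivation may differ):

```python
import math

def pan_rate(v, d, theta_deg=0.0):
    """Pan angular velocity (rad/s) that keeps a point on a surface at
    perpendicular distance d stationary in the image while the robot
    translates at speed v parallel to the surface. Simplified geometric
    sketch; theta is the current pan angle from the perpendicular."""
    theta = math.radians(theta_deg)
    return (v / d) * math.cos(theta) ** 2

# Hypothetical values: 0.5 m/s translation, boxes 1.2 m away.
print(pan_rate(0.5, 1.2))         # ~0.42 rad/s looking straight at the shelf
print(pan_rate(0.5, 1.2, 30.0))   # slower rate at 30 degrees off-axis
```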
Desired machines: cinema and the world in its own image.
Canales, Jimena
2011-09-01
In 1895 when the Lumière brothers unveiled their cinematographic camera, many scientists were elated. Scientists hoped that the machine would fulfill a desire that had driven research for nearly half a century: that of capturing the world in its own image. But their elation was surprisingly short-lived, and many researchers quickly distanced themselves from the new medium. The cinematographic camera was soon split into two machines, one for recording and one for projecting, enabling it to further escape from the laboratory. The philosopher Henri Bergson joined scientists, such as Etienne-Jules Marey, who found problems with the new cinematographic order. Those who had worked to make the dream come true found that their efforts had been subverted. This essay focuses on the desire to build a cinematographic camera, with the purpose of elucidating how dreams and reality mix in the development of science and technology. It is about desired machines and their often unexpected results. The interplay between what "is" (the technical), what "ought" (the ethical), and what "could" be (the fantastical) drives scientific research.
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. We also evaluated algorithm efficacy on field test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
Automated face detection for occurrence and occupancy estimation in chimpanzees.
Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S
2017-03-01
Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances that have changed the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet most researchers inspect footage manually, and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimating site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a substantial efficiency gain that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to a lack of suitable face views can be easily overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will enable routine chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time compared to manual analysis and allows site use by chimpanzees to be estimated relatively reliably.
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, which provides it to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Prediction of the future location of other cars is a must in the context of advanced safety systems. Remote estimation of a car's pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems allow the 3D information of a scene to be recovered. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. Our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
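Recovering a car's pose from markers of known geometry with a calibrated camera corresponds to the standard Perspective-n-Point problem. A hedged sketch using OpenCV's solvePnP follows; the marker coordinates, pixel detections, intrinsics, and yaw convention below are hypothetical placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Known 3-D marker positions on the observed car, in the car's own
# reference frame (metres; values hypothetical).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.6, 0.0, 0.0],
                       [1.6, 0.0, 1.4],
                       [0.0, 0.0, 1.4]], dtype=np.float64)
# Their detected pixel positions in one video frame (hypothetical).
image_pts = np.array([[410.0, 300.0],
                      [620.0, 305.0],
                      [615.0, 180.0],
                      [405.0, 175.0]], dtype=np.float64)
# Intrinsics from a camera calibration tool (hypothetical values).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
heading_deg = np.degrees(np.arctan2(R[0, 2], R[2, 2]))  # one possible yaw convention
print(ok, heading_deg)
```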
Measurement of luminance noise and chromaticity noise of LCDs with a colorimeter and a color camera
NASA Astrophysics Data System (ADS)
Roehrig, H.; Dallas, W. J.; Krupinski, E. A.; Redford, Gary R.
2007-09-01
This communication focuses on the physical evaluation of the image quality of displays for applications in medical imaging. In particular we were interested in the luminance noise and chromaticity noise of LCDs. Luminance noise has been encountered in the study of monochrome LCDs for some time, but chromaticity noise is a new type of noise, first encountered when monochrome and color LCDs were compared in an ROC study. In the present study one color and one monochrome 3-megapixel LCD were studied. Both were DICOM-calibrated with equal dynamic range. We used a Konica Minolta Chroma Meter CS-200 as well as a Foveon color camera to estimate the luminance and chrominance variations of the displays. We also used a simulation experiment to estimate luminance noise. The measurements with the colorimeter were consistent. The measurements with the Foveon color camera were very preliminary, as color cameras had never been used for image quality measurements; they were nevertheless extremely promising. The measurements with the colorimeter and the simulation results showed that the luminance and chromaticity noise of the color LCD were larger than those of the monochrome LCD. Provided that an adequate calibration method and an image QA/QC program for color displays are available, we expect color LCDs may be ready for radiology in the very near future.
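One common way to quantify luminance noise is the relative standard deviation of luminance over a nominally uniform patch; the paper's exact metric may differ. A minimal sketch with synthetic data:

```python
import numpy as np

def luminance_noise(patch):
    """Relative luminance noise of a nominally uniform display patch,
    estimated as the standard deviation over the mean of the measured
    luminance samples (a common definition, assumed here)."""
    patch = np.asarray(patch, dtype=float)
    return patch.std() / patch.mean()

# Hypothetical camera measurement of a flat mid-grey field (cd/m^2).
rng = np.random.default_rng(1)
flat_field = 170.0 + rng.normal(0.0, 1.2, size=(64, 64))
print(f"relative luminance noise: {luminance_noise(flat_field):.4f}")
```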
TAUKAM: a new prime-focus camera for the Tautenburg Schmidt Telescope
NASA Astrophysics Data System (ADS)
Stecklum, Bringfried; Eislöffel, Jochen; Klose, Sylvio; Laux, Uwe; Löwinger, Tom; Meusinger, Helmut; Pluto, Michael; Winkler, Johannes; Dionies, Frank
2016-08-01
TAUKAM stands for "TAUtenburg KAMera", which will become the new prime-focus imager for the Tautenburg Schmidt telescope. It employs an e2v 6k×6k CCD and is being manufactured by Spectral Instruments Inc. We describe the design of the instrument and its auxiliary components, its specifications, and the concept for integrating the device into the telescope infrastructure. First light is foreseen in 2017. TAUKAM will boost the observational capabilities of the telescope for optical wide-field surveys.
Thermal physics in practice and its confrontation with school physics
NASA Astrophysics Data System (ADS)
Vochozka, Vladimír; Tesař, Jiří; Bednář, Vít
2017-01-01
Concepts such as heat, specific heat capacity and other terms of thermal physics are very abstract. For better understanding, it is necessary to include in teaching newly conceived experiments focused on the everyday experience of students. The paper evaluates thermal phenomena with the help of an infrared camera and surface temperature sensors for on-line measurement. The article focuses on the experimental verification of the law of conservation of energy in thermal physics, comparing the specific heat capacities of various substances and confronting the results with pupils' established experience.
VizieR Online Data Catalog: BVRI photometry of S5 0716+714 (Liao+, 2014)
NASA Astrophysics Data System (ADS)
Liao, N. H.; Bai, J. M.; Liu, H. T.; Weng, S. S.; Chen, L.; Li, F.
2016-04-01
The variability of S5 0716+714 was photometrically monitored in the optical bands at Yunnan Observatories, making use of the 2.4m telescope (http://www.gmg.org.cn/) and the 1.02m telescope (http://www1.ynao.ac.cn/~omt/). The 2.4m telescope, which began working in 2008 May, is located at the Lijiang Observatory of Yunnan Observatories, where the longitude is 100°01'51''E and the latitude is 26°42'32''N, with an altitude of 3193m. There are two photometric terminals. The PI VersArry 1300B CCD camera with 1340×1300 pixels covers a field of view of 4'48''×4'40'' at the Cassegrain focus. The readout noise and gain are 6.05 electrons and 1.1 electrons ADU⁻¹, respectively. The Yunnan Faint Object Spectrograph and Camera (YFOSC) has a field of view of about 10'×10' and 2000×2000 pixels for photometric observation. Each pixel corresponds to 0.283'' of the sky. The readout noise and gain of the YFOSC CCD are 7.5 electrons and 0.33 electrons ADU⁻¹, respectively. The 1.02m telescope is located at the headquarters of Yunnan Observatories and is mainly used for photometry with standard Johnson UBV and Cousins RI filters. An Andor CCD camera with 2048×2048 pixels has been installed at its Cassegrain focus since 2008 May. The readout noise and gain are 7.8 electrons and 1.1 electrons ADU⁻¹, respectively. (1 data file).
Wei, Hsiang-Chun; Su, Guo-Dung John
2012-01-01
Conventional camera modules with image sensors manipulate the focus or zoom by moving lenses. Although motors, such as voice-coil motors, can move the lens sets precisely, large volume, high power consumption, and long travel times are critical issues for motor-type camera modules. A deformable mirror (DM) provides a good opportunity to improve on these issues. The DM is a reflective optical component that can alter the optical power to focus light onto the two-dimensional image sensor, allowing the camera system to operate rapidly. Ionic polymer metal composite (IPMC) is a promising electro-actuated polymer material that can be used in micromachined devices because of its large deformation at low actuation voltage. We developed a convenient simulation model based on Young's modulus and Poisson's ratio. We divided the ion exchange polymer, also known as Nafion®, into two virtual layers in the simulation model: one expansive and the other contractive, driven by opposite constant surface forces on the two surfaces of the elements. The deformation of different IPMC shapes can therefore be described more easily. A standard experiment of voltage vs. tip displacement was used to verify the proposed model. Finally, a gear-shaped IPMC actuator was designed and tested. The optical power of the IPMC deformable mirror is experimentally demonstrated to be 17 diopters at two volts. The required voltage is about two orders of magnitude lower than that of conventional silicon deformable mirrors and about one order of magnitude lower than that of liquid lenses. PMID:23112648
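As a quick sanity check on the reported figure: for a mirror, optical power and radius of curvature are related by P = 2/R (focal length f = R/2), so 17 diopters corresponds to a radius of curvature of roughly 118 mm. A one-line worked calculation:

```python
# Mirror optics: power P = 2/R, focal length f = R/2 (values in metres).
P = 17.0                 # dioptres (from the abstract)
R = 2.0 / P              # radius of curvature
print(f"R = {R*1000:.0f} mm, f = {R/2*1000:.0f} mm")   # ~118 mm, ~59 mm
```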
NASA Astrophysics Data System (ADS)
Cucci, Costanza; Casini, Andrea; Stefani, Lorenzo; Picollo, Marcello; Jussila, Jouni
2017-07-01
For more than a decade, a number of studies and research projects have been devoted to customizing hyperspectral imaging techniques to the specific needs of conservation and applications in the museum context. A growing scientific literature has demonstrated the effectiveness of reflectance hyperspectral imaging for non-invasive diagnostics and high-quality documentation of 2D artworks. Additional published studies tackle the problems of data processing, with a focus on the development of algorithms and software platforms optimised for visualisation and exploitation of hyperspectral big data sets acquired on paintings. This scenario proves that, also in the field of Cultural Heritage (CH), reflectance hyperspectral imaging has now reached the stage of a mature technology and is ready for the transition from the R&D phase to large-scale applications. In view of that, a novel concept of hyperspectral camera - featuring compactness, lightness and good usability - has been developed by SPECIM, Spectral Imaging Ltd. (Oulu, Finland), a company manufacturing hyperspectral imaging products. The camera is proposed as a new tool for novel applications in the field of Cultural Heritage. The novelty of this device lies in its reduced dimensions and weight and in its user-friendly interface, which make it much more manageable and affordable than conventional hyperspectral instrumentation. The camera operates in the 400-1000 nm spectral range and can be mounted on a tripod. It can operate from short distances (tens of cm) to long distances (tens of meters) with different spatial resolutions. The first release of the prototype underwent preliminary in-depth testing at the IFAC-CNR laboratories. This paper illustrates the feasibility study carried out on the new SPECIM hyperspectral camera, tested under different conditions on laboratory targets and artworks, with the specific aim of defining its potential and weaknesses for use in the Cultural Heritage field.
SU-E-J-17: A Study of Accelerator-Induced Cerenkov Radiation as a Beam Diagnostic and Dosimetry Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, F; Tosh, R
2014-06-01
Purpose: To investigate accelerator-induced Cerenkov radiation imaging as a possible beam diagnostic and medical dosimetry tool. Methods: Cerenkov emission produced by clinical accelerator beams in a water phantom was imaged using a camera system comprised of a high-sensitivity thermoelectrically-cooled CCD camera coupled to a large aperture (f/0.75) objective lens with 16:1 magnification. This large format lens allows a significant amount of the available Cerenkov light to be collected and focused onto the CCD camera to form the image. Preliminary images, obtained with 6 MV photon beams, used an unshielded camera mounted horizontally with the beam normal to the water surface, and confirmed the detection of Cerenkov radiation. Several improvements were subsequently made including the addition of radiation shielding around the camera, and altering of the beam and camera angles to give a more favorable geometry for Cerenkov light collection. A detailed study was then undertaken over a range of electron and photon beam energies and dose rates to investigate the possibility of using this technique for beam diagnostics and dosimetry. Results: A series of images were obtained at a fixed dose rate over a range of electron energies from 6 to 20 MeV. The location of maximum intensity was found to vary linearly with the energy of the beam. A linear relationship was also found between the light observed from a fixed point on the central axis and the dose rate for both photon and electron beams. Conclusion: We have found that the analysis of images of beam-induced Cerenkov light in a water phantom has potential for use as a beam diagnostic and medical dosimetry tool. Our future goals include the calibration of the light output in terms of radiation dose and development of a tomographic system for 3D Cerenkov imaging in water phantoms and other media.
Kids, Cameras, and the Curriculum: Focusing on Learning in the Primary Grades
ERIC Educational Resources Information Center
Dragan, Pat Barrett
2008-01-01
The author demonstrates how simple snapshots can open new entryways into literacy for all children and help educators view teaching and learning in new ways, building classroom-wide and school-wide community. The book offers projects that help children see real-world possibilities in literate behaviors by using photos as a stimulus for literacy…
Survey of United States Commercial Satellites in Geosynchronous Earth Orbit
1994-09-01
[Table-of-contents fragment: a. Imaging Sensors - (1) Return Beam Vidicon Camera, (2) Scanners; b. Nonimaging Sensors; a. Imaging Microwave Sensors - (1) Synthetic Aperture Radar; b. Nonimaging Microwave Sensors - (1) Radar. Text fragment: "The stream of electrons travels along the axis of the tube, constrained by focusing magnets, until it reaches the collector. Surrounding this electron..."]
ISS EarthKam: Taking Photos of the Earth from Space
ERIC Educational Resources Information Center
Haste, Turtle
2008-01-01
NASA is involved in a project centered on the International Space Station (ISS) and an Earth-focused camera called EarthKAM, through which schools, and ultimately students, are allowed to remotely program the EarthKAM to take images. Here the author describes how EarthKAM was used to help middle school students learn about biomes and develop their…
New Project Details Low-Income Schools' Avenues to Success
ERIC Educational Resources Information Center
Sawchuk, Stephen
2008-01-01
The principal of Noyes Education Campus in the District of Columbia remembers when, in exchange for hefty staff bonuses for outstanding performance, the school hosted a team of visitors armed with video cameras, tape recorders, and piles of interview questions. The focus of their queries: To what did the school attribute its success? What had made…
Seeing with the Camera: Analysing Children's Photographs of Literacy in the Home.
ERIC Educational Resources Information Center
Moss, Gemma
2001-01-01
Examines the issues raised by photographs children took of reading in the home as part of a funded research project exploring the gendering of reading in the 7-9 age group. Focuses on the dilemmas the images pose for analysis, and what the images, considered in themselves, can be taken as evidence for. (SG)
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° × 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C. An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
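The quoted IFOV is consistent with the stated focal length and a ~12 μm pixel pitch (the pitch is inferred here, not stated in the abstract):

```python
import math

focal_length_mm = 42.0          # from the abstract
pixel_pitch_um = 12.0           # assumed; consistent with 0.28 mrad/pixel
ifov_mrad = pixel_pitch_um * 1e-3 / focal_length_mm * 1e3   # mrad per pixel
fov_deg = math.degrees(1024 * ifov_mrad * 1e-3)             # per 1024-pixel eye
print(f"IFOV ~ {ifov_mrad:.2f} mrad/pixel, FOV ~ {fov_deg:.1f} deg")  # ~0.29, ~16-17
```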
Enhancing Close-Up Image Based 3d Digitisation with Focus Stacking
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Chliverou, R.; Koutsoudis, A.; Pavlidis, G.; Georgopoulos, A.
2017-08-01
The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological feature structures, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow Depth of Field (DoF) affects the image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore an important task that should be incorporated into the data collection to attain a better quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, and especially with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
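Focus stacking itself can be sketched compactly: for each pixel, keep the frame in which the local sharpness (e.g., Laplacian response) is highest. The following is a minimal illustration with OpenCV, not the pipeline used in the paper; real workflows also align the frames first.

```python
import cv2
import numpy as np

def focus_stack(images):
    """Merge a stack of images focused at different depths by keeping,
    for each pixel, the frame with the highest local sharpness
    (Laplacian response). Minimal all-in-focus sketch, no alignment."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    # Blur slightly before the Laplacian to suppress sensor noise.
    sharp = [np.abs(cv2.Laplacian(cv2.GaussianBlur(g, (3, 3), 0), cv2.CV_64F))
             for g in grays]
    best = np.argmax(np.stack(sharp), axis=0)          # per-pixel frame index
    stack = np.stack(images)                           # (n, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                     # (H, W, 3) composite

# Hypothetical usage with a bracketed focus series:
# imgs = [cv2.imread(f"stack_{i:02d}.png") for i in range(12)]
# cv2.imwrite("all_in_focus.png", focus_stack(imgs))
```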
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Analog microscopes are currently widely used in fields such as medicine, animal husbandry, monitoring of technological objects, oceanography, and agriculture. An automatic method is preferred because it greatly reduces the work involved. Stepper motors move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, while images of microbiological objects are transferred from the eyepiece of the microscope to the computer screen. Scene analysis locates regions with pronounced abnormalities in order to focus the specialist's attention. This paper considers a method for stitching microbial images obtained with a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on analysis of the data located in the camera's field of view. We propose to use a neural network to search for the object boundaries. The stitching boundary is derived from analysis of the object borders. For autofocus, we use the criterion of minimum thickness of the object boundary lines, with the analysis performed on the object located on the focal axis of the camera. For objects shifted relative to the focal axis, we use a method of object border recovery and a projective transform of the object boundaries. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
Cooling the dark energy camera instrument
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, R.L.; Cease, H.; /Fermilab
2008-06-01
DECam, the camera for the Dark Energy Survey (DES), is undergoing general design and component testing. For an overview see DePoy, et al. in these proceedings. For a description of the imager, see Cease, et al. in these proceedings. The CCD instrument will be mounted at the prime focus of the CTIO Blanco 4m telescope. The instrument temperature will be 173 K with a heat load of 113 W. In similar applications, cooling CCD instruments at the prime focus has been accomplished by three general methods. Liquid nitrogen reservoirs have been constructed to operate in any orientation, pulse tube cryocoolers have been used when tilt angles are limited, and Joule-Thompson or Stirling cryocoolers have been used with smaller heat loads. Gifford-McMahon cooling has been used at the Cassegrain but not at the prime focus. For DES, the combined requirements of high heat load, temperature stability, low vibration, operation in any orientation, liquid nitrogen cost and limited available space led to the design of a pumped, closed-loop, circulating nitrogen system. At zenith the instrument will be twelve meters above the pump/cryocooler station. This cooling system is expected to have a 10,000 hour maintenance interval. This paper describes the engineering basis, including the thermal model, unbalanced forces, cooldown time, and the single- and two-phase flow models.
Small form-factor VGA camera with variable focus by liquid lens
NASA Astrophysics Data System (ADS)
Oikarinen, Kari A.; Aikio, Mika
2010-05-01
We present the design of a 24 mm long variable-focus lens for a 1/4" sensor. The chosen CMOS color sensor has VGA (640×480) resolution and a 5.6 μm pixel size. The lens utilizes one Varioptic Arctic 320 liquid lens, which has a voltage-controllable focal length due to the electrowetting effect; there are no mechanical moving parts. The principle of operation of the liquid lens is explained briefly. We discuss designing optical systems with this type of lens, including a modeling approach that allows entering a voltage value to modify the configuration of the liquid lens. The presented design consists only of spherical glass surfaces. The choice to use spherical surfaces was made in order to decrease manufacturing costs and provide more predictable performance through the better-established method. Fabrication tolerances are compensated by the adjustability of the liquid lens, further increasing the feasibility of manufacturing. The lens was manufactured and assembled into a demonstrator camera. It has an f-number of 2.5 and a 40 degree full field of view. The effective focal length varies around 6 millimeters as the liquid lens is adjusted. In simulations we have achieved a focus distance controllable between 20 millimeters and infinity. The design differs from previous approaches by having the aperture stop in the middle of the system instead of in front.
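The refocusing requirement can be estimated from the thin-lens equation: with the sensor fixed at the infinity-focus image distance, focusing on an object at distance u requires roughly 1/u extra diopters of system power. A sketch under this idealisation (the liquid lens itself contributes only part of this, depending on where it sits in the multi-element design):

```python
# Idealised single thin lens: 1/f = 1/u + 1/v, sensor fixed at v = f_inf
# (the image distance for infinity focus). Refocusing on an object at
# distance u then requires the system power to rise by 1/u (metres).
f_inf = 0.006                    # ~6 mm effective focal length (abstract)
u = 0.020                        # 20 mm closest focus distance (abstract)
delta_power = 1.0 / u            # extra dioptres at the system level
print(f"extra power needed: {delta_power:.0f} D")   # ~50 D in this idealisation
```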
Capturing exposures: using automated cameras to document environmental determinants of obesity.
Barr, Michelle; Signal, Louise; Jenkin, Gabrielle; Smith, Moira
2015-03-01
Children's exposure to food marketing across multiple everyday settings, a key environmental influence on health, has not yet been objectively documented. Wearable automated cameras (ACs) may have the potential to provide an objective account of this exposure. The purpose of this study is to assess the feasibility of using ACs to document children's exposure to food marketing in multiple settings. A convenience sample of six participants (aged 12) wore a SenseCam device for two full days. Participants then attended a focus group to ascertain their experiences of using the device. The collected data were analysed to determine participants' daily and setting-specific exposure to 'healthy' and 'unhealthy' food marketing (in minutes). The focus group transcript was analysed using thematic analysis to identify common themes. Participants collected usable data that could be analysed to determine their daily exposure (in minutes) to 'unhealthy' food marketing across a number of everyday settings. Results from the focus group discussion indicated that participants were comfortable wearing the device after an initial adjustment period. ACs may be an effective tool for documenting children's exposure to food marketing in multiple settings, and they provide a new method for documenting environmental determinants of obesity and likely other environmental impacts on health.
Lee, Kwan Woo; Yoon, Hyo Sik; Song, Jong Min; Park, Kang Ryoung
2018-03-23
Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems that detect aggressive driver emotion via smartphone accelerometers and gyro-sensors, or on methods that detect physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can become detached from the driver's body, it is difficult to rely on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, when driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving, using input images of the driver's face obtained with near-infrared (NIR) light and thermal camera sensors. In an experiment using our own database, the proposed method achieved high classification accuracy in detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving, and it demonstrates better performance than existing methods.
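A CNN for this kind of two-class decision can be sketched in a few lines; the architecture below is an illustrative placeholder, not the network described in the paper, and the input size is assumed.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Minimal two-class CNN (aggressive vs. smooth driving) over
    single-channel NIR or thermal face crops; an illustrative
    architecture only."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical batch of 8 single-channel 96x96 face crops.
logits = EmotionCNN()(torch.randn(8, 1, 96, 96))
print(logits.shape)  # torch.Size([8, 2])
```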
Estimating the gaze of a virtuality human.
Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob
2013-04-01
The aim of our experiment is to determine whether eye gaze can be estimated from a virtuality human to within the accuracies that underpin social interaction, and reliably across the gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. In the experiment, n = 22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were (1) the relative orientations of the eye, head and body of the captured subject, and (2) the subset of cameras used to texture the form. The analysis looked for statistical and practical significance and for qualitative corroborating evidence. The results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but that with the adopted method of Video Based Reconstruction (VBR) this accuracy is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes to the VBR approach to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly to those using or building Immersive Virtuality Telepresence to accomplish this. It is also relevant to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.
Smeets, Julien; Roellinghoff, Frauke; Janssens, Guillaume; Perali, Irene; Celani, Andrea; Fiorini, Carlo; Freud, Nicolas; Testa, Etienne; Prieels, Damien
2016-01-01
More and more camera concepts are being investigated to seize the opportunity for instantaneous range verification of proton therapy treatments offered by prompt gammas emitted along the proton tracks. Focusing on one-dimensional imaging with a passive collimator, the present study experimentally compared, in combination with the first clinically compatible dedicated camera device, instances of the two main options: a knife-edge slit (KES) and a multi-parallel slit (MPS) design. These two options were assessed experimentally in this specific context because previous analytical and numerical studies had shown them to offer similar performance in terms of Bragg peak retrieval precision and spatial resolution in a general context. Both collimators were prototyped according to the conclusions of Monte Carlo optimization studies, under constraints of equal weight (40 mm tungsten alloy equivalent thickness) and of the specificities of the camera device under consideration (in particular, 4 mm segmentation along the beam axis and no time-of-flight discrimination, both of which are less favorable to the MPS than to the KES). Acquisitions of proton pencil beams of 100, 160, and 230 MeV in a PMMA target revealed that, to reach a given level of statistical precision on Bragg peak depth retrieval, the KES collimator requires only half the dose the present MPS collimator needs, making the KES a preferred option for a compact camera device aimed at imaging only the Bragg peak position. On the other hand, the present MPS collimator proves more effective at retrieving the entrance of the beam into the target, in the context of an extended camera device aimed at imaging the whole proton track within the patient.
Culture and importance of backgrounds: a cross-cultural study of photograph taking.
Zhang, Jie; Li, Chen; Smithson, Adam; Spann, Ethan; Ruan, Fang
2010-10-01
To compare the focus on targeted people while taking a photograph, samples of American and Chinese college students were randomly selected and asked to take casual pictures of people around them with digital cameras. About 200 photographs were rated for the focus on the intended target in the picture. American students were more likely to focus on the targeted individual, while the Chinese students were more likely to attend to the background and the environment of the targeted individual. The findings imply that for the Chinese college students, the environment can be as important as the person; for Americans, the environment is possibly less important due to the more individualistic culture.
Isochromatic photoelasticity fringe patterns of PMMA in various shapes and stress applications
NASA Astrophysics Data System (ADS)
Manjit, Y.; Limpichaipanit, A.; Ngamjarurojana, A.
2018-03-01
The research focuses on isochromatic photoelastic fringe patterns in solid materials using a reflection-mode dark-field polariscope. The optical setup consists of a light source, polarizers, quarter-wave plates, a 577 nm optical bandpass filter, a compensator and a digital camera system. Fringe patterns were produced on the sample, and fractional and integer fringe orders were observed using a Babinet compensator and the digital camera system. The samples were circular and rectangular PMMA plates coated with silver spray and compressed by a hydraulic system at the top and the bottom. The isochromatic fringe patterns were analyzed at horizontal and vertical positions. It was found that the relation between applied force and isochromatic fringe order depended on the shape of the sample, reflecting its stress distribution behavior.
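The quantitative step behind such an analysis is the stress-optic law, which converts an observed fringe order into a principal-stress difference. A minimal sketch follows, with an assumed material fringe value and coating thickness (the abstract does not report these values).

    # Stress-optic law: sigma1 - sigma2 = N * f_sigma / t (transmission), or
    # N * f_sigma / (2t) in reflection, where light crosses the coating twice.
    f_sigma = 7.0e3   # material fringe value, N/m per fringe order (assumed)
    t = 3.0e-3        # coating/sample thickness, m (assumed)

    def principal_stress_difference(N, reflection=True):
        """Principal-stress difference in Pa from fringe order N."""
        path = 2 * t if reflection else t
        return N * f_sigma / path

    for N in (0.5, 1, 2, 3):   # fractional orders via a Babinet compensator
        print(N, principal_stress_difference(N), "Pa")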
Hanlon, John A.; Gill, Timothy J.
2001-01-01
Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by use of an autocollimator on a 3-axis mount, positioned so as to focus on a reference tooling ball or a machine tool; a digital camera connected to the viewing end of the autocollimator; and a marker-and-measure generator that receives digital images from the camera, displays or measures distances between the projection reticle and the reference reticle on the monitoring screen, and relates those distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool, to measure the size and shape of the machine tool tip, and to examine cutting-edge wear.
Theoretical performance model for single image depth from defocus.
Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme
2014-12-01
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study how the optical parameters of a conventional camera, such as the focal length, the aperture, and the position of the in-focus plane (IFP), influence performance. We derive an approximate analytical expression of the CRB away from the IFP and propose an interpretation of SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
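The qualitative trend can be reproduced with a crude scalar proxy: take the geometric blur diameter as the observable and bound the depth variance by the inverse squared sensitivity of blur to depth. The camera parameters and noise level below are assumed, and this proxy ignores the full image-formation model behind the paper's CRB.

    import numpy as np

    # Proxy: var(z_hat) >= sigma_eps^2 / (d eps / d z)^2 for a scalar blur
    # observation eps(z). Thin-lens geometry; all numbers illustrative.
    f = 25e-3                    # focal length, m (assumed)
    D = f / 2.8                  # aperture from an assumed f-number of 2.8
    z_focus = 2.0                # in-focus plane, m (assumed)
    s = 1 / (1/f - 1/z_focus)    # lens-sensor distance from the thin-lens law

    def blur_diameter(z):
        return D * s * abs(1/f - 1/s - 1/z)

    def crb_proxy(z, sigma_eps=1e-6, dz=1e-4):
        deriv = (blur_diameter(z + dz) - blur_diameter(z - dz)) / (2 * dz)
        return sigma_eps**2 / deriv**2

    for z in (0.5, 1.0, 1.9, 3.0, 5.0):
        print(f"z = {z:4.1f} m  ->  CRB proxy = {crb_proxy(z):.3e} m^2")

The proxy grows as the scene point moves away from the camera, matching the intuition that blur becomes a weaker depth cue at long range.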
The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope
NASA Astrophysics Data System (ADS)
Monfardini, Alessandro
2018-01-01
We have constructed and deployed a multi-thousand-pixel dual-band (150 and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera that images an instantaneous field of view of 6.5 arcmin and is configurable to map linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal-plane arrays based on Kinetic Inductance Detectors (KID), and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institute of Millimetric Radio Astronomy) telescope at Pico Veleta, together with preliminary science-grade results.
Apparatus and method for generating partially coherent illumination for photolithography
Sweatt, W.C.
1999-07-06
The present invention relates to an apparatus and method for creating a bright, uniform source of partially coherent radiation for illuminating a pattern, in order to replicate an image of said pattern with a high degree of acuity. The invention introduces a novel scatter plate into the optical path of the source light used for illuminating a replicated object. The scatter plate is designed to interrupt a focused, incoming light beam by means of about 8 to 24 diffraction zones blazed onto its surface, which intercept the light and redirect it to a like number of different positions in the condenser entrance pupil, each position determined by the relative orientation and the spatial frequency of the diffraction grating in each zone. Light falling onto the scatter plate therefore generates a plurality of unphased sources of illumination as seen by the back half of the optical system. The system includes a high-brightness source, such as a laser, whose light is taken up by a beam-forming optic that focuses it into a condenser, which in turn focuses the light into a field lens, creating a Kohler illumination image of the source in a camera entrance pupil. The light passing through the field lens illuminates a mask, which interrupts the source light as either a positive or negative image of the object to be replicated. Light passing the mask is focused into the entrance pupil of the lithographic camera, creating an image of the mask on a receptive medium. 7 figs.
Depth perception camera for autonomous vehicle applications
NASA Astrophysics Data System (ADS)
Kornreich, Philipp
2013-05-01
An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric information on the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, eliminating the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction and a pair of contacts along their length, with light-sensing elements distributed along the guide. The device uses ambient light that is coherent only within spherical-shell-shaped light packets one coherence length thick. Each frequency component of the broadband light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.
Developing Short Films of Geoscience Research
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Webley, P. W.; Dehn, J.; Harrild, M.; Kienenberger, D.; Salganek, M.
2015-12-01
With today's prevalence of social media and networking, video products are becoming increasingly useful for communicating research quickly and effectively to a diverse audience, including outreach activities as well as the research community and funding agencies. Due to the observational nature of geoscience, researchers often take photos and video footage to document fieldwork or to record laboratory experiments. Here we present how researchers can become more effective storytellers by collaborating with filmmakers to produce short documentary films of their research. We focus on the use of traditional high-definition (HD) camcorders and HD DSLR cameras to record the scientific story, while our research topic centers on remote sensing techniques, specifically the thermal infrared imaging often used to analyze time-varying natural processes such as volcanic hazards. By capturing the story in the thermal infrared wavelength range, in addition to the traditional red-green-blue (RGB) color space, the audience is able to experience the world differently. We will develop a short film, specifically designed around thermal infrared cameras, that illustrates how visual storytellers can use these new tools to capture unique and important aspects of their research, convey their passion for earth systems science, and engage and captivate the viewer.
Auto-Focused on Details in Yellowjacket on Mars
2015-05-22
This image from the Chemistry and Camera (ChemCam) instrument on NASA's Curiosity Mars rover shows detailed texture of a rock target called "Yellowjacket" on Mars' Mount Sharp. This was the first rock target for ChemCam after checkout of restored capability for autonomous focusing. The image covers a patch of rock surface about 2.5 inches (6 centimeters) across. It was taken on May 15, 2015, during the mission's 986th Martian day, or sol. ChemCam's Remote Micro-Imager camera, on top of Curiosity's mast, captured the image from a distance of about 8 feet (2.4 meters). ChemCam also hit the target with laser pulses and recorded spectrographic information from the resulting flashes to reveal the chemical composition. Yellowjacket, located near an area called "Logan Pass" on lower Mount Sharp, is a layered sedimentary rock. The laser analysis yielded a composition very close to that of Mars soil and unlike the lakebed sedimentary compositions observed at lower elevations earlier in the mission. The soil-like composition may indicate that the rock formed from sediment transported by wind, rather than by water. http://photojournal.jpl.nasa.gov/catalog/PIA19661
NASA Astrophysics Data System (ADS)
Takahashi, Tadayuki; Mitsuda, Kazuhisa; Kelley, Richard; Aarts, Henri; Aharonian, Felix; Akamatsu, Hiroki; Akimoto, Fumie; Allen, Steve; Anabuki, Naohisa; Angelini, Lorella; Arnaud, Keith; Asai, Makoto; Audard, Marc; Awaki, Hisamitsu; Azzarello, Philipp; Baluta, Chris; Bamba, Aya; Bando, Nobutaka; Bautz, Mark; Blandford, Roger; Boyce, Kevin; Brown, Greg; Cackett, Ed; Chernyakova, Mara; Coppi, Paolo; Costantini, Elisa; de Plaa, Jelle; den Herder, Jan-Willem; DiPirro, Michael; Done, Chris; Dotani, Tadayasu; Doty, John; Ebisawa, Ken; Eckart, Megan; Enoto, Teruaki; Ezoe, Yuichiro; Fabian, Andrew; Ferrigno, Carlo; Foster, Adam; Fujimoto, Ryuichi; Fukazawa, Yasushi; Funk, Stefan; Furuzawa, Akihiro; Galeazzi, Massimiliano; Gallo, Luigi; Gandhi, Poshak; Gendreau, Keith; Gilmore, Kirk; Haas, Daniel; Haba, Yoshito; Hamaguchi, Kenji; Hatsukade, Isamu; Hayashi, Takayuki; Hayashida, Kiyoshi; Hiraga, Junko; Hirose, Kazuyuki; Hornschemeier, Ann; Hoshino, Akio; Hughes, John; Hwang, Una; Iizuka, Ryo; Inoue, Yoshiyuki; Ishibashi, Kazunori; Ishida, Manabu; Ishimura, Kosei; Ishisaki, Yoshitaka; Ito, Masayuki; Iwata, Naoko; Iyomoto, Naoko; Kaastra, Jelle; Kallman, Timothy; Kamae, Tuneyoshi; Kataoka, Jun; Katsuda, Satoru; Kawahara, Hajime; Kawaharada, Madoka; Kawai, Nobuyuki; Kawasaki, Shigeo; Khangaluyan, Dmitry; Kilbourne, Caroline; Kimura, Masashi; Kinugasa, Kenzo; Kitamoto, Shunji; Kitayama, Tetsu; Kohmura, Takayoshi; Kokubun, Motohide; Kosaka, Tatsuro; Koujelev, Alex; Koyama, Katsuji; Krimm, Hans; Kubota, Aya; Kunieda, Hideyo; LaMassa, Stephanie; Laurent, Philippe; Lebrun, Francois; Leutenegger, Maurice; Limousin, Olivier; Loewenstein, Michael; Long, Knox; Lumb, David; Madejski, Grzegorz; Maeda, Yoshitomo; Makishima, Kazuo; Marchand, Genevieve; Markevitch, Maxim; Matsumoto, Hironori; Matsushita, Kyoko; McCammon, Dan; McNamara, Brian; Miller, Jon; Miller, Eric; Mineshige, Shin; Minesugi, Kenji; Mitsuishi, Ikuyuki; Miyazawa, Takuya; Mizuno, Tsunefumi; Mori, Hideyuki; Mori, Koji; Mukai, Koji; Murakami, Toshio; Murakami, Hiroshi; Mushotzky, Richard; Nagano, Hosei; Nagino, Ryo; Nakagawa, Takao; Nakajima, Hiroshi; Nakamori, Takeshi; Nakazawa, Kazuhiro; Namba, Yoshiharu; Natsukari, Chikara; Nishioka, Yusuke; Nobukawa, Masayoshi; Nomachi, Masaharu; O'Dell, Steve; Odaka, Hirokazu; Ogawa, Hiroyuki; Ogawa, Mina; Ogi, Keiji; Ohashi, Takaya; Ohno, Masanori; Ohta, Masayuki; Okajima, Takashi; Okamoto, Atsushi; Okazaki, Tsuyoshi; Ota, Naomi; Ozaki, Masanobu; Paerels, Fritzs; Paltani, Stéphane; Parmar, Arvind; Petre, Robert; Pohl, Martin; Porter, F. Scott; Ramsey, Brian; Reis, Rubens; Reynolds, Christopher; Russell, Helen; Safi-Harb, Samar; Sakai, Shin-ichiro; Sameshima, Hiroaki; Sanders, Jeremy; Sato, Goro; Sato, Rie; Sato, Yohichi; Sato, Kosuke; Sawada, Makoto; Serlemitsos, Peter; Seta, Hiromi; Shibano, Yasuko; Shida, Maki; Shimada, Takanobu; Shinozaki, Keisuke; Shirron, Peter; Simionescu, Aurora; Simmons, Cynthia; Smith, Randall; Sneiderman, Gary; Soong, Yang; Stawarz, Lukasz; Sugawara, Yasuharu; Sugita, Hiroyuki; Sugita, Satoshi; Szymkowiak, Andrew; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takeda, Shin-ichiro; Takei, Yoh; Tamagawa, Toru; Tamura, Takayuki; Tamura, Keisuke; Tanaka, Takaaki; Tanaka, Yasuo; Tashiro, Makoto; Tawara, Yuzuru; Terada, Yukikatsu; Terashima, Yuichi; Tombesi, Francesco; Tomida, Hiroshi; Tsuboi, Yohko; Tsujimoto, Masahiro; Tsunemi, Hiroshi; Tsuru, Takeshi; Uchida, Hiroyuki; Uchiyama, Yasunobu; Uchiyama, Hideki; Ueda, Yoshihiro; Ueno, Shiro; Uno, Shinichiro; Urry, Meg; Ursino, Eugenio; de Vries, Cor; Wada, Atsushi; Watanabe, Shin; Werner, Norbert; White, Nicholas; Yamada, Takahiro; Yamada, Shinya; Yamaguchi, Hiroya; Yamasaki, Noriko; Yamauchi, Shigeo; Yamauchi, Makoto; Yatsu, Yoichi; Yonetoku, Daisuke; Yoshida, Atsumasa; Yuasa, Takayuki
2012-09-01
The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated by the Institute of Space and Astronautical Science (ISAS). ASTRO-H will investigate the physics of the high-energy universe via a suite of four instruments covering a very wide energy range, from 0.3 keV to 600 keV. These instruments include a high-resolution, high-throughput spectrometer sensitive over 0.3-12 keV with high spectral resolution of ΔE ≤ 7 eV, enabled by a micro-calorimeter array located in the focal plane of thin-foil X-ray optics; hard X-ray imaging spectrometers covering 5-80 keV, located in the focal plane of multilayer-coated, focusing hard X-ray mirrors; a wide-field imaging spectrometer sensitive over 0.4-12 keV, with an X-ray CCD camera in the focal plane of a soft X-ray telescope; and a non-focusing Compton-camera-type soft gamma-ray detector, sensitive in the 40-600 keV band. The simultaneous broad bandpass, coupled with high spectral resolution, will enable the pursuit of a wide variety of important science themes.
DuOCam: A Two-Channel Camera for Simultaneous Photometric Observations of Stellar Clusters
NASA Astrophysics Data System (ADS)
Maier, Erin R.; Witt, Emily; Depoy, Darren L.; Schmidt, Luke M.
2017-01-01
We have designed the Dual Observation Camera (DuOCam), which uses commercial, off-the-shelf optics to perform simultaneous photometric observations of astronomical objects at red and blue wavelengths. Collected light enters DuOCam’s optical assembly, where it is collimated by a negative doublet lens. It is then separated by a 45 degree blue dichroic filter (transmission bandpass: 530 - 800 nm, reflection bandpass: 400 - 475 nm). Finally, the separated light is focused by two identical positive doublet lenses onto two independent charge-coupled devices (CCDs), the SBIG ST-8300M and the SBIG STF-8300M. This optical assembly converts the observing telescope to an f/11 system, which balances maximum field of view with optimum focus. DuOCam was commissioned on the McDonald Observatory 0.9m, f/13.5 telescope from July 21st - 24th, 2016. Observations of three globular and three open stellar clusters were carried out. The resulting data were used to construct R vs. B-R color magnitude diagrams for a selection of the observed clusters. The diagrams display the characteristic evolutionary track for a stellar cluster, including the main sequence and main sequence turn-off.
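The color-magnitude diagram step reduces to converting each channel's measured flux to an instrumental magnitude and plotting R against B-R. A minimal sketch follows, with placeholder fluxes and assumed zero points standing in for real aperture photometry of the two CCDs.

    import numpy as np
    import matplotlib.pyplot as plt

    zp_b, zp_r = 21.0, 20.5                    # assumed photometric zero points
    flux_b = np.random.lognormal(6, 1, 500)    # stand-in for blue-channel counts
    flux_r = flux_b * np.random.lognormal(0.3, 0.2, 500)

    # Instrumental magnitudes: m = -2.5 log10(flux) + zero point
    mag_b = -2.5 * np.log10(flux_b) + zp_b
    mag_r = -2.5 * np.log10(flux_r) + zp_r

    plt.scatter(mag_b - mag_r, mag_r, s=4)
    plt.gca().invert_yaxis()                   # brighter stars plotted higher
    plt.xlabel("B - R"); plt.ylabel("R")
    plt.title("Color-magnitude diagram (simulated fluxes)")
    plt.show()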
MO-AB-206-00: Nuclear Medicine Physics and Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual, with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation, members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.
Quantifying biodiversity using digital cameras and automated image analysis.
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
2009-04-01
Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m) populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered ways to simplify the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step towards a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
A lens-focusing system using a hardware model of a retina (a Reticon RL256 light-sensitive array) with a low-cost processor (an 8085 with 512 bytes of ROM and 512 bytes of RAM) was built. The system was developed and tested on a variety of visual stimuli to demonstrate that: (a) an algorithm that moves a lens to maximize the sum of the differences of light levels on adjacent light sensors converges to best focus in all but contrived situations, and is a simpler algorithm than any previously suggested; (b) it is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use, and in the future software could extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing; (c) lateral inhibition is an adequate basis for determining best focus, supporting a simple anatomically motivated model of how our brain focuses our eyes.
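A minimal sketch of the focus metric and hill-climb loop as described, where read_sensor() and move_lens() are assumed interfaces standing in for the Reticon array and the lens drive.

    import numpy as np

    def adjacent_difference_metric(row):
        """Sum of absolute differences between adjacent sensor elements;
        maximized at best focus per the algorithm described above."""
        return np.abs(np.diff(np.asarray(row, dtype=float))).sum()

    def focus(read_sensor, move_lens, step=1, max_iter=100):
        """Hill-climb the lens position until the metric stops improving
        in both directions. read_sensor() returns a 1-D array of light
        levels; move_lens(n) nudges the lens by n steps."""
        best = adjacent_difference_metric(read_sensor())
        direction = step
        for _ in range(max_iter):
            move_lens(direction)
            m = adjacent_difference_metric(read_sensor())
            if m <= best:                  # worse: undo the step
                move_lens(-direction)
                if direction != step:      # already tried the other way
                    break                  # local maximum reached
                direction = -step
            else:
                best = m
        return best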
OCAMS: The OSIRIS-REx Camera Suite
NASA Astrophysics Data System (ADS)
Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.
2018-02-01
The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.
Maximum likelihood estimation in calibrating a stereo camera setup.
Muijtjens, A M; Roos, J M; Arts, T; Hasman, A
1999-02-01
Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Routinely, the position measurement system is calibrated by registering images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of camera position after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by applying a nonlinear least-squares procedure. Compared to the standard unweighted least-squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture was larger than 0.1 rad. The angle between the two viewing directions was the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reducing the error in this parameter.
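Under Gaussian image noise the ML estimate reduces to a weighted nonlinear least-squares fit, as the abstract notes. The sketch below shows that step with scipy; the two-parameter toy projection model is a placeholder for the paper's five-parameter geometry, which is not reproduced here.

    import numpy as np
    from scipy.optimize import least_squares

    def project(params, points_3d):
        """Toy projection: in-plane rotation by params[0] plus a scaled
        perspective divide; stands in for the real camera-pair model."""
        a, scale = params
        c, si = np.cos(a), np.sin(a)
        x = c * points_3d[:, 0] - si * points_3d[:, 1]
        y = si * points_3d[:, 0] + c * points_3d[:, 1]
        return scale * np.column_stack([x, y]) / points_3d[:, 2:3]

    def residuals(params, points_3d, measured_uv, sigma):
        # Dividing by sigma makes least squares the ML estimate (Gaussian noise)
        return ((project(params, points_3d) - measured_uv) / sigma).ravel()

    rng = np.random.default_rng(0)
    pts = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
    true = np.array([0.1, 800.0])
    uv = project(true, pts) + rng.normal(0, 0.5, (20, 2))
    fit = least_squares(residuals, x0=[0.0, 700.0], args=(pts, uv, 0.5))
    print(fit.x)    # recovers roughly [0.1, 800]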
Improvement of passive THz camera images
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw
2012-10-01
Terahertz technology is one of the emerging technologies with the potential to change our lives, with many attractive applications in fields such as security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or most importantly an unexploited, area of the electromagnetic spectrum, owing to the difficulties in generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials, and the THz frequency band is especially suited to seeing through clothing because this radiation has no harmful ionizing effects and is thus safe for human beings. Nevertheless, automated processing of THz images remains challenging: even though THz camera development is an active topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image-quality enhancement and image fusion applied to images captured by a commercially available passive THz camera, using various combined methods. Our research focuses on the detection of dangerous objects: guns, knives and bombs hidden under common types of clothing.
Marcus, Inna; Tung, Irene T; Dosunmu, Eniolami O; Thiamthat, Warakorn; Freedman, Sharon F
2013-12-01
To compare anterior segment findings identified in young children using digital photographic images from the Lytro light field camera with those observed clinically. This was a prospective study of children <9 years of age with an anterior segment abnormality. Clinically observed anterior segment examination findings for each child were recorded, and several digital images of the anterior segment of each eye were captured with the Lytro camera. The images were later reviewed by a masked examiner. Sensitivity of Lytro imaging for abnormal examination findings was calculated with the clinical examination as the gold standard. A total of 157 eyes of 80 children (mean age, 4.4 years; range, 0.1-8.9) were included. Clinical examination revealed 206 anterior segment abnormalities altogether: lids/lashes (n = 21 eyes), conjunctiva/sclera (n = 28 eyes), cornea (n = 71 eyes), anterior chamber (n = 14 eyes), iris (n = 43 eyes), and lens (n = 29 eyes). Review of Lytro photographs of eyes with clinically diagnosed anterior segment abnormalities correctly identified 133 of 206 (65%) of all abnormalities. Additionally, 185 abnormalities in 50 children were documented at examination under anesthesia. The Lytro camera was able to document most abnormal anterior segment findings in unsedated young children. Its unique ability to allow focus change after image capture is a significant improvement on prior technology.
VizieR Online Data Catalog: XMM-Newton and Chandra monitoring of Sgr A* (Ponti+, 2015)
NASA Astrophysics Data System (ADS)
Ponti, G.; de, Marco B.; Morris, M. R.; Merloni, A.; Munoz-Darias, T.; Clavel, M.; Haggard, D.; Zhang, S.; Nandra, K.; Gillessen, S.; Mori, K.; Neilsen, J.; Rea, N.; Degenaar, N.; Terrier, R.; Goldwurm, A.
2018-01-01
As of 2014 November 11 the XMM-Newton archive contains 37 public observations that can be used for our analysis of Sgr A*. In addition, we consider four new observations aimed at monitoring the interaction between the G2 object and Sgr A*, performed in fall 2014 (see Table A4). A total of 41 XMM-Newton data sets are considered in this work. All the 46 Chandra observations accumulated between 1999 and 2011 and analysed here are obtained with the ACIS-I camera without any gratings on (see Table A1). From 2012 onwards, data from the ACIS-S camera were also employed. The 2012 Chandra "X-ray Visionary Project" (XVP) is composed of 38 High-Energy Transmission Grating (HETG) observations with the ACIS-S camera at the focus (Nowak et al. 2012ApJ...759...95N; Neilsen et al. 2013ApJ...774...42N; 2015ApJ...799..199N; Wang et al. 2013Sci...341..981W; see Table A2). The first two observations of the 2013 monitoring campaign were performed with the ACIS-I instrument, while the ACIS-S camera was employed in all the remaining observations, after the outburst of SGR J1745-2900 on 2013 April 25. Three observations between 2013 May and July were performed with the HETG on, while all the remaining ones do not employ any gratings (see Table A2). (4 data files).
Integrating Microscopic Analysis into Existing Quality Assurance Processes
NASA Astrophysics Data System (ADS)
Frühberger, Peter; Stephan, Thomas; Beyerer, Jürgen
When technical goods, like mainboards and other electronic components, are produced, quality assurance (QA) is very important. To achieve this goal, different optical microscopes can be used to analyze a variety of specimens, combining the acquired sensor data into comprehensive information. In many industrial processes, cameras are used to examine these goods: they can inspect complete boards at once and offer a high level of accuracy when used for completeness checks. But when small defects, e.g. at soldered joints, need to be examined in detail, such wide-area cameras reach their limits, and microscopes with high magnification must be used to analyze the critical areas. Microscopes alone cannot fulfill this task within a limited time schedule, because microscopic analysis of complete motherboards of a certain size is time-demanding; microscopes are limited in their depth of field and depth of focus, which is why additional components like XY moving tables must be used to examine the complete surface. Yet today's industrial production quality standards require 100% inspection of the soldered components within a given time schedule. This level of quality, while keeping inspection time low, can only be achieved by combining multiple inspection devices in an optimized manner. This paper presents results and methods for combining industrial cameras with microscopy in a classification-based approach, intended to keep already deployed QA processes in place while extending them, with the purpose of increasing the quality of the produced technical goods while maintaining high throughput.
SOFIA tracking image simulation
NASA Astrophysics Data System (ADS)
Taylor, Charles R.; Gross, Michael A. K.
2016-09-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) tracking camera simulator is a component of the Telescope Assembly Simulator (TASim). TASim is a software simulation of the telescope optics, mounting, and control software. Currently in its fifth major version, TASim is relied upon for telescope operator training, mission planning and rehearsal, and mission control and science instrument software development and testing. TASim has recently been extended for hardware-in-the-loop operation in support of telescope and camera hardware development and of control and tracking software improvements. All three SOFIA optical tracking cameras are simulated, including the Focal Plane Imager (FPI), which has recently been upgraded to the status of a science instrument that can be used on its own or in parallel with one of the seven infrared science instruments. The simulation includes tracking camera image simulation of starfields based on the UCAC4 catalog at real-time rates of 4-20 frames per second. For its role in training and planning, it is important for the tracker image simulation to provide images with a realistic appearance and response to changes in operating parameters. For its role in tracker software improvements, it is vital to have realistic signal and noise levels and precise star positions. The design of the software simulation for precise subpixel starfield rendering (including radial distortion), realistic point-spread functions as a function of focus, tilt, and collimation, and streaking due to telescope motion will be described. The calibration of the simulation for light sensitivity, dark and bias signal, and noise will also be presented.
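A sketch of the subpixel rendering idea: integrate a Gaussian stand-in for the PSF over each pixel with error functions, so a star centered between pixels lands with the correct subpixel weighting. The real simulator's PSF dependence on focus, tilt, and collimation, and its radial-distortion warp, are not modeled here.

    import numpy as np
    from scipy.special import erf

    def render_stars(shape, stars, sigma=1.2):
        """stars: iterable of (x, y, flux) in pixel coordinates; sigma is
        the assumed Gaussian PSF width in pixels."""
        img = np.zeros(shape)
        ys, xs = np.arange(shape[0]), np.arange(shape[1])
        s2 = sigma * np.sqrt(2)
        for x0, y0, flux in stars:
            # Fraction of the PSF falling inside each pixel column and row
            gx = 0.5 * (erf((xs + 0.5 - x0) / s2) - erf((xs - 0.5 - x0) / s2))
            gy = 0.5 * (erf((ys + 0.5 - y0) / s2) - erf((ys - 0.5 - y0) / s2))
            img += flux * np.outer(gy, gx)     # separable 2-D Gaussian
        return img

    frame = render_stars((32, 32), [(10.3, 15.7, 1000.0), (20.0, 8.5, 250.0)])
    print(frame.sum())   # close to the total injected flux of 1250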
Monitoring tigers with confidence.
Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark
2010-12-01
With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low-density tiger populations (e.g. <1 adult tiger/100 km²), obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management interventions.
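For readers unfamiliar with the occupancy approach, the sketch below fits the basic single-season model by maximum likelihood: psi is the probability that a sampling unit (grid cell) is occupied and p the per-survey detection probability, with an all-zero detection history explained either by true absence or by repeated missed detections. The detection histories here are toy data, not survey results from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(theta, histories):
        psi, p = 1 / (1 + np.exp(-np.asarray(theta)))   # logit -> (0, 1)
        ll = 0.0
        for h in histories:                  # h: 0/1 detections per visit
            k, n = h.sum(), len(h)
            if k > 0:                        # detected at least once
                ll += np.log(psi) + k * np.log(p) + (n - k) * np.log(1 - p)
            else:                            # never detected: occupied but
                ll += np.log(psi * (1 - p) ** n + 1 - psi)   # missed, or absent
        return -ll

    histories = [np.array(h) for h in
                 ([1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0])]
    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(histories,))
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")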
Electrowetting-Based Variable-Focus Lens for Miniature Systems
NASA Astrophysics Data System (ADS)
Hendriks, B. H. W.; Kuiper, S.; van As, M. A. J.; et al.
The meniscus between two immiscible liquids of different refractive indices can be used as a lens. A change in curvature of this meniscus by electrostatic control of the solid/liquid interfacial tension leads to a change in focal distance. It is demonstrated that two liquids in a tube form a self-centred variable-focus lens. The optical properties of this lens were investigated experimentally. We designed and constructed a miniature camera module based on this variable lens suitable for mobile applications. Furthermore, the liquid lens was applied in a Blu-ray Disc optical recording system to enable dual layer disc reading/writing.
A passive autofocus system by using standard deviation of the image on a liquid lens
NASA Astrophysics Data System (ADS)
Rasti, Pejman; Kesküla, Arko; Haus, Henry; Schlaak, Helmut F.; Anbarjafari, Gholamreza; Aabloo, Alvo; Kiefer, Rudolf
2015-04-01
Today most applications, such as cell phones, tablets and medical devices, include a small camera, and a micro lens is required in order to reduce the size of such devices. In this paper an autofocus system is used to find the best position of a liquid lens without any active components such as ultrasonic or infrared rangefinders. A passive autofocus system is proposed that uses the standard deviation of the images formed by a liquid lens consisting of a Dielectric Elastomer Actuator (DEA) membrane between oil and water.
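A minimal sketch of the proposed passive scheme, assuming a simple sweep-and-argmax search: score each frame by the standard deviation of its pixel values and drive the lens to the best-scoring actuation level. set_actuator() and grab_frame() are assumed hardware interfaces.

    import numpy as np

    def sharpness(frame):
        """Standard deviation of pixel values: higher at better focus."""
        return float(np.std(frame))

    def autofocus(set_actuator, grab_frame, levels):
        """Sweep the DEA drive levels, then settle on the sharpest one."""
        scores = []
        for v in levels:
            set_actuator(v)
            scores.append(sharpness(grab_frame()))
        best = levels[int(np.argmax(scores))]
        set_actuator(best)
        return best, scores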
Study of the Anatomy of the X-Ray and Neutron Production Scaling Laws in the Plasma Focus.
1980-05-15
plasma focus discharge in deuterium as an extension of our previous work on the scaling laws of x-ray and neutron production. The structure of dense plasmoids which emit MeV ions has been recorded by ion imaging with a pinhole camera and contact-print techniques. The plasmoids are generated in the same region in which particle beams, neutron emission and x-ray emission reach maximum intensity. Sharply defined boundaries of the ion-beam source and of the plasmoids have been obtained by ion-track etching on plastic material.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full-scale 1024-pixel 100 Megaframes/s fiber-coupled camera with 12- or 14-bit depth and record lengths of 32K frames, exceeding the original performance objectives. This high-pixel-count, fiber-optically coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The response from outside plasma physics researchers emphasized increased per-pixel performance as a higher priority than increased pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels that allowed up to 1024 pixels to be constructed. The cost per channel was $53.31, very close to the original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples at bit depths of 12 to 14 bits per pixel; longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed, constructed, and used to take movies of very fast moving plasma jets as a demonstration of the camera's capabilities. Faulty electrical components on the 64 circuit boards left 1008 of the 1024 channels functional on this first-generation prototype. Backlit high-speed fan blades were imaged in initial camera testing, followed by full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s); jet structure and jet collisions with metal pillars placed in the path of the jets were recorded in single shots. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events with existing techniques is inefficient or impossible. Development proceeded on two tracks: a next-generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024-channel camera at its own facility, and a plasma-community beta-test track, in which selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full-scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the test-pixel deployments.
Universal ICT Picosecond Camera
NASA Astrophysics Data System (ADS)
Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.
1989-06-01
The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: the time-analyzing ICT PIM-107 [1] with an S-11 cathode, and the brightness amplifier PMU-2V (gain about 10⁴) for the image shaped by the first tube. The camera is based on the AGAT-SF3 streak camera [2], with almost the same power sources but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the immediate vicinity of the photocathode, an accelerating electrode made of a fine-structure grid is located. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover lies 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multi-aperture framing diaphragm 7. Behind every diaphragm aperture is a pair of deflecting plates 8, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two sawtooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either passed toward the screen or stopped by the diaphragm walls. In this manner, three frames are obtained, corresponding to the number of diaphragm apertures. Plates 10 serve for stopping the compensation of the image streak sweep on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen. By changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap them; this control is independent of the frame rate and frame duration, and determines only the frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover, where the electron trajectories cross, a frame is not decomposed into separate elements during its formation. The image is transferred onto the screen during practically the entire frame duration, increasing the aperture ratio of the tube compared to that in Ref. 3.
Vision-Based People Detection System for Heavy Machine Applications
Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick
2016-01-01
This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.
Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun
2015-08-31
Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
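A sketch of the two-stage estimate, assuming OpenCV's conventional PnP solver for the initial pose and a generic optimizer for the Mahalanobis refinement; the per-feature covariances are taken as given by the probabilistic map, and this is not the paper's implementation.

    import numpy as np
    import cv2
    from scipy.optimize import minimize

    def refine_pose(obj_pts, img_pts, K, Sigmas):
        """obj_pts: (N, 3) float64 map points; img_pts: (N, 2) float64 query
        features; K: 3x3 intrinsics; Sigmas: list of N 2x2 feature covariances."""
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)  # initial pose

        def cost(x):
            proj, _ = cv2.projectPoints(obj_pts, x[:3].reshape(3, 1),
                                        x[3:].reshape(3, 1), K, None)
            r = proj.reshape(-1, 2) - img_pts
            # Sum of squared Mahalanobis distances over all correspondences
            return sum(ri @ np.linalg.inv(S) @ ri for ri, S in zip(r, Sigmas))

        x0 = np.concatenate([rvec.ravel(), tvec.ravel()])
        res = minimize(cost, x0, method="Nelder-Mead")
        return res.x[:3], res.x[3:]             # refined rvec, tvec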
Adaptive illumination source for multispectral vision system applied to material discrimination
NASA Astrophysics Data System (ADS)
Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.
2008-04-01
A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application focuses on material discrimination for the food and beverage industries, where monochrome, color and infrared imaging have been successfully applied to this task. This work proposes a different approach, in which the wavelengths relevant to the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on Light Emitting Diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response across the entire range of the selected wavelengths. Finally, the multispectral planes obtained are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled, specific illumination achieves multispectral imaging with a simple monochrome camera and cold illumination restricted to the relevant wavelengths, which is desirable in the food and beverage industry. The proposed system has been tested successfully for the automatic detection of foreign objects in the tobacco processing industry.
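The SAM step itself is compact: each pixel's spectral vector (one image plane per LED wavelength) is scored against reference spectra by the angle between the vectors, and the smallest angle wins. A minimal sketch follows; the reference spectra are placeholders, not measured values from the paper.

    import numpy as np

    def sam_classify(cube, refs):
        """cube: (H, W, B) stack of B LED-illuminated planes.
        refs: (C, B) reference spectrum per material class.
        Returns an (H, W) map of class indices."""
        h, w, b = cube.shape
        x = cube.reshape(-1, b).astype(float)
        x /= np.linalg.norm(x, axis=1, keepdims=True) + 1e-12
        r = refs / (np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12)
        angles = np.arccos(np.clip(x @ r.T, -1.0, 1.0))   # (H*W, C)
        return angles.argmin(axis=1).reshape(h, w)

    cube = np.random.rand(4, 4, 5)                        # toy 5-band image
    refs = np.array([[.9, .8, .7, .2, .1],                # e.g. tobacco
                     [.1, .2, .7, .8, .9]])               # e.g. foreign object
    print(sam_classify(cube, refs))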
Passive radiation detection using optically active CMOS sensors
NASA Astrophysics Data System (ADS)
Dosiek, Luke; Schalk, Patrick D.
2013-05-01
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
Addressing challenges of modulation transfer function measurement with fisheye lens cameras
NASA Astrophysics Data System (ADS)
Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura
2015-03-01
Modulation transfer function (MTF) is a well-defined and accepted method of measuring image sharpness. The slanted-edge test, as defined in ISO 12233, is a standard method of calculating MTF, and is widely used for lens alignment and auto-focus algorithm verification. However, there are a number of challenges which should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, planarity of the optical plane is difficult to achieve in fisheye lenses. It is therefore critical to have the ability to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles, depending on the location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees. Outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
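For readers unfamiliar with the slanted-edge computation, a minimal sketch of its core pipeline (oversampled edge profile, derivative, windowed FFT); this follows the spirit of ISO 12233 but is not the standard's full projection-and-binning procedure:

import numpy as np

def mtf_from_edge(esf):
    # Differentiate the oversampled edge spread function to get the
    # line spread function, window it, and take the FFT magnitude.
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(lsf.size)        # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # normalize so MTF(0) = 1

# Usage: esf = supersampled profile across a slanted edge (e.g. 4x binning);
# np.fft.rfftfreq(esf.size) gives the frequency axis in cycles/sample.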
The gamma-ray Cherenkov telescope for the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Tibaldo, L.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kraus, M.; Lapington, J. S.; Laporte, P.; Lefaucheur, J.; Markoff, S.; Melse, T.; Mohrmann, L.; Molyneux, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayède, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Trichard, C.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium
2017-01-01
The Cherenkov Telescope Array (CTA) is a forthcoming ground-based observatory for very-high-energy gamma rays. CTA will consist of two arrays of imaging atmospheric Cherenkov telescopes in the Northern and Southern hemispheres, and will combine telescopes of different types to achieve unprecedented performance and energy coverage. The Gamma-ray Cherenkov Telescope (GCT) is one of the small-sized telescopes proposed for CTA to explore the energy range from a few TeV to hundreds of TeV with a field of view ≳ 8° and angular resolution of a few arcminutes. The GCT design features dual-mirror Schwarzschild-Couder optics and a compact camera based on densely pixelated photodetectors as well as custom electronics. In this contribution we provide an overview of the GCT project, with a focus on the prototype development and testing currently ongoing. We present results obtained during the first on-telescope campaign in late 2015 at the Observatoire de Paris-Meudon, during which we recorded the first Cherenkov images from atmospheric showers with the GCT multi-anode photomultiplier camera prototype. We also discuss the development of a second GCT camera prototype with silicon photomultipliers as photosensors, and plans toward a contribution to the realisation of CTA.
A 2.5m astronomical telescope project
NASA Astrophysics Data System (ADS)
Phaichith, Oudomsanith
2008-07-01
The paper reports a recently started project for a 2.5-meter-diameter robotic telescope dedicated to astronomy and education for the University of Moscow's Sternberg Institute. As prime contractor, Sagem Defense Securite's REOSC department will take on the program design as well as the production of the optical components. The project includes the Alt-Az mount, the dome with its cooling and air stabilization system, the weather station, the high-resolution camera, and the realization, transport and installation on-site at the Kislovodsk solar station in the Caucasus mountains, as well as initial training for the operators. The telescope will provide a wide field of view of 40 arcmin at the Cassegrain F/8 focus. A retractable, rotating tertiary mirror will allow the light to be directed to the two Nasmyth foci and to two student ports located at 90° from the Nasmyth foci. A 4k x 4k CCD camera cryogenically cooled to 140 K will be provided as a first-light camera. All will be delivered by the end of 2009. Remotely controlled via the Internet, the telescope will allow Russia to train doctoral students in astronomy, participate in international research projects, and draw up the specifications of a future, larger and more advanced telescope.
Extended depth of field system for long distance iris acquisition
NASA Astrophysics Data System (ADS)
Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao
2012-10-01
Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and high stability. The texture of the iris provides a signature that is unique to each subject. Currently, most commercial iris recognition systems acquire images at distances of less than 50 cm, a serious constraint that must be overcome for applications such as airport access control or other entrances requiring a high turnover rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, allowing a working distance between the subject and the camera of over 3 m with a 500 mm focal length at F/6.3. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics, while keeping sufficient recognition accuracy.
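A minimal numerical sketch of the wavefront-coding idea the system relies on; the pupil grid, the cubic coefficient alpha, and the noise-to-signal ratio are assumed values for illustration, not the authors' design parameters:

import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

def psf(defocus_waves, alpha=30.0):
    # Pupil phase = defocus term W20*(x^2 + y^2) plus the cubic mask
    # alpha*(x^3 + y^3), both expressed in waves.
    phase = 2 * np.pi * (defocus_waves * (X**2 + Y**2) + alpha * (X**3 + Y**3))
    field = pupil * np.exp(1j * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return h / h.sum()

def wiener_restore(blurred, h, nsr=1e-3):
    # Because the coded PSF is nearly defocus-invariant, one decoding
    # filter built from a single PSF restores images over the whole
    # extended focal range.
    H = np.fft.fft2(np.fft.ifftshift(h), s=blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))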
Validation of geometric models for fisheye lenses
NASA Astrophysics Data System (ADS)
Schneider, D.; Schwalbe, E.; Maas, H.-G.
The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object-space precision obtained from the bundle adjustment was ca. 1:10,000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view and an inferior imaging quality in comparison to most central perspective lenses.
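The four basic projection models under investigation have standard closed forms relating the incidence angle θ to the radial image distance r; a small sketch with a worked example (the f and θ values are our own, not the paper's):

import numpy as np

# Standard textbook fisheye projections: angle theta (radians) -> radial
# image distance r for focal length f.
def equidistant(theta, f):    return f * theta
def equisolid(theta, f):      return 2 * f * np.sin(theta / 2)
def stereographic(theta, f):  return 2 * f * np.tan(theta / 2)
def orthographic(theta, f):   return f * np.sin(theta)

# e.g. at the edge of a 180-degree fisheye (theta = 90 deg) with f = 8 mm:
# equidistant 12.57 mm, equisolid 11.31 mm, stereographic 16.0 mm,
# orthographic 8.0 mm, which is why the models must be distinguished.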
NASA Astrophysics Data System (ADS)
Druart, Guillaume; Matallah, Noura; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Jenouvrier, Pierre; Mallet, Eric; Reibel, Yann
2014-06-01
Today, both military and civilian applications require miniaturized optical systems in order to give an imaging capability to vehicles with small payload capacity. After the development of megapixel focal plane arrays (FPAs) with micro-sized pixels, this miniaturization will become feasible with the integration of optical functions in the detector area. In the field of cooled infrared imaging systems, the detector area is the Detector-Dewar-Cooler Assembly (DDCA). SOFRADIR and ONERA have launched a new research and innovation partnership, called OSMOSIS, to develop disruptive DDCA technologies that improve the performance and compactness of optronic systems. Through this collaboration, we will break down the technological barriers of the DDCA, a sealed and cooled environment dedicated to infrared detectors, to explore Dewar-level integration of optics. This technological breakthrough will bring more compact multipurpose thermal imaging products, as well as new thermal capabilities such as 3D imagery or multispectral imagery. Previous developments will be recalled (the SOIE and FISBI cameras) and new developments will be presented. In particular, we will focus on a dual-band MWIR-LWIR camera and a multichannel camera.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
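The range argument above rests on the standard rectified-stereo relation z = f·B/d; a small sketch (the focal length, baseline, and disparity values are illustrative, not the system's specification):

import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Rectified stereo: z = f * B / d, with f in pixels, B in meters,
    # d in pixels; zero disparity maps to infinite range.
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# e.g. f = 800 px, B = 0.3 m: a 6 px disparity puts a target at 40 m,
# which is why image resolution directly drives detection range.
print(depth_from_disparity(6, 800.0, 0.3))   # -> 40.0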
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the psychological effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
High speed Infrared imaging method for observation of the fast varying temperature phenomena
NASA Astrophysics Data System (ADS)
Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong
With recent improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material that was heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of HIFU transducers were tested, and the predicted temperature increases were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.
Forming images with thermal neutrons
NASA Astrophysics Data System (ADS)
Vanier, Peter E.; Forman, Leon
2003-01-01
Thermal neutrons passing through air have scattering lengths of about 20 meters. At greater distances, the majority of neutrons emanating from a moderated source will scatter multiple times in the air before being detected, and will not retain information about the location of the source, except that their density will fall off somewhat faster than 1/r². However, a significant fraction of the neutrons will travel 20 meters or more without scattering and can be used to create an image of the source. A few years ago, a proof-of-principle "camera" was demonstrated that could produce images of a scene containing sources of thermalized neutrons and could locate a source comparable in strength to an improvised nuclear device at ranges over 60 meters. The instrument makes use of a coded aperture with a uniformly redundant array of openings, analogous to those used in x-ray and gamma cameras. The detector is a position-sensitive He-3 proportional chamber, originally used for neutron diffraction. A neutron camera has many features in common with cameras designed for other non-focusable radiation, as well as some important differences. Potential applications include detecting nuclear smuggling, locating non-metallic land mines, assaying nuclear waste, and surveying for health physics purposes.
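The coded-aperture reconstruction mentioned here is, in textbook form, a correlation of the detector image with a balanced decoding array; a minimal sketch under that standard scheme (not the instrument's actual processing chain):

import numpy as np

def decode(detector_image, mask):
    # Balanced decoding array: +1 where the URA mask is open, -1 where
    # closed; the URA's near-delta autocorrelation makes the mask/decoder
    # cross-correlation recover the source scene.
    G = np.where(mask > 0, 1.0, -1.0)
    # Circular cross-correlation via the FFT (arrays must share a shape).
    rec = np.fft.ifft2(np.fft.fft2(detector_image) * np.conj(np.fft.fft2(G)))
    return np.real(rec)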
Monitoring and Modeling the Impact of Grazers Using Visual, Remote and Traditional Field Techniques
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Marshall, I. W.; Rose, R. J.
2009-04-01
The relationship between wild and domestic animals and the landscape they graze upon is important to soil erosion studies because they are a strong influence on vegetation cover (a key control on the rate of overland flow runoff), and also because the grazers contribute directly to sediment transport via carriage and indirectly by exposing fresh soil through trampling and burrowing/excavating. Quantifying the impacts of these effects on soil erosion, and their dependence on grazing intensity, in complex semi-natural habitats has proved difficult. This is due to a lack of manpower to collect sufficient data and to weak standardization of data collection between observers. The advent of cheaper and more sophisticated digital camera technology and GPS tracking devices has led to an increase in the amount of habitat monitoring information that is being collected. We report on the use of automated trail cameras to continuously capture images of grazer (sheep, rabbits, deer) activity in a variety of habitats at the Moor House nature reserve in northern England. As well as grazer activity, these cameras also give valuable information on key climatic soil erosion factors such as snow, rain and wind, and on plant growth, and thus allow the importance of a range of grazer activities and the grazing intensity to be estimated. GPS collars and more well-established survey methods (erosion monitoring, dung counting and vegetation surveys) are being used to generate a detailed representation of land usage and to plan camera siting. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the data processing time and increase focus on important subsets of the collected data. We also present a land usage model that estimates grazing intensity, grazer behaviours and their impact on soil coverage at sites where cameras have not been deployed, based on generalising from camera sites to other sites with similar morphology and ecology, where the GPS tracks indicate similar levels of grazer activity. This is ongoing research, with results continually feeding back into the data collection regimes in terms of camera placement. This work makes a valuable contribution to the debate about the dynamics of grazing behaviour and its impact on soil erosion.
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. A lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges: A) definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor; B) optimization of image acquisition and the geometric stability of the image block; C) identification of a small camera suitable for precise measurements in the field; D) optimization of the workflow from image acquisition to preparation of images for stereo measurements; E) introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and were available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times, over different periods of time, on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as its interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than for the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight unmanned aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other cameras that were available was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy achieved in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m³ volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students through this technique. The technique does not trade precision for ease of use; successful users of the presented method can therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
Overview of Athena Microscopic Imager Results
NASA Technical Reports Server (NTRS)
Herkenhoff, K.; Squyres, S.; Arvidson, R.; Bass, D.; Bell, J., III; Bertelsen, P.; Cabrol, N.; Ehlmann, B.; Farrand, W.; Gaddis, L.
2005-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on an extendable arm, the Instrument Deployment Device (IDD). The MI acquires images at a spatial resolution of 31 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras, but its optics yield a field of view of 32 × 32 mm across a 1024 × 1024-pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. The MI science objectives, instrument design and calibration, operation, and data processing were described by Herkenhoff et al. Initial results of the MI experiment on both MER rovers (Spirit and Opportunity) have been published previously. Highlights of these and more recent results are described.
NASA Astrophysics Data System (ADS)
Naqvi, Rizwan Ali; Park, Kang Ryoung
2016-06-01
Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.
STS-109 Crew Interviews - Altman
NASA Technical Reports Server (NTRS)
2002-01-01
STS-109 crew Commander Scott D. Altman is seen during a prelaunch interview. He answers questions about his inspiration to become an astronaut and his career path. He gives details on the mission's goals and significance, which are all related to maintenance of the Hubble Space Telescope (HST). After the Columbia Orbiter's rendezvous with the HST, extravehicular activities (EVA) will be focused on several important tasks which include: (1) installing the Advanced Camera for Surveys; (2) installing a cooling system on NICMOS (Near Infrared Camera Multi-Object Spectrometer); (3) repairing the reaction wheel assembly; (4) installing additional solar arrays; (5) augmenting the power control unit; (6) working on the HST's gyros. The reaction wheel assembly task, a late addition to the mission, may necessitate the abandonment of one or more of the other tasks, such as the gyro work.
Upgrading and testing program for narrow band high resolution planetary IR imaging spectrometer
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Rappaport, S.
1977-01-01
An imaging spectrometer, intended primarily for observations of the outer planets, which utilizes an acoustically tuned optical filter (ATOF) and a charge-coupled device (CCD) television camera, was modified to improve spatial resolution and sensitivity. The upgraded instrument has a spatial resolving power of approximately 1 arc second, as defined by an f/7 beam at the CCD position, and it maintains this resolution over the 50 arc second field of view. Less vignetting occurs and the sensitivity is four times greater. The spectral resolution of 15 Å over the wavelength interval 6500 Å - 11,000 Å is unchanged. Mechanical utility has been increased by the use of a honeycomb optical table, mechanically rigid yet adjustable optical component mounts, and a camera focus translation stage. The upgraded instrument was used to observe Venus and Saturn.
ASPIRE - Airborne Spectro-Polarization InfraRed Experiment
NASA Astrophysics Data System (ADS)
DeLuca, E.; Cheimets, P.; Golub, L.; Madsen, C. A.; Marquez, V.; Bryans, P.; Judge, P. G.; Lussier, L.; McIntosh, S. W.; Tomczyk, S.
2017-12-01
Direct measurements of coronal magnetic fields are critical for taking the next step in active region and solar wind modeling and for building the next generation of physics-based space-weather models. We are proposing a new airborne instrument to make these key observations. Building on the successful Airborne InfraRed Spectrograph (AIR-Spec) experiment for the 2017 eclipse, we will design and build a spectro-polarimeter to measure the coronal magnetic field during the 2019 South Pacific eclipse. The new instrument will use the AIR-Spec optical bench and the proven pointing, tracking, and stabilization optics. A new cryogenic spectro-polarimeter will be built, focusing on the strongest emission lines observed during the eclipse. The AIR-Spec IR camera, slit-jaw camera and data acquisition system will all be reused. The poster will outline the optical design and the science goals for ASPIRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats interact with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals respond to and interact with wind turbines. This project focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird-event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance-sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field.
Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market. Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
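For context, OpenCV ships a dedicated fisheye (equidistant-based) camera model of the kind this rectification work targets; a minimal rectification sketch with placeholder intrinsics K and distortion coefficients D (not the project's calibration values):

import cv2
import numpy as np

K = np.array([[1200.0, 0, 2176], [0, 1200.0, 1424], [0, 0, 1]])  # placeholder intrinsics
D = np.array([0.05, -0.01, 0.002, -0.0005])                      # placeholder k1..k4

def rectify(img, balance=0.0):
    # balance = 0 crops to the well-rectified center; balance -> 1 keeps
    # more of the field but leaves more residual distortion at the
    # corners, matching the limitation the abstract reports.
    h, w = img.shape[:2]
    newK = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    m1, m2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), newK, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, m1, m2, interpolation=cv2.INTER_LINEAR)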
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)
1990-01-01
System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is effected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.
ERIC Educational Resources Information Center
Laursen, Sandra L.; Brickley, Annette
2011-01-01
Scientists' involvement in education has increased in recent years due to mechanisms such as the National Science Foundation's "broader impacts" expectations for research projects. The best investment of their effort lies in sharing their expertise on the nature and processes of science; film is one medium by which this can be done…
JPRS Report Science & Technology Japan
1989-06-02
Electronics •Superconducting Wiring in LSI •One-Wafer Computer •Josephson Devices •SQUID Devices •Infrared Sensor •Magnetic Sensor •Superconducting... Guinier-de Wolff monochromatic focusing camera (CoKα radiation) and with a Philips APD-10 auto-powder diffractometer (CuKα radiation). Pure Si was used as... crystallized and smooth surface. The values indicated in Fig. 2 were the thickness monitored by a quartz oscillating sensor located near the
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muraqtore, J. J.; Heinick, James T.
1994-01-01
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing schlieren system and recorded with a 100 frame/s color video camera.
ERIC Educational Resources Information Center
Adar, Fran; Delhaye, Michel; DaSilva, Edouard
2007-01-01
The evolution of Raman instrumentation from the time of the initial report of the phenomenon in 1928 to 2006 is discussed. The first instruments were prism-based spectrographs using lenses for collimation and focusing and the 21st century instruments are also spectrographs, but they use CCD cameras. The Lippmann filter technology that appears to…
ERIC Educational Resources Information Center
Collinson, Craig; Dunne, Linda; Woolhouse, Clare
2012-01-01
The focus of this article is to consider visual portrayals and representations of disability. The images selected for analysis came from online university prospectuses as well as a governmental guidance framework on the tuition of dyslexic students. Greater understanding, human rights and cultural change have been characteristic of much UK…
Lights, Camera, Spectroscope! The Basics of Spectroscopy Disclosed Using a Computer Screen
ERIC Educational Resources Information Center
Garrido-González, José J.; Trillo-Alcalá, María; Sánchez-Arroyo, Antonio J.
2018-01-01
The generation of secondary colors in digital devices by means of the additive red, green, and blue color model (RGB) can be a valuable way to introduce students to the basics of spectroscopy. This work has been focused on the spectral separation of secondary colors of light emitted by a computer screen into red, green, and blue bands, and how the…
Determining the Amount of Copper(II) Ions in a Solution Using a Smartphone
ERIC Educational Resources Information Center
Montangero, Marc
2015-01-01
When dissolving copper in nitric acid, copper(II) ions produce a blue-colored solution. It is possible to determine the concentration of copper(II) ions, focusing on the hue of the color, using a smartphone camera. A free app can be used to measure the hue of the solution, and with the help of standard copper(II) solutions, one can graph a…
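The hue readout itself is an ordinary RGB-to-HSV conversion; a tiny sketch using Python's standard library as a generic stand-in for the app mentioned in the article (the RGB values are illustrative):

import colorsys

def hue_degrees(r, g, b):
    # Standard RGB -> HSV conversion; hue is returned in degrees, and
    # hue vs. known copper(II) standards gives a calibration curve.
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return 360 * h

# e.g. averaged pixel values sampled from a photo of the solution:
print(hue_degrees(40, 90, 200))   # -> about 221 degrees for this blue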
NASA Astrophysics Data System (ADS)
Brasington, J.
2015-12-01
Over the last five years, Structure-from-Motion photogrammetry has dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, has led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales where relaxed logistics permit the use of dense ground control and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km², where it could offer a competitive alternative to landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of the sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. Results indicate that camera network design is a key determinant of model quality, and that standard aerial networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.
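The a-posteriori similarity (Helmert) transformation mentioned above has a closed-form least-squares solution (the Umeyama/Procrustes result); a minimal sketch, with variable names of our own choosing:

import numpy as np

def similarity_transform(model_pts, gcp_pts):
    # Solve for scale s, rotation R, translation t minimizing
    # sum ||gcp_i - (s R x_i + t)||^2 over Nx3 point arrays.
    mu_m, mu_g = model_pts.mean(0), gcp_pts.mean(0)
    A, B = model_pts - mu_m, gcp_pts - mu_g
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A**2).sum() * len(A)
    t = mu_g - s * R @ mu_m
    return s, R, t                  # model -> datum: y = s R x + t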
NASA Astrophysics Data System (ADS)
Brasington, James; James, Joe; Cook, Simon; Cox, Simon; Lotsari, Eliisa; McColl, Sam; Lehane, Niall; Williams, Richard; Vericat, Damia
2016-04-01
In recent years, 3D terrain reconstructions based on Structure-from-Motion photogrammetry have dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, has led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales (0.1-5 ha), where relaxed logistics permit the use of dense ground control networks and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km², where it could offer a competitive alternative to established landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of the sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of the quality of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. Results indicate that camera network design is a key determinant of model quality, and that standard aerial photogrammetric networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.
Lellis, William A.; Blakeslee, Carrie J.; Allen, Laurie K.; Molnia, Bruce F.; Price, Susan D.; Bristol, R. Sky; Stewart, Brent
2012-01-01
Between 2007 and 2010, National Park Service (NPS) staff at the Point Reyes National Seashore, California, collected over 300,000 photographic images of Drakes Estero from remotely operated wildlife monitoring cameras. The purpose of the systems was to obtain photographic data to help understand possible relationships between anthropogenic activities and Pacific harbor seal (Phoca vitulina richardsi) behavior and distribution. The value of the NPS photographs for use in assessing the frequency and impacts of seal disturbance and displacement in Drakes Estero has been debated. In September 2011, the NPS determined that the photographs did not provide meaningful information for development of a Draft Environmental Impact Statement (DEIS) for the Drakes Bay Oyster Company Special Use Permit. Limitations of the photographs included lack of study design, poor photographic quality, inadequate field of view, incomplete estuary coverage, camera obstructions, and weather limitations. The Marine Mammal Commission (MMC) reviewed the scientific data underpinning the Drakes Bay Oyster Company DEIS in November 2011 and recommended further analysis of the NPS photographs for use in characterizing rates and consequences of seal disturbance (Marine Mammal Commission, 2011). In response to that recommendation, the NPS asked the U.S. Geological Survey (USGS) to conduct an independent review of the photographs and render an opinion on the utility of the remote camera data for informing the environmental impact analyses included in the DEIS. In consultation with the NPS, we selected the 2008 photographic dataset for detailed evaluation because it covers a full harbor seal breeding season (March 1 to June 30), provides two fields of view (two cameras were deployed), and represents a time period when cameras were most consistently deployed and maintained. The NPS requested that the photographs be evaluated in absence of other data or information pertaining to seal and human activity in the estuary and that we focus on the extent to which the photographs could be used in understanding the relationship between human activity (including commercial oyster production) and harbor seal disturbance and distribution in the estuary.
Development of solid tunable optics for ultra-miniature imaging systems
NASA Astrophysics Data System (ADS)
Yongchao, Zou
This thesis focuses on the optimal design, fabrication and testing of solid tunable optics and explores their applications in miniature imaging systems. It starts with the numerical modelling of such lenses, followed by the optimum design method and an alignment tolerance analysis. A miniature solid tunable lens driven by a piezo actuator is then developed. To solve the problem of the limited maximum optical power and tuning range of conventional lens designs, a novel multi-element solid tunable lens is proposed and developed. Inspired by the Alvarez principle, a novel miniature solid tunable dual-focus lens, designed using freeform surfaces and driven by a micro-electro-mechanical-systems (MEMS) rotary actuator, is demonstrated. To explore the applications of these miniature solid tunable lenses, a miniature adjustable-focus endoscope and a compact adjustable-focus camera module are developed. The adjustable-focus capability of these two miniature imaging systems is fully demonstrated by electrically focusing on targets placed at different positions.
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not particularly aimed at large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing effects. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions. The results demonstrate that the proposed method performs much better both visually and quantitatively.
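A minimal sketch of SWT-based fusion with a generic choose-max detail rule (the paper's exact fusion rule and wavelet settings may differ), using the PyWavelets package; grayscale inputs with dimensions divisible by 2**level are assumed:

import numpy as np
import pywt

def fuse_swt(images, wavelet="db2", level=2):
    # Undecimated (stationary) wavelet decomposition of each source image.
    coeffs = [pywt.swt2(np.asarray(im, dtype=float), wavelet, level=level)
              for im in images]
    fused = []
    for lev in range(level):
        # Average the approximation subbands across sources.
        approx = np.mean([c[lev][0] for c in coeffs], axis=0)
        details = []
        for k in range(3):                       # H, V, D detail subbands
            stack = np.stack([c[lev][1][k] for c in coeffs])
            pick = np.abs(stack).argmax(axis=0)  # sharpest source wins
            details.append(np.take_along_axis(stack, pick[None], axis=0)[0])
        fused.append((approx, tuple(details)))
    return pywt.iswt2(fused, wavelet)

Unlike the decimated DWT, the SWT is shift-invariant, which is one common explanation for the reduced ringing the abstract reports.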
van Lankveld, Jacques J D M; van den Hout, Marcel A; Schouten, Erik G W
2004-08-01
Sexually functional (N=26) and sexually dysfunctional heterosexual men with psychogenic erectile disorder (N=23) viewed two sexually explicit videos. Performance demand was manipulated through verbal instruction that a substantial genital response was to be expected from the videos. Self-focused attention was manipulated by introducing a camera pointed at the participant. Dispositional self-consciousness was assessed by questionnaire. Performance demand was found to independently inhibit the genital response. No main effect of self-focus was found. Self-focus inhibited genital response in men scoring high on general and sexual self-consciousness traits, whereas it enhanced penile tumescence in low self-conscious men. Inhibition effects were found in both volunteers and patients. No interaction effects of performance demand and self-focus were found. Subjective sexual arousal in sexually functional men was highest in the self-focus condition. In sexually dysfunctional men, subjective sexual response proved dependent on locus of attention as well as presentation order.
The Athena Microscopic Imager on the Mars Exploration Rovers
NASA Astrophysics Data System (ADS)
Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Schwochert, M. A.
2002-12-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). Technically speaking, the "microscopic" imager is not a microscope: it has a fixed magnification of 0.4 and is intended to produce images that simulate a geologist's view when using a common hand lens. The MI uses the same electronics design as the other MER cameras, but has optics that yield a field of view of 31 × 31 mm. The MI will acquire images using only solar or skylight illumination of the target surface. A contact sensor will be used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Because the MI has a relatively small depth of field (±3 mm), a single MI image of a rough surface will contain both focused and unfocused areas. Coarse (~2 mm precision) focusing will be achieved by moving the IDD away from a target after the contact sensor is activated. Multiple images taken at various distances will be acquired to ensure good focus on all parts of rough surfaces. By combining a set of images acquired in this way, a completely focused image can be assembled. The MI optics will be protected from the martian environment by a dust cover. The dust cover includes a polycarbonate window that is tinted yellow to restrict the spectral bandpass to 500-700 nm and allow color information to be obtained by taking images with the dust cover open and closed. The MI will be used to image the same materials measured by other Athena instruments, as well as targets of opportunity (before rover traverses). The resulting images will be used to place other instrumental data in context and to aid in the petrologic interpretation of rocks and soils on Mars.
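The final compositing step described here is classic focus stacking; a minimal per-pixel sharpness-selection sketch (a generic method, not the mission's actual pipeline):

import numpy as np
from scipy import ndimage

def focus_stack(frames):
    # Per-pixel sharpness from the absolute Laplacian, smoothed locally,
    # then composite by taking each pixel from its sharpest frame.
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    sharp = np.stack([ndimage.uniform_filter(np.abs(ndimage.laplace(f)), size=9)
                      for f in stack])
    best = sharp.argmax(axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]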
PN-CCD camera for XMM: performance of high time resolution/bright source operating modes
NASA Astrophysics Data System (ADS)
Kendziorra, Eckhard; Bihler, Edgar; Grubmiller, Willy; Kretschmar, Baerbel; Kuster, Markus; Pflueger, Bernhard; Staubert, Ruediger; Braeuninger, Heinrich W.; Briel, Ulrich G.; Meidinger, Norbert; Pfeffermann, Elmar; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Holl, Peter; Kemmer, Josef; Soltau, Heike; von Zanthier, Christoph
1997-10-01
The pn-CCD camera is being developed as one of the focal plane instruments for the European Photon Imaging Camera (EPIC) on board the X-ray Multi-Mirror (XMM) mission, to be launched in 1999. The detector consists of four quadrants of three pn-CCDs each, which are integrated on one silicon wafer. Each CCD has 200 by 64 pixels (150 μm × 150 μm) with a depletion depth of 280 μm. One CCD of a quadrant is read out at a time, while the four quadrants can be processed independently of each other. In standard imaging mode the CCDs are read out sequentially every 70 ms. Observations of point sources brighter than 1 mCrab will be affected by photon pile-up. However, special operating modes can be used to observe bright sources up to 150 mCrab in timing mode with 30 microseconds time resolution, and very bright sources up to several Crab in burst mode with 7 microseconds time resolution. We have tested one quadrant of the EPIC pn-CCD camera at line energies from 0.52 keV to 17.4 keV at the Panter long-beam test facility, in the focus of the qualification mirror module for XMM. In order to test the time resolution of the system, a mechanical chopper was used to periodically modulate the beam intensity. Pulse periods down to 0.7 ms were generated. This paper describes the performance of the pn-CCD detector in timing and burst readout modes, with special emphasis on energy and time resolution.
Colorimetric detection for paper-based biosensing applications
NASA Astrophysics Data System (ADS)
Brink, C.; Joubert, T.-H.
2016-02-01
Research on affordable point-of-care health diagnostics is rapidly advancing [1]. Colorimetric biosensor applications are typically qualitative, but recently the focus has shifted to quantitative measurements [2,3]. Although numerous qualitative point-of-care (POC) health diagnostic devices are available, the challenge remains of developing a quantitative colorimetric array reader system that complies with the ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, Deliverable to end-users) principles of the World Health Organization [4]. This paper presents a battery-powered, 8-bit tonal resolution colorimetric sensor circuit for paper microfluidic assays using low-cost photo-detection circuitry and a low-power LED light source. A colorimetric 3×3-pixel array reader was developed for rural environments where resources and personnel are limited. The device sports an ultralow-power E-ink paper display. The colorimetric device includes integrated GPS functionality and EEPROM memory to log measurements with geo-tags for possible analysis of regional trends. The device competes with colour intensity measurement techniques using smartphone cameras, but proves to be a cheaper solution, compensating for the typical performance variations between cameras of different brands of smartphones. Inexpensive methods for quantifying bacterial assays have been shown using desktop scanners, which are not portable, and cameras, which suffer severely from changes in ambient light in different environments. Promising colorimetric detection results have been demonstrated using devices such as video cameras [5], digital colour analysers [6], flatbed scanners [7] or custom portable readers [8]. The major drawback of most of these methods is the need for specialized instrumentation and for image analysis on a computer.
AMICA (Antarctic Multiband Infrared CAmera) project
NASA Astrophysics Data System (ADS)
Dolci, Mauro; Straniero, Oscar; Valentini, Gaetano; Di Rico, Gianluca; Ragni, Maurizio; Pelusi, Danilo; Di Varano, Igor; Giuliani, Croce; Di Cianno, Amico; Valentini, Angelo; Corcione, Leonardo; Bortoletto, Favio; D'Alessandro, Maurizio; Bonoli, Carlotta; Giro, Enrico; Fantinel, Daniela; Magrin, Demetrio; Zerbi, Filippo M.; Riva, Alberto; Molinari, Emilio; Conconi, Paolo; De Caprio, Vincenzo; Busso, Maurizio; Tosti, Gino; Nucciarelli, Giuliano; Roncella, Fabio; Abia, Carlos
2006-06-01
The Antarctic Plateau offers unique opportunities for ground-based infrared astronomy. AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging from Dome C in the near- (1 - 5 μm) and mid- (5 - 27 μm) infrared wavelength regions. The camera consists of two channels, equipped with a Raytheon InSb 256 array detector and a DRS MF-128 Si:As IBC array detector, cryocooled at 35 and 7 K respectively. Cryogenic devices will move a filter wheel and a sliding mirror, used to feed the two detectors alternately. Fast control and readout, synchronized with the chopping secondary mirror of the telescope, will be required because of the large background expected at these wavelengths, especially beyond 10 μm. An environmental control system is needed to ensure the correct start-up, shut-down and housekeeping of the camera. The main technical challenge is represented by the extreme environmental conditions of Dome C (T about -90 °C, p around 640 mbar) and the need for complete automation of the overall system. AMICA will be mounted at the Nasmyth focus of the 80 cm IRAIT telescope and will perform survey-mode automatic observations of selected regions of the Southern sky. The first goal will be a direct estimate of the observational quality of this new, highly promising site for infrared astronomy. In addition, IRAIT, equipped with AMICA, is expected to provide a significant improvement in the knowledge of fundamental astrophysical processes, such as the late stages of stellar evolution (especially AGB and post-AGB stars) and star formation.
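To illustrate why the background grows so quickly beyond 10 μm, here is a small sketch evaluating the Planck radiance of a roughly 183 K (-90 °C) environment; the numbers are illustrative, not AMICA design values.

```python
# Hedged sketch: thermal background spectral radiance of a ~183 K
# environment (Dome C winter-like temperature) at several wavelengths.
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B_lambda in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

for lam_um in (2.2, 5.0, 10.0, 20.0):
    b = planck_radiance(lam_um * 1e-6, 183.0)
    print(f"{lam_um:5.1f} um : {b:.3e} W m^-2 sr^-1 m^-1")
```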
Taking on the Heat—a Narrative Account of How Infrared Cameras Invite Instant Inquiry
NASA Astrophysics Data System (ADS)
Haglund, Jesper; Jeppsson, Fredrik; Schönborn, Konrad J.
2016-10-01
Integration of technology, social learning and scientific models offers pedagogical opportunities for science education. A particularly interesting area is thermal science, where students often struggle with abstract concepts, such as heat. In taking on this conceptual obstacle, we explore how hand-held infrared (IR) visualization technology can strengthen students' understanding of thermal phenomena. Grounded in the Swedish physics curriculum and part of a broader research programme on educational uses of IR cameras, we have developed laboratory exercises around a thermal storyline, in conjunction with the teaching of a heat-flow model. We report a narrative analysis of how a group of five fourth-graders, facilitated by a researcher, predicts, observes and explains (POE) how the temperatures change when they pour hot water into a ceramic coffee mug and a thin plastic cup. Four chronological episodes are described and analysed as group interaction unfolded. Results revealed that the students engaged cognitively and emotionally with the POE task and, in particular, held a sustained focus on making observations and offering explanations for the scenarios. A compelling finding was the group's spontaneous generation of multiple "what-ifs" in relation to thermal phenomena, such as blowing on the water surface, or submerging a pencil into the hot water. This was followed by immediate interrogation with the IR camera, a learning event we label instant inquiry. The students' expressions largely reflected adoption of the heat-flow model. In conclusion, IR cameras could serve as an access point for even very young students to develop complex thermal concepts.
NASA Astrophysics Data System (ADS)
Kagoshima, Yasushi; Miyagawa, Takamasa; Kagawa, Saki; Takeda, Shingo; Takano, Hidekazu
2017-08-01
The intensity distribution in phase space of an X-ray synchrotron radiation beamline was measured using a pinhole camera method, in order to verify astigmatism compensation by a Fresnel zone plate focusing optical system. The beamline is equipped with a silicon double-crystal monochromator. The beam size and divergence at an arbitrary distance were estimated. It was found that the virtual source point differed substantially between the vertical and horizontal directions, which is probably caused by thermal distortion of the monochromator crystal. The result is consistent with our astigmatism compensation by inclining the Fresnel zone plate.
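For intuition, a minimal sketch of the measurement geometry, assuming uncorrelated Gaussian phase-space distributions: the RMS beam size at distance z from a virtual source of size σ0 and divergence σ' is Σ(z) = sqrt(σ0² + (zσ')²), and astigmatism appears as different best-fit source positions in the vertical and horizontal planes. All values below are invented.

```python
# Hedged sketch: beam size vs. distance for displaced virtual sources.
import math

def beam_size(z_m, sigma0_m, sigma_prime_rad):
    """Sigma(z) = sqrt(sigma0^2 + (z * sigma')^2)."""
    return math.sqrt(sigma0_m**2 + (z_m * sigma_prime_rad)**2)

for z in (10.0, 20.0, 30.0):                 # metres along the beamline
    v = beam_size(z, 30e-6, 5e-6)            # vertical: source at z = 0
    h = beam_size(z - 12.0, 60e-6, 20e-6)    # horizontal: source displaced
    print(f"z={z:4.0f} m  V={v*1e6:6.1f} um  H={h*1e6:6.1f} um")
```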
Volcano monitoring with an infrared camera: first insights from Villarrica Volcano
NASA Astrophysics Data System (ADS)
Rosas Sotomayor, Florencia; Amigo Ramos, Alvaro; Velasquez Vargas, Gabriela; Medina, Roxana; Thomas, Helen; Prata, Fred; Geoffroy, Carolina
2015-04-01
This contribution focuses on the first trials of nearly 24/7 monitoring of Villarrica volcano with an infrared camera. Results must be compared with other SO2 remote sensing instruments, such as DOAS and the UV camera, for the daytime measurements. Infrared remote sensing of volcanic emissions is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult, during volcanic crises, and at night. In recent years, a ground-based infrared camera (Nicair) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths, selected through the application of filters. Three Nicair1 (first model) cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. Several trials with the instruments have been performed on volcanoes in northern Chile and have shown that the retrieved SO2 concentrations and fluxes fall within the expected ranges. Measurements were also performed at Villarrica volcano, and a location to install a "fixed" camera was identified 8 km from the crater: a coffee house with electrical power, a wifi network, polite and committed owners, and a full view of the volcano summit. The first measurements are being made and processed in order to obtain full days and weeks of SO2 emissions, analyze data transfer and storage, improve remote control of the instrument and notebook in case of breakdown, and evaluate web-cam/GoPro support, all toward the goal of the project: to implement a fixed station to monitor and study Villarrica volcano with a Nicair1, integrating and comparing these results with other remote sensing instruments. This work also aims to strengthen bonds with the community by developing teaching material and giving talks that communicate volcanic hazards and other geoscience topics to the people who live "just around the corner" from one of the most active volcanoes in Chile.
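As a hedged illustration of the filter-based retrieval principle, here is a generic two-band differential-absorption sketch; this is not the actual Nicair algorithm, and the cross-section value is invented.

```python
# Hedged sketch: SO2 slant column from on-band / off-band radiance
# ratio, assuming Beer-Lambert attenuation against a uniform sky.
import numpy as np

SIGMA_SO2 = 1.0e-22  # assumed effective cross-section, m^2 per molecule

def slant_column(on_band, off_band):
    tau = -np.log(np.clip(on_band / off_band, 1e-6, None))
    return tau / SIGMA_SO2   # molecules per m^2

on = np.array([[0.82, 0.79], [0.88, 0.91]])   # illustrative radiances
off = np.ones_like(on)
print(slant_column(on, off))
```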
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved extremely challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data are also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
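The metric localization that makes this possible comes from the textbook stereo range relation; the sketch below is generic, not TYZX's proprietary pipeline.

```python
# Hedged sketch: depth from disparity for a rectified stereo pair.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """z = f * B / d: focal length in pixels, baseline in metres,
    disparity in pixels; returns depth in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 22 cm baseline, 14 px disparity -> 11 m
print(depth_from_disparity(700.0, 0.22, 14.0))
```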
NASA Astrophysics Data System (ADS)
Crone, T. J.; Knuth, F.; Marburg, A.
2016-12-01
A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatories Initiative's core instrument suite on the Cabled Array, a real-time fiber-optic data and power system that stretches from the Oregon coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long-timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water column and changes in high-temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
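A single-frame grab of the kind the user-side tools provide can be sketched with generic OpenCV calls; this is an illustration under assumed names, not the OOI toolset itself, and the file path is a placeholder.

```python
# Hedged sketch: extract one frame from an archived recording.
import cv2

def grab_frame(video_path, frame_index, out_png):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read frame {frame_index}")
    cv2.imwrite(out_png, frame)

grab_frame("CAMHD_recording.mov", 1200, "frame_1200.png")  # placeholder path
```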
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mlynar, J.; Weinzettl, V.; Imrisek, M.
2012-10-15
The contribution focuses on plasma tomography via the minimum Fisher regularisation (MFR) algorithm applied to data from the recently commissioned tomographic diagnostics on the COMPASS tokamak. The MFR expertise is based on previous applications at the Joint European Torus (JET), as exemplified in a new case study of plasma position analyses based on JET soft x-ray (SXR) tomographic reconstruction. Subsequent application of the MFR algorithm to COMPASS data from cameras with absolute extreme ultraviolet (AXUV) photodiodes disclosed a peaked radiating region near the limiter. Moreover, its time evolution indicates transient plasma edge cooling following a radial plasma shift. In the SXR data, MFR demonstrated that high-resolution plasma positioning independent of the magnetic diagnostics would be possible, provided that a proper calibration of the cameras against an x-ray source is undertaken.
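For readers unfamiliar with MFR, its objective in the commonly published form (a generic statement, not necessarily the exact discretization used on COMPASS) is

\[ \min_{g \ge 0} \;\; \frac{1}{2}\,\chi^2(g) \;+\; \alpha \int \frac{|\nabla g|^2}{g}\,\mathrm{d}V , \]

where g is the local emissivity, χ² measures the misfit between the measured line-integrated camera signals and those predicted from g, and the weight α is tuned so that χ² per degree of freedom is near unity; dividing the smoothing term by g penalizes gradients most strongly where the emissivity is low.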
NASA Astrophysics Data System (ADS)
Simoens, François; Meilhan, Jérôme; Nicolas, Jean-Alain
2015-10-01
Sensitive and large-format terahertz focal plane arrays (FPAs) integrated in compact, hand-held cameras that deliver real-time terahertz (THz) imaging are required for many application fields, such as non-destructive testing (NDT), security, and quality control in the food and agricultural products industry. Two uncooled THz array technologies being studied at CEA-Leti, bolometers and complementary metal-oxide-semiconductor (CMOS) field-effect transistors (FETs), are able to meet these requirements. This paper reviews the technological approaches followed and focuses on the latest modeling and performance analysis. The applicability of these arrays to NDT and security is then demonstrated with experimental tests. In particular, the high technological maturity of the THz bolometer camera is illustrated by fast scanning of a large field of view of opaque scenes, achieved in a complete body-scanner prototype.
NASA Astrophysics Data System (ADS)
Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian
2014-12-01
This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.
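The blade-selection capability amounts to phase-locking the camera trigger to the once-per-revolution signal; a back-of-envelope sketch follows, with blade counts and speeds invented for illustration.

```python
# Hedged sketch: trigger delay after the once-per-rev pulse so the
# camera fires on a chosen blade (edge_fraction sweeps leading ->
# trailing edge). All numbers are illustrative.
def trigger_delay_s(rpm, n_blades, blade_index, edge_fraction=0.0):
    rev_period_s = 60.0 / rpm
    return rev_period_s * (blade_index + edge_fraction) / n_blades

# blade 7 of 60, mid-chord, at 15,000 rpm -> 0.5 ms after the pulse
print(trigger_delay_s(rpm=15000.0, n_blades=60, blade_index=7,
                      edge_fraction=0.5))
```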
STS-109 Crew Interviews: Michael J. Massimino
NASA Technical Reports Server (NTRS)
2002-01-01
STS-109 Mission Specialist Michael J. Massimino is seen during a prelaunch interview. He answers questions about his inspiration to become an astronaut, his career path, and his most memorable experiences. He gives details on the mission's goals and objectives, which focus on the refurbishing of the Hubble Space Telescope, and his role in the mission. He explains the plans for the rendezvous of the Columbia Orbiter with the Hubble Space Telescope. He provides details and timelines for each of the planned Extravehicular Activities (EVAs), which include replacing the solar arrays, changing the Power Control Unit, installing the Advanced Camera for Surveys (ACS), and installing a new Cryocooler for the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). He also describes the break-out plan in place for these spacewalks. The interview ends with Massimino explaining the details of a late addition to the mission's tasks, which is to replace a reaction wheel on the Hubble Space Telescope.
Small-Scale Features in Pulsating Aurora
NASA Technical Reports Server (NTRS)
Jones, Sarah; Jaynes, Allison N.; Knudsen, David J.; Trondsen, Trond; Lessard, Marc
2011-01-01
A field study was conducted from March 12-16, 2002 using a narrow-field intensified CCD camera installed at Churchill, Manitoba. The camera was oriented along the local magnetic zenith, where small-scale black auroral forms are often visible. This analysis focuses on such forms occurring within a region of pulsating aurora. The observations show black forms with irregular shape and nonuniform drift with respect to the relatively stationary pulsating patches. The pulsating patches occur within a diffuse auroral background as a modulation of the auroral brightness in a localized region. The images analyzed show a decrease in the brightness of the diffuse background in the region of the pulsating patch at the beginning of the off-phase of the modulation. Throughout the off-phase, the brightness of the diffuse aurora gradually increases back to the average intensity. The time constant for this increase is measured as the first step toward determining the physical process.
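A time constant of this kind is typically extracted by fitting an exponential recovery to the background brightness; below is a minimal sketch with synthetic data (the model form is assumed, not quoted from the study).

```python
# Hedged sketch: fit I(t) = I_avg - dI * exp(-t / tau) to synthetic
# off-phase recovery data and report the time constant tau.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, i_avg, d_i, tau):
    return i_avg - d_i * np.exp(-t / tau)

t = np.linspace(0.0, 10.0, 50)                          # seconds
i = recovery(t, 100.0, 30.0, 2.5)
i += np.random.default_rng(0).normal(0.0, 1.0, t.size)  # noise
popt, _ = curve_fit(recovery, t, i, p0=(90.0, 20.0, 1.0))
print(f"tau ~= {popt[2]:.2f} s")
```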
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
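As a quick sanity check on the "nearly diffraction-limited" claim, the Airy spot diameter at f/0.9 and 850 nm is straightforward arithmetic:

```python
# Hedged sketch: diffraction-limited (Airy) spot diameter.
def airy_diameter_um(wavelength_um, f_number):
    return 2.44 * wavelength_um * f_number

print(airy_diameter_um(0.85, 0.9))  # ~1.9 um at f/0.9, 850 nm
```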
Microenergetic Shock Initiation Studies on Deposited Films of PETN
NASA Astrophysics Data System (ADS)
Tappan, Alexander S.; Wixom, Ryan R.; Trott, Wayne M.; Long, Gregory T.; Knepper, Robert; Brundage, Aaron L.; Jones, David A.
2009-12-01
Films of the high explosive PETN (pentaerythritol tetranitrate) up to 500 μm thick have been deposited through physical vapor deposition, with the intent of creating well-defined samples for shock-initiation studies. PETN films were characterized with microscopy, x-ray diffraction, and focused ion beam nanotomography. These high-density films were subjected to strong shocks in both the out-of-plane and in-plane orientations. Initiation behavior was monitored with high-speed framing and streak camera photography. Direct initiation with a donor explosive (either RDX with binder, or CL-20 with binder) was possible in both orientations, but with the addition of a thin aluminum buffer plate (in-plane configuration only), initiation proved to be difficult. Initiation was possible with an explosively driven 0.13-mm-thick Kapton flyer, and initiation behavior was observed directly using streak camera photography at different flyer velocities. Models of this configuration were created using the shock physics code CTH.
Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.
Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias
2017-11-27
Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
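The core geometric step, mapping MRI-space surface points into the thermographic image once the neuronavigation system supplies the camera pose, is a standard pinhole projection; the matrices below are placeholders, not the paper's calibration.

```python
# Hedged sketch: project 3D (MRI-space) points into 2D pixel coords.
import numpy as np

def project(points_mri, K, R, t):
    """points_mri: (N, 3); K: 3x3 intrinsics; R, t: MRI -> camera pose.
    Returns (N, 2) pixel coordinates."""
    cam = R @ points_mri.T + t.reshape(3, 1)   # into the camera frame
    uvw = K @ cam                              # apply intrinsics
    return (uvw[:2] / uvw[2]).T                # perspective divide

K = np.array([[500.0, 0.0, 160.0],
              [0.0, 500.0, 120.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 100.0])  # placeholder pose
pts = np.array([[10.0, -5.0, 30.0], [0.0, 0.0, 50.0]])
print(project(pts, K, R, t))
```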
NASA Technical Reports Server (NTRS)
Rice, J. W., Jr.; Smith, P. H.; Marshall, J. R.
1999-01-01
The first microscopic sedimentological studies of the Martian surface will commence with the landing of the Mars Polar Lander (MPL) on December 3, 1999. The Robotic Arm Camera (RAC) has a resolution of 25 μm/pixel, which will permit detailed micromorphological analysis of surface and subsurface materials. The Robotic Arm will be able to dig up to 50 cm below the surface. The walls of the trench will also be inspected by the RAC to look for evidence of stratigraphic and/or sedimentological relationships. The 2001 Mars Lander will build upon and expand the sedimentological research begun by the RAC on MPL. This will be accomplished at two scales: (1) macroscopic (dm to cm): Descent Imager, Pancam, RAC; (2) microscopic (mm to μm): RAC, MECA Optical Microscope, AFM. This paper will focus on investigations that can be conducted by the RAC and the MECA Optical Microscope.