High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
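The histogram-based contrast enhancement mentioned above is not specified in detail; the Python sketch below assumes plain histogram equalization to map the 12-bit digital video onto an 8-bit display range for the RS170 analog output (the function name and the equalization choice are illustrative assumptions, not the paper's algorithm).

import numpy as np

def equalize_12bit_to_8bit(frame):
    # Map a 12-bit frame (uint16 values in [0, 4095]) to 8 bits via
    # histogram equalization -- one plausible form of histogram-based
    # contrast enhancement; the actual camera pipeline may differ.
    hist = np.bincount(frame.ravel(), minlength=4096)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1.0)  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # 4096-entry lookup table
    return lut[frame]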
The application of high-speed photography in z-pinch high-temperature plasma diagnostics
NASA Astrophysics Data System (ADS)
Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei
2007-01-01
This invited paper reviews recent applications of high-speed photography to z-pinch high-temperature plasma diagnostics at the Northwest Institute of Nuclear Technology. The development and application of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment. If such a system is tested with the traditional infrared camera test system and visible CCD test system separately, installation and alignment must be performed twice. The large-aperture test system for infrared cameras and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, giving it good market prospects.
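The multiple-frame averaging step admits a compact illustration. A hedged Python sketch (generic temporal averaging, not the authors' exact implementation): averaging N co-registered frames leaves the static target signal unchanged while reducing zero-mean random noise by roughly sqrt(N).

import numpy as np

def average_frames(frames):
    # Temporal average of N co-registered test-target frames; the noise
    # standard deviation drops by ~sqrt(N) for uncorrelated noise.
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)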
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
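The digital reformatting of four port outputs into a correct image can be sketched in Python under the hypothetical assumption that each of the four contiguous outputs delivers a 256-row by 64-column vertical stripe in raster order (real multiport CCDs often mirror alternate ports, which would require flipping those stripes first; the paper's actual timing defines the true ordering).

import numpy as np

def reformat_ports(port_streams):
    # port_streams: four 1-D pixel streams, one per CCD output port.
    # Reshape each stream into its assumed 256 x 64 stripe, then stitch
    # the stripes side by side into the final 256 x 256 image.
    stripes = [np.asarray(s, dtype=np.uint16).reshape(256, 64)
               for s in port_streams]
    return np.hstack(stripes)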
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. DOE contract DE-AC05-00OR22725.
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korea Superconducting Tokamak Advanced Research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with images of the KSTAR Ohmic discharges during the first plasma campaign.
View of Saudi Arabia and north eastern Africa from the Apollo 17 spacecraft
1972-12-09
AS17-148-22718 (7-19 Dec. 1972) --- This excellent view of Saudi Arabia and the northeastern portion of the African continent was photographed by the Apollo 17 astronauts with a hand-held camera on their trans-lunar coast toward man's last lunar visit. Egypt, Sudan, and Ethiopia are among the African nations visible. Iran, Iraq, and Jordan are not so clearly visible because of cloud cover and their location in the picture. India is dimly visible at right of frame. The Red Sea is seen entirely in this single frame, a rare occurrence in Apollo photography or any photography taken from manned spacecraft. The Gulf of Suez, the Dead Sea, Gulf of Aden, Persian Gulf and Gulf of Oman are also visible. This frame is one of 169 frames on film magazine NN carried aboard Apollo 17, all of which are SO368 (color) film. A 250mm lens on a 70mm Hasselblad camera recorded the image, one of 92 taken during the trans-lunar coast. Note AS17-148-22727 (also magazine NN) for an excellent full Earth picture showing the entire African continent.
Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments
NASA Astrophysics Data System (ADS)
Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete
2002-04-01
New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 X 1024, 18 um pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high frame rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs-based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 µs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, pulsed-power electric experiments, and the measurement of spatial mode characteristics of IR-laser radiation.
A high resolution IR/visible imaging system for the W7-X limiter
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.
2016-11-01
A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m2) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.
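The between-shot calorimetry quoted above (tile energy inferred from the equilibrated bulk temperature rise) amounts to E = m·c·ΔT. A worked Python example with illustrative numbers only: the tile mass and temperature rise below are hypothetical, chosen to land near the 30 kJ scale quoted, and the graphite specific heat is an approximate room-temperature value.

c_graphite = 710.0   # J/(kg K), approximate room-temperature graphite
m_tile = 2.0         # kg, hypothetical tile mass
dT = 21.0            # K, hypothetical equilibrated temperature rise

E = m_tile * c_graphite * dT
print(f"tile energy input: {E / 1e3:.1f} kJ")   # ~29.8 kJ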
Fast visible imaging of turbulent plasma in TORPEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Diallo, A.; Fasoli, A.
2008-10-15
Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement Langmuir probe measurements, which allows comparing statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple-head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of perpendicular view and in the plasma region where interchange modes dominate.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
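The anomaly-based motion detection described above can be reduced to a minimal frame-differencing sketch in Python (an illustrative baseline, not the authors' heuristic pipeline; real systems add filtering, morphology, and per-camera registration):

import numpy as np

def moving_object_mask(prev_frame, frame, threshold=25):
    # Flag pixels whose grayscale intensity changed between consecutive
    # frames; the threshold value is illustrative. With two synchronized
    # cameras, a track can survive occlusion in one view as long as the
    # mask remains non-empty in the other view's simultaneous frame.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold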
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
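Estimating the camera-to-body rotation from paired line-of-sight unit vectors, as described above, is an instance of Wahba's problem. A generic SVD (Kabsch) solution in Python is sketched below; the paper's actual estimator may differ, so treat this as a stand-in method.

import numpy as np

def alignment_rotation(v_body, v_cam):
    # v_body, v_cam: (N, 3) arrays of matching unit vectors expressed in
    # the body and camera frames. Returns the rotation R (camera -> body)
    # minimizing sum ||v_body_i - R @ v_cam_i||^2 via SVD.
    B = v_body.T @ v_cam                     # 3 x 3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt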
Schlieren imaging of loud sounds and weak shock waves in air near the limit of visibility
NASA Astrophysics Data System (ADS)
Hargather, Michael John; Settles, Gary S.; Madalis, Matthew J.
2010-02-01
A large schlieren system with exceptional sensitivity and a high-speed digital camera are used to visualize loud sounds and a variety of common phenomena that produce weak shock waves in the atmosphere. Frame rates varied from 10,000 to 30,000 frames/s with microsecond frame exposures. Sound waves become visible to this instrumentation at frequencies above 10 kHz and sound pressure levels in the 110 dB (6.3 Pa) range and above. The density gradient produced by a weak shock wave is examined and found to depend upon the profile and thickness of the shock as well as the density difference across it. Schlieren visualizations of weak shock waves from common phenomena include loud trumpet notes, various impact phenomena that compress a bubble of air, bursting a toy balloon, popping a champagne cork, snapping a wooden stick, and snapping a wet towel. The balloon burst, snapping a ruler on a table, and snapping the towel and a leather belt all produced readily visible shock-wave phenomena. In contrast, clapping the hands, snapping the stick, and the champagne cork all produced wave trains that were near the weak limit of visibility. Overall, with sensitive optics and a modern high-speed camera, many nonlinear acoustic phenomena in the air can be observed and studied.
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are also a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 μm wavelength range. To obtain all-sky information, the camera is mounted on top of a frame looking downward onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems use different software for calculating fractional cloud coverage from images. Our study analyzes mainly the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results of the measurement accuracy of the IRCCAM compared to the visible cameras indicate that 78% of the data are within ±1 octa and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. The IRCCAM therefore detects cloud coverage with similar performance to the visible cameras and human observers, with the advantage that continuous measurements at high temporal resolution are possible.
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LEDs in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LEDs or extract the information embedded in those frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted considering incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
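The combination of pixel intensity with an optical-flow prior can be sketched in Python as a simple unnormalized posterior; the Gaussian prior below is an illustrative stand-in for the statistical information the paper derives from previous frames.

import numpy as np

def locate_led(intensity, predicted_pos, sigma=8.0):
    # intensity: 2-D array normalized to [0, 1]; predicted_pos: (row, col)
    # extrapolated via optical flow from previous frames. The pixel
    # maximizing intensity * prior is taken as the LED position.
    rows, cols = np.indices(intensity.shape)
    d2 = (rows - predicted_pos[0])**2 + (cols - predicted_pos[1])**2
    prior = np.exp(-d2 / (2.0 * sigma**2))   # position prior, width sigma px
    posterior = intensity * prior            # unnormalized P(pixel is LED)
    return np.unravel_index(np.argmax(posterior), posterior.shape)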
Earth Observations taken by the Expedition 10 crew
2005-01-17
ISS010-E-13680 (17 January 2005) --- The border of Galveston and Brazoria Counties in Texas is visible in this electronic still camera's image, as photographed by the Expedition 10 crew onboard the International Space Station. Polly Ranch, near Friendswood, is visible west of Interstate Highway 45 (right side). FM528 goes horizontally through the middle, and FM518 runs vertically through frame center, with the two roads intersecting near Friendswood.
Lunar Reconnaissance Orbiter Camera (LROC) instrument overview
Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.
2010-01-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
NASA Astrophysics Data System (ADS)
Le, Nam-Tuan
2017-05-01
Copyright protection and information security are two of the most pressing concerns for digital data, following the development of the internet and computer networks. As an important protection solution, watermarking technology has become one of the key challenges in industrial and academic research. Watermarking techniques can be classified into two categories: visible watermarking and invisible watermarking. The invisible technique has an advantage for user interaction because the watermark does not disturb what the user sees. Applying watermarking to communication is a challenge and a new direction for communication technology. In this paper we propose new research on communication technology using optical camera communications (OCC) based on invisible watermarking. Besides an analysis of the performance of the proposed system, we also suggest the frame structure of the PHY and MAC layers for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standardization.
High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina
Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng
2010-01-01
A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small Type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was installed recently on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's CRED-ONE infrared camera is capable of capturing up to 3500 full frames per second with a subelectron readout noise. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array which is a real disruptive technology in imagery. We will show the performances of the camera, its main features and compare them to other high performance wavefront sensing cameras like OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilbert, B.; Chiaberge, M.; Kotyla, J. P.
2016-07-01
We present new rest-frame UV and visible observations of 22 high-z (1 < z < 2.5) 3C radio galaxies and QSOs obtained with the Hubble Space Telescope's Wide Field Camera 3 instrument. Using a custom data reduction strategy in order to assure the removal of cosmic rays, persistence signal, and other data artifacts, we have produced high-quality science-ready images of the targets and their local environments. We observe targets with regions of UV emission suggestive of active star formation. In addition, several targets exhibit highly distorted host galaxy morphologies in the rest-frame visible images. Photometric analyses reveal that brighter QSOs generally tend to be redder than their dimmer counterparts. Using emission line fluxes from the literature, we estimate that emission line contamination is relatively small in the rest-frame UV images for the QSOs. Using archival VLA data, we have also created radio map overlays for each of our targets, allowing for analysis of the optical and radio axes alignment.
1. GENERAL VIEW OF SLC3W SHOWING SOUTH FACE AND EAST ...
1. GENERAL VIEW OF SLC-3W SHOWING SOUTH FACE AND EAST SIDE OF A-FRAME MOBILE SERVICE TOWER (MST). MST IN SERVICE POSITION OVER LAUNCHER AND FLAME BUCKET. CABLE TRAYS BETWEEN LAUNCH OPERATIONS BUILDING (BLDG. 763) AND SLC-3W IN FOREGROUND. LIQUID OXYGEN APRON VISIBLE IMMEDIATELY EAST (RIGHT) OF MST; FUEL APRON VISIBLE IMMEDIATELY WEST (LEFT) OF MST. A PORTION OF THE FLAME BUCKET VISIBLE BELOW THE SOUTH FACE OF THE MST. CAMERA TOWERS VISIBLE EAST OF MST BETWEEN ROAD AND CABLE TRAY, AND SOUTH OF MST NEAR LEFT MARGIN OF PHOTOGRAPH. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 West, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples
Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.
2014-01-01
Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
Iodine filter imaging system for subtraction angiography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.
1993-11-01
A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single energy broad-bandwidth X-ray produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. By synchronizing the timing of the movement of the iodine filter into and out of the X-ray beam, two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
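K-edge subtraction of the iodine-filtered/non-filtered image pair is conventionally done in the log domain, since attenuation is exponential in material thickness. A textbook-style Python sketch (not necessarily this system's exact pipeline):

import numpy as np

def kedge_subtraction(img_filtered, img_unfiltered, eps=1e-6):
    # Subtracting log-images acquired above and below the iodine K-edge
    # cancels non-iodine anatomy to first order, leaving the iodine
    # (contrast agent) signal; eps avoids log(0) on dark pixels.
    return np.log(img_unfiltered + eps) - np.log(img_filtered + eps)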
Fast-camera imaging on the W7-X stellarator
NASA Astrophysics Data System (ADS)
Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.
2017-10-01
Fast cameras recording in the visible range have been used to study filamentary (``blob'') edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the SOL or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X Stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.
Tracking Sunspots from Mars, April 2015 Animation
2015-07-10
This single frame from a six-image animation shows sunspots as viewed by NASA's Curiosity Mars rover from April 4 to April 15, 2015. From Mars, the rover was in position to see the side of the sun facing away from Earth. The images were taken by the right-eye camera of Curiosity's Mast Camera (Mastcam), which has a 100-millimeter telephoto lens. The view on the left of each pair in this sequence has little processing other than calibration and putting north toward the top of each frame. The view on the right of each pair has been enhanced to make sunspots more visible. The apparent granularity throughout these enhanced images is an artifact of this processing. The sunspots seen in this sequence eventually produced two solar eruptions, one of which affected Earth. http://photojournal.jpl.nasa.gov/catalog/PIA19802
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages ( 10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementarity analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.
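The rate-of-change measurement described above (light escaping cloud top across consecutive frames) reduces to tracking a thresholded flash area per frame. A hedged Python sketch with illustrative parameters (the threshold and frame interval are assumptions, and this is not the project's actual toolkit code):

import numpy as np

def flash_area_rate(frames, threshold=0.5, dt=1.0 / 30.0):
    # frames: sequence of 2-D luminance arrays from consecutive video
    # frames; returns per-frame lit area (pixels) and its time derivative,
    # the quantities compared against NLDN/ISS-LIS/GLM flash times.
    areas = np.array([(f > threshold).sum() for f in frames], dtype=float)
    return areas, np.gradient(areas, dt)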
Internal Waves, South China Sea
1983-06-24
STS007-05-245 (18-24 June 1983) --- A rare view of internal waves in the South China Sea. Several different series of internal waves are represented in the 70mm frame, exposed with a handheld camera by members of the STS-7 astronaut crew aboard the Earth-orbiting Challenger. The land area visible in the lower left is part of the large island of Hainan, China.
A multi-channel coronal spectrophotometer.
NASA Technical Reports Server (NTRS)
Landman, D. A.; Orrall, F. Q.; Zane, R.
1973-01-01
We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.
CubeSat Nighttime Earth Observations
NASA Astrophysics Data System (ADS)
Pack, D. W.; Hardy, B. S.; Longcore, T.
2017-12-01
Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include: urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.
Earth observations taken during STS-41C
2009-06-25
41C-51-2414 (6-13 April 1984) --- The entire Texas portion of the Gulf Coast and part of Louisiana's shoreline are visible in this frame, photographed on 4"x5" roll film using a large format camera aboard the Earth-orbiting space shuttle Challenger. Coastal bays and other geographic features from the Boca Chica (mouth of Rio Grande), to the mouth of the Mississippi are included in the frame, photographed from approximately 285 nautical miles above Earth. Inland cities that can be easily delineated are San Antonio, Austin, College Station, Del Rio and Lufkin. Easily pinpointed coastal cities include Houston, Galveston and Corpus Christi. The 41-C crew members used this frame as one of the visuals for their post-flight press conference on April 24, 1984.
2004-09-07
Lonely Mimas swings around Saturn, seeming to gaze down at the planet's splendid rings. The outermost, narrow F ring is visible here and exhibits some clumpy structure near the bottom of the frame. The shadow of Saturn's southern hemisphere stretches almost entirely across the rings. Mimas is 398 kilometers (247 miles) wide. The image was taken with the Cassini spacecraft narrow angle camera on August 15, 2004, at a distance of 8.8 million kilometers (5.5 million miles) from Saturn, through a filter sensitive to visible red light. The image scale is 53 kilometers (33 miles) per pixel. Contrast was slightly enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06471
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomanowski, B. A., E-mail: b.a.lomanowski@durham.ac.uk; Sharples, R. M.; Meigs, A. G.
2014-11-15
The mirror-linked divertor spectroscopy diagnostic on JET has been upgraded with a new visible and near-infrared grating and filtered spectroscopy system. New capabilities include extended near-infrared coverage up to 1875 nm, capturing the hydrogen Paschen series, as well as a 2 kHz frame rate filtered imaging camera system for fast measurements of impurity (Be II) and deuterium Dα, Dβ, Dγ line emission in the outer divertor. The expanded system provides unique capabilities for studying spatially resolved divertor plasma dynamics at near-ELM resolved timescales as well as a test bed for feasibility assessment of near-infrared spectroscopy.
Solid-state framing camera with multiple time frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, K. L.; Stewart, R. E.; Steele, P. T.
2013-10-07
A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Coffey, S. K.
2012-10-01
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm2 of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
Efficient coding and detection of ultra-long IDs for visible light positioning systems.
Zhang, Hualong; Yang, Chuanchuan
2018-05-14
Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), featuring low cost and high accuracy. The technique becomes even more crucial in indoor environments, where satellite signals are weak or even unavailable. For large-scale application of VLP, there would be a considerable number of light-emitting diode (LED) IDs, which creates a demand for long LED ID detection. In particular, to provision indoor localization globally, a convenient approach is to program a unique ID into each LED during manufacture. This poses a big challenge for image sensors, such as the CMOS camera in everybody's hands, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling-shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We studied the performance of the Reed-Solomon (RS) code in this system and designed a new coding method that considers the trade-off between performance and decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Moreover, our proposed coding method is shown to achieve performance close to that of the RS code while the decoding complexity is much lower.
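A minimal sketch of the RS-coded ID idea in Python, using the third-party reedsolo package as a stand-in (an assumption for illustration; the paper designs its own lower-complexity code). Parity symbols let the receiver recover an ultra-long ID even when rolling-shutter frame gaps corrupt part of the codeword.

from reedsolo import RSCodec

led_id = b"GLOBAL-UNIQUE-LED-ID-0001"   # hypothetical ultra-long ID payload
rsc = RSCodec(10)                       # 10 parity bytes: corrects up to 5 byte errors
codeword = rsc.encode(led_id)           # transmitted across multiple frames

# Receiver side: simulate bytes corrupted by motion blur / frame gaps.
rx = bytearray(codeword)
rx[3] ^= 0xFF
rx[17] ^= 0xFF
decoded = rsc.decode(bytes(rx))[0]      # reedsolo >= 1.0: decode returns (msg, full, errata)
assert decoded == led_id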
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
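The exposure-bracketed merge at the heart of the pipeline can be written compactly. Below is a Python sketch of a Debevec-style weighted merge (the hat weighting and normalization are the generic software form; the paper implements a specific hardware version):

import numpy as np

def merge_hdr(frames, exposures):
    # frames: LDR images normalized to [0, 1]; exposures: matching
    # exposure times in seconds. A hat weight favors well-exposed pixels;
    # the result is a relative radiance map for subsequent tone mapping.
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at mid-gray
        num += w * img / t                  # exposure-normalized radiance
        den += w
    return num / np.maximum(den, 1e-6)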
STS-52 CANEX-2 Canadian Target Assembly (CTA) held by RMS over OV-102's PLB
1992-11-01
STS052-71-057 (22 Oct-1 Nov 1992) --- This 70mm frame, photographed with a handheld Hasselblad camera aimed through Columbia's aft flight deck windows, captures the operation of the Space Vision System (SVS) experiment above the cargo bay. Target dots have been placed on the Canadian Target Assembly (CTA), a small satellite, in the grasp of the Canadian-built remote manipulator system (RMS) arm. SVS utilized a Shuttle TV camera to monitor the dots strategically arranged on the satellite, to be tracked. As the satellite moved via the arm, the SVS computer measured the changing position of the dots and provided real-time television display of the location and orientation of the CTA. This type of displayed information is expected to help an operator guide the RMS or the Mobile Servicing System (MSS) of the future when berthing or deploying satellites. Also visible in the frame is the U.S. Microgravity Payload (USMP-01).
NASA Technical Reports Server (NTRS)
Barnes, J. C. (Principal Investigator); Smallwood, M. D.; Cogan, J. L.
1975-01-01
The author has identified the following significant results. Of the four black and white S190A camera stations, snowcover is best defined in the two visible spectral bands, due in part to their better resolution. The overall extent of the snow can be mapped more precisely, and the snow within shadow areas is better defined in the visible bands. Of the two S190A color products, the aerial color photography is the better. Because of the contrast in color between snow and snow-free terrain and the better resolution, this product is concluded to be the best overall of the six camera stations for detecting and mapping snow. Overlapping frames permit stereo viewing, which aids in distinguishing clouds from the underlying snow. Because of the greater spatial resolution of the S190B earth terrain camera, areal snow extent can be mapped in greater detail than from the S190A photographs. The snow line elevation measured from the S190A and S190B photographs is reasonable compared to the meager ground truth data available.
Video Completion in Digital Stabilization Task Using Pseudo-Panoramic Technique
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Buryachenko, V. V.; Zotin, A. G.; Pakhirka, A. I.
2017-05-01
Video completion is a necessary stage after stabilization of a non-stationary video sequence if the resolution of the stabilized frames should equal the resolution of the original frames. Cropped stabilized frames usually lose 10-20% of their area, which degrades the visibility of the reconstructed scenes. The extension of the field of view may be required due to unwanted pan-tilt-zoom camera movement. Our approach prepares a pseudo-panoramic key frame during the stabilization stage as a pre-processing step for the following inpainting. It is based on a multi-layered representation of each frame, including the background and objects moving differently. The proposed algorithm involves four steps: background completion, local motion inpainting, local warping, and seamless blending. Our experiments show that seamless stitching is needed more often than the local warping step. Therefore, seamless blending was investigated in detail, covering four main categories: feathering-based, pyramid-based, gradient-based, and optimal seam-based blending.
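Of the four blending categories above, the feathering-based variant is the simplest to sketch. A Python illustration using SciPy's Euclidean distance transform (the ramp width is an illustrative parameter; the paper's pipeline is more elaborate):

import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(frame, panorama, valid_mask, width=15):
    # valid_mask is 1 where the stabilized frame has pixels, 0 where the
    # pseudo-panoramic key frame must fill in. An alpha ramp of `width`
    # pixels near the boundary hides the seam between the two layers.
    alpha = np.clip(distance_transform_edt(valid_mask) / width, 0.0, 1.0)
    return alpha * frame + (1.0 - alpha) * panorama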
New Orleans after Hurricane Katrina
2005-09-08
JSC2005e37990 (8 September 2005) --- Flooding of large sections of I-610 and the I-610/I-10 interchange (center) are visible to the east of the 17th Street Canal in this image acquired on September 8, 2005 from the International Space Station. Flooded regions are dark greenish brown, while dry areas are light brown to tan. North is to top of image, which was cropped from the digital still camera's original frame, ISS011-E-12527.
An approach to instrument qualified visual range
NASA Astrophysics Data System (ADS)
Courtade, Benoît; Bonnet, Jordan; Woodruff, Chris; Larson, Josiah; Giles, Andrew; Sonde, Nikhil; Moore, C. J.; Schimon, David; Harris, David Money; Pond, Duane; Way, Scott
2008-04-01
This paper describes a system that calculates aircraft visual range with instrumentation alone. A unique message is encoded using modified binary phase shift keying and continuously flashed at high speed by ALSF-II runway approach lights. The message is sampled at 400 frames per second by an aircraft-borne high-speed camera. The encoding is designed to avoid visible flicker and minimize the required frame rate. Instrument qualified visual range is identified as the largest distance at which the aircraft system can acquire and verify the correct, runway-specific signal. Scaled testing indicates that if the system were implemented on one full ALSF-II fixture, instrument qualified range could be established at 5 miles in clear weather conditions.
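A toy version of the acquire-and-verify step, under stated assumptions: the runway code here is a generic random ±1 sequence standing in for the paper's modified-BPSK message, and detection is a matched-filter amplitude test on the camera's intensity samples. As haze attenuates the lights, the estimated amplitude drops below threshold and the range is no longer "qualified".

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical runway-specific code; a stand-in for the modified-BPSK message.
    code = rng.choice([-1.0, 1.0], size=200)

    def signal_acquired(samples, code, min_amplitude=0.1):
        """Matched-filter test: estimate the beacon amplitude in the 400 fps
        camera samples and compare it to a verification threshold."""
        s = samples - samples.mean()
        c = code - code.mean()
        corr = np.correlate(s, c, mode="valid")
        est_amplitude = corr.max() / np.dot(c, c)
        return est_amplitude > min_amplitude

    # Simulated samples: the flashed code, attenuated by the atmosphere, plus noise.
    attenuation = 0.2          # lower with thicker haze or greater distance
    samples = attenuation * np.tile(code, 2) + rng.normal(0.0, 0.3, 2 * code.size)
    print(signal_acquired(samples, code))   # True while the link still closes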
September 2006 Monthly Report- ITER Visible/IRTV Optical Design Scoping Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lasnier, C
LLNL received a request from the US ITER organization to perform a scoping study of optical design for visible/IR camera systems for the 6 upper ports of ITER. A contract was put in place and the LLNL account number was opened July 19, 2006. A kickoff meeting was held at LLNL July 26. The principal work under the contract is being performed by Lynn Seppala (optical designer), Kevin Morris (mechanical designer), Max Fenstermacher (visible cameras), Mathias Groth (assisting with visible cameras), and Charles Lasnier (IR cameras and Principal Investigator), all LLNL employees. Kevin Morris has imported ITER CAD files and developed a simplified 3D view of the ITER tokamak with upper ports, which he used to determine the optimum viewing angle from an upper port to see the outer target. He also determined the minimum angular field of view needed to see the largest possible coverage of the outer target. We examined the CEA-Cadarache report on their optical design for ITER visible/IRTV equatorial ports. We found that the resolution was diffraction-limited by the 5-mm aperture through the tile. Lynn Seppala developed a similar front-end design for an upper port but with a larger 6-inch-diameter beam. This allows the beam to pass through the port plug and port interspace without further focusing optics until outside the bioshield. This simplifies the design as well as eliminating a requirement for complex relay lenses in the port interspace. The focusing optics are all mirrors, which allows the system to handle light from 0.4 µm to 5 µm wavelength without chromatic aberration. The window material chosen is sapphire, as in the CEA design. Sapphire has good transmission in the desired wavelengths up to 4.8 µm, as well as good mechanical strength. We have verified that sapphire windows of the needed size are commercially available. The diffraction-limited resolution permitted by the 5-mm aperture falls short of the ITER specification value but is well matched to the resolution of current detectors. A large increase in resolution would require a similar increase in the linear pixel count on a detector. However, we cannot increase the aperture much without affecting the image quality. Lynn Seppala is writing a memo detailing the resolution trade-offs. Charles Lasnier is calculating the radiated power that will fall on the detector, in order to estimate signal-to-noise ratio and maximum frame rate. The signal will be reduced by the fact that the outer target plates are tungsten, which radiates less than carbon at the same temperature. The tungsten will also reflect radiation from the carbon tiles of the private flux dome, which will radiate efficiently although at a lower temperature than the target plates. The analysis will include estimates of these effects. Max Fenstermacher is investigating the intensity of line emission that will be emitted in the visible band, in order to predict signal-to-noise ratio and maximum frame rate for the visible camera. Andre Kukushkin has modeling results that will give local emission of deuterium and carbon lines. Line integrals of the emission must be done to produce the emitted intensity. The model is not able to handle tungsten and beryllium, so we will only be able to estimate deuterium and carbon emission. Total costs as of September 30, 2006 are $87,834.43. Manpower was 0.58 FTE in July, 1.48 in August, and 1.56 in September.
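For scale, the Rayleigh criterion shows why the 5-mm tile aperture limits the resolution; a back-of-envelope check (the ~10 m stand-off distance is an assumed, illustrative figure, not taken from the report):

    def rayleigh_spot(wavelength_m, aperture_m, distance_m):
        """Rayleigh-criterion angular resolution and the spot size it
        subtends at the target distance."""
        theta = 1.22 * wavelength_m / aperture_m   # radians
        return theta, theta * distance_m

    # 5 mm aperture at the 5 um long-wave end of the 0.4-5 um band,
    # viewed from an assumed ~10 m stand-off.
    theta, spot = rayleigh_spot(5e-6, 5e-3, 10.0)
    print(f"{theta * 1e3:.2f} mrad -> {spot * 100:.1f} cm at the target")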
C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board thanks to an FPGA. We will show its performance and expose its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark level and readout speed, based on the SNAKE SWIR detector from Sofradir; this camera is called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, within the framework of the CPER.
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurden, G. A.; Coffey, S. K.
2012-10-15
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
NASA Astrophysics Data System (ADS)
Ou, Yangwei; Zhang, Hongbo; Li, Bin
2018-04-01
The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, an optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. LIDAR (Light Detection and Ranging) or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limitation on the field of view (FOV) of cameras, the visibility of spacecraft and the installation of cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate the position and velocity. The results show that the navigation accuracy can be enhanced by using more deputies and that the installation of cameras significantly affects the navigation performance.
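The measurement construction is straightforward to sketch. Assuming a conventional azimuth/elevation parameterization (the axis ordering is my assumption, not stated in the abstract), the camera's LOS angles and the measured range combine into the relative position vector, which the known inertial attitude then rotates into the inertial frame used by the filter:

    import numpy as np

    def relative_position(azimuth, elevation, range_m):
        """Relative position in the camera frame from LOS angles (rad) and
        range (m), using a standard spherical parameterization."""
        return range_m * np.array([
            np.cos(elevation) * np.cos(azimuth),
            np.cos(elevation) * np.sin(azimuth),
            np.sin(elevation),
        ])

    # Deputy at 10 km range, 30 deg azimuth, 5 deg elevation.
    r_cam = relative_position(np.radians(30.0), np.radians(5.0), 10e3)
    R_cam_to_inertial = np.eye(3)        # placeholder for the known attitude
    z = R_cam_to_inertial @ r_cam        # EKF measurement in the inertial frame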
Saying Goodbye to 'Bonneville' Crater
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] NASA's Mars Exploration Rover Spirit took this panoramic camera image on sol 86 (March 31, 2004) before driving 36 meters (118 feet) on sol 87 toward its future destination, the Columbia Hills. This is probably the last panoramic camera image that Spirit will take from the high rim of 'Bonneville' crater, and provides an excellent view of the ejecta-covered path the rover has journeyed thus far. The lander can be seen toward the upper right of the frame and is approximately 321 meters (1060 feet) away from Spirit's current location. The large hill on the horizon is Grissom Hill. The Columbia Hills, located to the left, are not visible in this image.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera now has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.
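A common baseline for this kind of fusion, offered here only as an illustrative sketch (the paper's own enhancement algorithm is not specified in the abstract): upsample each high-rate low-resolution frame and inject the high-spatial-frequency detail from the temporally nearest high-resolution frame.

    import numpy as np
    from scipy import ndimage

    def fuse_frame(hi_res, lo_res, scale):
        """Enhance one high-rate low-res frame using the nearest high-res frame:
        bilinear upsampling supplies the scene at full size, and the detail the
        low-res sensor cannot capture is borrowed from the high-res image."""
        up = ndimage.zoom(lo_res.astype(float), scale, order=1)
        smooth = ndimage.gaussian_filter(hi_res.astype(float), sigma=scale)
        detail = hi_res.astype(float) - smooth    # high-spatial-frequency content
        return np.clip(up + detail, 0.0, 255.0)

    # Toy sizes: a 4x resolution gap between the two sensors.
    hi = np.random.randint(0, 256, (256, 256)).astype(float)
    lo = hi[::4, ::4] + np.random.normal(0.0, 2.0, (64, 64))
    enhanced = fuse_frame(hi, lo, scale=4)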
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
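The frame-synchronized harvesting step can be sketched abstractly. In the toy below (a schematic of the described bookkeeping, with invented interfaces), machine vision reports which vehicles are in view in each synchronized frame and the gamma imager supplies per-vehicle counts; each vehicle's signature is then integrated for exactly the frames it is visible.

    from collections import defaultdict

    def integrate_signatures(frames):
        """Accumulate a per-vehicle gamma-ray signature while machine vision
        reports the vehicle in the field of view.

        `frames` yields (vehicle_ids_in_view, gamma_counts) per synchronized
        camera/imager frame, where gamma_counts maps vehicle id -> counts
        attributed to that vehicle's image-plane position in this frame.
        (Schematic interface, not the instrument's actual software.)"""
        signatures = defaultdict(int)
        for ids_in_view, gamma_counts in frames:
            for vid in ids_in_view:
                signatures[vid] += gamma_counts.get(vid, 0)
        return dict(signatures)

    # Toy stream: vehicle 7 crosses three frames, vehicle 8 two frames.
    stream = [({7}, {7: 12}), ({7, 8}, {7: 15, 8: 3}), ({8}, {8: 4})]
    print(integrate_signatures(stream))   # {7: 27, 8: 7}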
Krychowiak, M; Adnan, A; Alonso, A; Andreeva, T; Baldzuhn, J; Barbui, T; Beurskens, M; Biel, W; Biedermann, C; Blackwell, B D; Bosch, H S; Bozhenkov, S; Brakel, R; Bräuer, T; Brotas de Carvalho, B; Burhenn, R; Buttenschön, B; Cappa, A; Cseh, G; Czarnecka, A; Dinklage, A; Drews, P; Dzikowicka, A; Effenberg, F; Endler, M; Erckmann, V; Estrada, T; Ford, O; Fornal, T; Frerichs, H; Fuchert, G; Geiger, J; Grulke, O; Harris, J H; Hartfuß, H J; Hartmann, D; Hathiramani, D; Hirsch, M; Höfel, U; Jabłoński, S; Jakubowski, M W; Kaczmarczyk, J; Klinger, T; Klose, S; Knauer, J; Kocsis, G; König, R; Kornejew, P; Krämer-Flecken, A; Krawczyk, N; Kremeyer, T; Książek, I; Kubkowska, M; Langenberg, A; Laqua, H P; Laux, M; Lazerson, S; Liang, Y; Liu, S C; Lorenz, A; Marchuk, A O; Marsen, S; Moncada, V; Naujoks, D; Neilson, H; Neubauer, O; Neuner, U; Niemann, H; Oosterbeek, J W; Otte, M; Pablant, N; Pasch, E; Sunn Pedersen, T; Pisano, F; Rahbarnia, K; Ryć, L; Schmitz, O; Schmuck, S; Schneider, W; Schröder, T; Schuhmacher, H; Schweer, B; Standley, B; Stange, T; Stephey, L; Svensson, J; Szabolics, T; Szepesi, T; Thomsen, H; Travere, J-M; Trimino Mora, H; Tsuchiya, H; Weir, G M; Wenzel, U; Werner, A; Wiegel, B; Windisch, T; Wolf, R; Wurden, G A; Zhang, D; Zimbal, A; Zoletnik, S
2016-11-01
Wendelstein 7-X, a superconducting optimized stellarator built in Greifswald, Germany, started its first plasmas with the last closed flux surface (LCFS) defined by 5 uncooled graphite limiters in December 2015. At the end of the 10-week experimental campaign (OP1.1), more than 20 independent diagnostic systems were in operation, allowing detailed studies of many interesting plasma phenomena. For example, fast neutral gas manometers supported by video cameras (including one fast-frame camera with frame rates of tens of kHz) as well as visible cameras with different interference filters, with fields of view covering all ten half-modules of the stellarator, discovered a MARFE-like radiation zone on the inboard side of machine module 4. This structure is presumably triggered by an inadvertent plasma-wall interaction in module 4 resulting in a high impurity influx that terminates some discharges by radiation cooling. The main plasma parameters achieved in OP1.1 exceeded predicted values in discharges reaching 6 s in length. Although OP1.1 is characterized by short pulses, many of the diagnostics are already designed for quasi-steady-state operation with 30 min discharges heated at 10 MW of ECRH. An overview of diagnostic performance for OP1.1 is given, including some highlights from the physics campaigns.
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the task performed, e.g., spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can use each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
Development of low-cost high-performance multispectral camera system at Banpil
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.
2014-05-01
Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring fewer than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, expanding deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.
2015-10-08
Regions with exposed water ice are highlighted in blue in this composite image from New Horizons' Ralph instrument, combining visible imagery from the Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). The strongest signatures of water ice occur along Virgil Fossa, just west of Elliot crater on the left side of the inset image, and also in Viking Terra near the top of the frame. A major outcrop also occurs in Baré Montes towards the right of the image, along with numerous much smaller outcrops, mostly associated with impact craters and valleys between mountains. The scene is approximately 280 miles (450 kilometers) across. Note that all surface feature names are informal. http://photojournal.jpl.nasa.gov/catalog/PIA19963
Earth observations taken during STS-90 mission
1998-04-20
STS090-758-018 (17 April - 3 May 1998) --- The Space Shuttle Columbia was almost directly over the San Diego, California, area when this scene was captured with a 70mm handheld camera. In order for north to appear toward the top of the frame, it should be held with the Pacific Ocean waters to the left. The United States Naval Air Station, the United States Naval Training Center, United States Marine Corps (USMC) Recruit Depot and the United States Naval Station are all visible just left of center on or near the island and peninsula features. Among the many bodies of water visible in the photo are Mission Bay, San Diego Bay, Lower Otay Reservoir, Sweetwater Reservoir and El Capitan Reservoir.
Visible Color and Photometry of Bright Materials on Vesta
NASA Technical Reports Server (NTRS)
Schroder, S. E.; Li, J. Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.
2012-01-01
The Dawn Framing Camera (FC) collected images of the surface of Vesta at a pixel scale of 70 m in the High Altitude Mapping Orbit (HAMO) phase through its clear and seven color filters spanning from 430 nm to 980 nm. The surface of Vesta displays a large diversity in its brightness and colors, evidently related to the diverse geology [1] and mineralogy [2]. Here we report a detailed investigation of the visible colors and photometric properties of the apparently bright materials on Vesta in order to study their origin. The global distribution and the spectroscopy of bright materials are discussed in companion papers [3, 4], and the synthesis results about the origin of Vestan bright materials are reported in [5].
2017-07-28
Cassini gazed toward high southern latitudes near Saturn's south pole to observe ghostly curtains of dancing light -- Saturn's southern auroras, or southern lights. These natural light displays at the planet's poles are created by charged particles raining down into the upper atmosphere, making gases there glow. The dark area at the top of this scene is Saturn's night side. The auroras rotate from left to right, curving around the planet as Saturn rotates over about 70 minutes, compressed here into a movie sequence of about five seconds. Background stars are seen sliding behind the planet. Cassini was moving around Saturn during the observation, keeping its gaze fixed on a particular spot on the planet, which causes a shift in the distant background over the course of the observation. Some of the stars seem to make a slight turn to the right just before disappearing. This effect is due to refraction -- the starlight gets bent as it passes through the atmosphere, which acts as a lens. Random bright specks and streaks appearing from frame to frame are due to charged particles and cosmic rays hitting the camera detector. The aim of this observation was to observe seasonal changes in the brightness of Saturn's auroras, and to compare with the simultaneous observations made by Cassini's infrared and ultraviolet imaging spectrometers. The original images in this movie sequence have a size of 256x256 pixels; both the original size and a version enlarged to 500x500 pixels are available here. The small image size is the result of a setting on the camera that allows for shorter exposure times than full-size (1024x1024 pixel) images. This enabled Cassini to take more frames in a short time and still capture enough photons from the auroras for them to be visible. The images were taken in visible light using the Cassini spacecraft narrow-angle camera on July 20, 2017, at a distance of about 620,000 miles (1 million kilometers) from Saturn. The views look toward 74 degrees south latitude on Saturn. Image scale is about 0.9 mile (1.4 kilometers) per pixel on Saturn. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21623
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
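As a baseline for what demosaicking a 3x3 pattern involves, here is a minimal per-band interpolation sketch (plain normalized averaging, far simpler than the edge-guided and super-resolution methods the paper develops; the pattern layout is arbitrary):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def demosaic_9band(mosaic, pattern):
        """Demosaic a 3x3-pattern 9-band image by per-band normalized averaging.

        mosaic:  (H, W) raw frame; each pixel holds one band's sample.
        pattern: (3, 3) ints 0..8 giving the band index at each mosaic position.
        Returns an (H, W, 9) cube. A 5x5 window guarantees every band is
        sampled at least once within it, since the pattern repeats every 3.
        """
        h, w = mosaic.shape
        bands = np.zeros((h, w, 9))
        band_at = np.tile(pattern, (h // 3 + 1, w // 3 + 1))[:h, :w]
        for b in range(9):
            mask = (band_at == b).astype(float)
            num = uniform_filter(mosaic * mask, size=5)
            den = uniform_filter(mask, size=5)
            bands[:, :, b] = num / np.maximum(den, 1e-12)
        return bands

    # Toy usage on a 9x9 frame with the identity 3x3 layout.
    pattern = np.arange(9).reshape(3, 3)
    raw = np.random.rand(9, 9)
    cube = demosaic_9band(raw, pattern)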
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that although there is presently no single-chip camera that can match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.
Early forest fire detection using principal component analysis of infrared video
NASA Astrophysics Data System (ADS)
Saghri, John A.; Radjabi, Ryan; Jacobs, John T.
2011-09-01
A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques, which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement. The false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally-uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of moving tree branches due to wind.
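A compact numpy sketch of the described pipeline (illustrative only; the PC rank and threshold would be tuned on real data, and here a high-variance component is picked by hand rather than selected automatically):

    import numpy as np
    from scipy.ndimage import median_filter

    def plume_trace(frames, pc_rank=1, z_thresh=3.0):
        """Temporal principal component transform over one step-stare dwell.

        frames: (T, H, W) IR stack. Fire-plume pixels share a temporal
        signature, so the plume footprint concentrates in a low-order PC
        image; thresholding plus median filtering then removes clutter
        such as wind-blown branches.
        """
        t, h, w = frames.shape
        x = frames.reshape(t, -1).astype(float)
        x -= x.mean(axis=0)                      # remove each pixel's temporal mean
        cov = x @ x.T / (h * w)                  # small T x T temporal covariance
        _, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
        pcs = (vecs.T @ x).reshape(t, h, w)      # PC images, ascending variance
        pc = pcs[-pc_rank]                       # a low-order (high-variance) PC
        z = (np.abs(pc) - np.abs(pc).mean()) / (np.abs(pc).std() + 1e-12)
        return median_filter(z > z_thresh, size=3)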
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of about 221 miles (355 kilometers) per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
Kidd, David G; Brethwaite, Andrew
2014-05-01
This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 passenger vehicles of the 2010-2013 model years with a backup camera or a backup camera plus a parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately.
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
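The acquisition step is easy to simulate. In this minimal sketch (sizes and codes are illustrative), each of T sub-frames is masked by its own random binary code and the masked sub-frames are summed into the single frame the camera reads out; the statistical CS inversion that recovers the sub-frames from that sum is the part not shown.

    import numpy as np

    rng = np.random.default_rng(1)

    T, H, W = 8, 64, 64                          # 8 sub-frames per camera frame
    subframes = rng.random((T, H, W))            # the dynamic scene, x_t
    masks = rng.random((T, H, W)) < 0.5          # per-sub-frame coded apertures, m_t

    # Single readout: y = sum_t m_t * x_t. One camera frame now encodes
    # 8 time steps, an 8x effective frame-rate increase after inversion.
    camera_frame = (masks * subframes).sum(axis=0)
    print(camera_frame.shape)                    # (64, 64)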
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
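The real-time centroiding step can be illustrated with standard tools (a schematic stand-in for the authors' algorithms): label connected bright regions in a frame, take the intensity-weighted centroid of each, and record a per-spot integrated intensity for later matching against the PMT time-of-flight peak heights.

    import numpy as np
    from scipy import ndimage

    def centroid_spots(frame, thresh):
        """Return (centroid, integrated intensity) for each ion spot in a frame."""
        labels, n = ndimage.label(frame > thresh)
        idx = range(1, n + 1)
        centroids = ndimage.center_of_mass(frame, labels, idx)
        intensities = ndimage.sum(frame, labels, idx)
        return list(zip(centroids, intensities))

    # Toy frame with two compact spots.
    frame = np.zeros((100, 100))
    frame[20:23, 30:33] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]
    frame[70:73, 60:63] = [[2, 4, 2], [4, 9, 4], [2, 4, 2]]
    print(centroid_spots(frame, thresh=0.5))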
Mosad and Stream Vision For A Telerobotic, Flying Camera System
NASA Technical Reports Server (NTRS)
Mandl, William
2002-01-01
Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320x240 with 16-micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance to CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.
2013-08-22
ISS036-E-035177 (22 Aug. 2013) --- Russian cosmonaut Alexander Misurkin, Expedition 36 flight engineer, attired in a Russian Orlan spacesuit, participates in a session of extravehicular activity (EVA) to continue outfitting the International Space Station. During the five-hour, 58-minute spacewalk, Misurkin and Russian cosmonaut Fyodor Yurchikhin (out of frame) completed the replacement of a laser communications experiment with a new platform for a small optical camera system, the installation of new spacewalk aids and an inspection of antenna covers. Parts of solar array panels on the orbital outpost are visible in the background.
2013-08-22
ISS036-E-035198 (22 Aug. 2013) --- Russian cosmonaut Alexander Misurkin, Expedition 36 flight engineer, attired in a Russian Orlan spacesuit, participates in a session of extravehicular activity (EVA) to continue outfitting the International Space Station. During the five-hour, 58-minute spacewalk, Misurkin and Russian cosmonaut Fyodor Yurchikhin (out of frame) completed the replacement of a laser communications experiment with a new platform for a small optical camera system, the installation of new spacewalk aids and an inspection of antenna covers. A section of the space station is visible in the reflections in his helmet visor.
2013-08-22
ISS036-E-035200 (22 Aug. 2013) --- Russian cosmonaut Alexander Misurkin, Expedition 36 flight engineer, attired in a Russian Orlan spacesuit, participates in a session of extravehicular activity (EVA) to continue outfitting the International Space Station. During the five-hour, 58-minute spacewalk, Misurkin and Russian cosmonaut Fyodor Yurchikhin (out of frame) completed the replacement of a laser communications experiment with a new platform for a small optical camera system, the installation of new spacewalk aids and an inspection of antenna covers. A section of the space station is visible in the reflections in his helmet visor.
2012-09-05
ISS032-E-025171 (5 Sept. 2012) --- Japan Aerospace Exploration Agency astronaut Aki Hoshide, Expedition 32 flight engineer, participates in the mission's third session of extravehicular activity (EVA). During the six-hour, 28-minute spacewalk, Hoshide and NASA astronaut Sunita Williams (out of frame), flight engineer, completed the installation of a Main Bus Switching Unit (MBSU) that was hampered last week by a possible misalignment and damaged threads where a bolt must be placed. They also installed a camera on the International Space Station's robotic arm, Canadarm2. A cloud-covered part of Earth is visible in the background.
1997-08-27
This image of the rock "Wedge" was taken from the Sojourner rover's rear color camera on Sol 37. The position of the rover relative to Wedge is seen in MRPS 83349. The segmented rod visible in the middle of the frame is the deployment arm for the Alpha Proton X-Ray Spectrometer (APXS). The APXS, the bright, cylindrical object at the end of the arm, is positioned against Wedge and is designed to measure the rock's chemical composition. This was done successfully on the night of Sol 37. http://photojournal.jpl.nasa.gov/catalog/PIA00906
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven-color push-frame imager with a 90° field of view in monochrome mode and a 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
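To make "wavelength-dependent radial distortion" concrete, here is a minimal first-order model sketch. The functional form is the standard one, but the per-band coefficients below are invented placeholders, not LROC's published values.

    import numpy as np

    def correct_radial(x, y, k1):
        """First-order radial distortion correction of focal-plane
        coordinates (x, y), taken relative to the optical axis."""
        r2 = x**2 + y**2
        s = 1.0 + k1 * r2
        return x * s, y * s

    # Hypothetical per-band coefficients k1(lambda) for a multispectral imager;
    # a wavelength-dependent model simply looks the coefficient up by band.
    k1_by_band_nm = {415: -2.1e-4, 566: -1.9e-4, 604: -1.8e-4}
    x_c, y_c = correct_radial(np.array([3.0]), np.array([1.5]), k1_by_band_nm[566])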
2015-12-09
This representation of Ceres' Occator Crater in false colors shows differences in the surface composition. Red corresponds to a wavelength range around 0.97 micrometers (near infrared), green to a wavelength range around 0.75 micrometers (red, visible light) and blue to a wavelength range of around 0.44 micrometers (blue, visible light). Occator measures about 60 miles (90 kilometers) wide. Scientists use false color to examine differences in surface materials. The color blue on Ceres is generally associated with bright material, found in more than 130 locations, and seems to be consistent with salts, such as sulfates. It is likely that silicate materials are also present. The images were obtained by the framing camera on NASA's Dawn spacecraft from a distance of about 2,700 miles (4,400 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA20180
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept in which the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization, matched in form and fit to the mechanical housing of the visible objective design but with different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.
Fast camera imaging of dust in the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Yu, J. H.; Rudakov, D. L.; Pigarov, A. Yu.; Smirnov, R. D.; Brooks, N. H.; Muller, S. H.; West, W. P.
2009-06-01
Naturally occurring and injected dust particles are observed in the DIII-D tokamak in the outer midplane scrape-off-layer (SOL) using a visible fast-framing camera, and the size of dust particles is estimated using the observed particle lifetime and the theoretical ablation rate of a carbon sphere. Using this method, the lower limit of detected dust radius is ~3 μm and particles with inferred radius as large as ~1 mm are observed. Dust particle 2D velocities range from approximately 10 to 300 m/s, with velocities inversely correlated with dust size. Pre-characterized 2-4 μm diameter diamond dust particles are introduced at the lower divertor in an ELMing H-mode discharge using the divertor materials evaluation system (DiMES), and these particles are found to be at the lower size limit of detection using the camera with a resolution of ~0.2 cm² per pixel and an exposure time of 330 μs.
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine the morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight-position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9° × 2.25° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced-aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight-frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be the low quantum detection efficiency for x-ray photons.
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multi-spectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well-calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red-channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
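The stellar technique reduces to a simple ratio estimate; a sketch with made-up numbers (not the published MVIC values):

    import numpy as np

    def inflight_factor(measured_dn, predicted_dn):
        """Multiplicative correction for one detector: the ratio of counts
        observed on calibration stars to counts predicted by the preflight,
        component-level calibration, averaged over the stars."""
        ratios = np.asarray(measured_dn, float) / np.asarray(predicted_dn, float)
        return ratios.mean(), ratios.std(ddof=1)

    # Illustrative star measurements for one detector.
    factor, scatter = inflight_factor([1040.0, 985.0, 1012.0], [1000.0] * 3)
    print(f"inflight/preflight = {factor:.3f} +/- {scatter:.3f}")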
NASA Astrophysics Data System (ADS)
Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo
2018-01-01
This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.
Color image processing and object tracking workstation
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Paulick, Michael J.
1992-01-01
A system is described for automatic and semiautomatic tracking of objects on film or videotape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and storage and output devices. Both the projector and the tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
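The automatic-mode control loop lends itself to a compact sketch: advance the frame source, grab a frame, locate the tracked edge, and append the coordinates to a file. The synthetic frame source and the simple row-averaged edge locator below are invented stand-ins for the projector/tapedeck and grabber hardware.

import numpy as np

def grab_frame(i, n=100):
    # Stand-in for the frame grabber: a dark image with a bright front
    # advancing one column per frame (hypothetical data).
    frame = np.zeros((64, n))
    frame[:, : 10 + i] = 1.0
    return frame

def locate_edge(frame, threshold=0.5):
    # Column index of the right-most above-threshold position, row-averaged.
    profile = frame.mean(axis=0)
    above = np.nonzero(profile > threshold)[0]
    return int(above[-1]) if above.size else -1

coords = []
for i in range(30):                  # "until the last frame is reached"
    frame = grab_frame(i)            # advance source, grab a frame
    coords.append((i, locate_edge(frame)))

np.savetxt("track.csv", coords, fmt="%d", delimiter=",")  # stored coordinates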
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. The use of real-time tracking is therefore prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution video in gray scale.
Edge Turbulence Imaging in Alcator C-Mod
NASA Astrophysics Data System (ADS)
Zweben, Stewart J.
2001-10-01
This talk will describe measurements and modeling of the 2-D structure of edge turbulence in Alcator C-Mod. The radial vs. poloidal structure was measured using Gas Puff Imaging (GPI) (R. Maqueda et al, RSI 72, 931 (2001), J. Terry et al, J. Nucl. Materials 290-293, 757 (2001)), in which the visible light emitted by an edge neutral gas puff (generally D or He) is viewed along the local magnetic field by a fast-gated video camera. Strong fluctuations are observed in the gas cloud light emission when the camera is gated at ~2 microsec exposure time per frame. The structure of these fluctuations is highly turbulent with a typical radial and poloidal scale of ≈1 cm, and often with local maxima in the scrape-off layer (i.e. ``blobs"). Video clips and analyses of these images will be presented along with their variation in different plasma regimes. The local time dependence of edge turbulence is measured using high-speed photodiodes viewing the gas puff emission, a scanning Langmuir probe, and also with a Princeton Scientific Instruments ultra-fast framing camera, which can make 2-D images of the gas puff at up to 200,000 frames/sec. Probe measurements show that the strong turbulence region moves to the separatrix as the density limit is approached, which may be connected to the density limit (B. LaBombard et al., Phys. Plasmas 8 2107 (2001)). Comparisons of this C-Mod turbulence data will be made with results of simulations from the Drift-Ballooning Mode (DBM) (B.N. Rogers et al, Phys. Rev. Lett. 20 4396 (1998)) and Non-local Edge Turbulence (NLET) codes.
Dust measurements in tokamaks (invited).
Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C
2008-10-01
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak, dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast-frame CMOS camera has been developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast-frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast-frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms have been developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
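The multi-hit bookkeeping admits a compact sketch: spots on a frame are centroided, then paired with TOF peaks by intensity ranking. The frame below is synthetic, the greedy brightest-to-tallest pairing is a simplification of the intensity correlation described above, and the threshold is arbitrary.

import numpy as np
from scipy import ndimage

def centroid_spots(frame, thresh):
    # Label above-threshold blobs; return centroids and summed intensities.
    labels, n = ndimage.label(frame > thresh)
    idx = range(1, n + 1)
    centers = np.array(ndimage.center_of_mass(frame, labels, idx))
    sums = np.array(ndimage.sum(frame, labels, idx))
    return centers, sums

def match_by_intensity(spot_sums, peak_heights):
    # Greedy pairing: brightest spot with tallest TOF peak, and so on down.
    return list(zip(np.argsort(spot_sums)[::-1], np.argsort(peak_heights)[::-1]))

frame = np.zeros((64, 64))         # synthetic frame with two spots
frame[10:13, 20:23] = 5.0          # bright spot
frame[40:43, 50:53] = 1.5          # dim spot
centers, sums = centroid_spots(frame, thresh=0.5)
pairs = match_by_intensity(sums, np.array([0.4, 2.1]))  # two TOF peak heights
print(centers, pairs)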
Fast soft x-ray images of magnetohydrodynamic phenomena in NSTX.
Bush, C E; Stratton, B C; Robinson, J; Zakharov, L E; Fredrickson, E D; Stutman, D; Tritz, K
2008-10-01
A variety of magnetohydrodynamic (MHD) phenomena have been observed on NSTX. Many of these affect fast particle losses, which are of major concern for future burning plasma experiments. Usual diagnostics for studying these phenomena are arrays of Mirnov coils for magnetic oscillations and p-i-n diode arrays for soft x-ray emission from the plasma core. Data reported here are from a unique fast soft x-ray imaging camera (FSXIC) with a wide-angle (pinhole) tangential view of the entire plasma minor cross section. The camera provides a 64x64 pixel image, on a charge coupled device chip, of light resulting from conversion of soft x rays incident on a phosphor to the visible. We have acquired plasma images at frame rates of 1-500 kHz (300 frames/shot) and have observed a variety of MHD phenomena: disruptions, sawteeth, fishbones, tearing modes, and edge localized modes (ELMs). New data including modes with frequency >90 kHz are also presented. Data analysis and modeling techniques used to interpret the FSXIC data are described and compared, and FSXIC results are compared to Mirnov and p-i-n diode array results.
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because the infrared cage and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate at the resulting low luminous density; moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary illumination cannot be used during the test. To improve fine monitoring of the spacecraft and the presentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate under a vacuum thermal environment of 1.33×10-3 Pa and 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5 °C during the two-day test. The night vision imaging system achieved video with a resolving power of 60 lp/mm.
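The multi-frame accumulation step can be illustrated in a few lines: averaging N frames of a static scene improves the signal-to-noise ratio by roughly sqrt(N). The scene, noise level, and frame count below are invented; this is a toy model of the principle, not the BISEE pipeline.

import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((128, 128))
scene[40:90, 40:90] = 10.0                                # faint static target
frames = scene + rng.normal(0, 25.0, size=(64, *scene.shape))  # 64 noisy frames

accumulated = frames.mean(axis=0)                         # multi-frame accumulation
snr_single = scene.max() / 25.0
snr_accum = scene.max() / (25.0 / np.sqrt(len(frames)))   # ~8x improvement for N=64
print(f"single-frame SNR ~{snr_single:.2f}, accumulated ~{snr_accum:.2f}")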
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
A novel simultaneous streak and framing camera without principle errors
NASA Astrophysics Data System (ADS)
Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.
2018-02-01
A novel simultaneous streak and framing camera with continuous access has been developed; the complete information it records is critical for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for streak records. Test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and a streak with no parallax and an identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 h+) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
EVA 2 - MS Newman and Massimino over Australia
2002-03-05
STS109-E-5610 (5 March 2002) --- Astronauts James H. Newman, attached to the Remote Manipulator System (RMS) arm of the Space Shuttle Columbia, and Michael J. Massimino (barely visible against the Hubble Space Telescope near center frame) work on the telescope as the shuttle flies over Australia. This day's space walk went on to see astronauts Newman and Massimino replace the port solar array on the Hubble. On the previous day astronauts John M. Grunsfeld and Richard M. Linnehan replaced the starboard solar array on the giant telescope. The image was recorded with a digital still camera.
Earth Observations taken by the Expedition 39 Crew
2014-04-22
ISS039-E-014807 (22 April 2014) --- As the International Space Station passed over the Bering Sea on Earth Day, one of the Expedition 39 crew members aboard the orbital outpost shot this panoramic scene looking toward Russia. The Kamchatka Peninsula can be seen in the foreground. Sunglint is visible on the left side of the frame. Only two points of view from Earth orbit were better for taking in this scene than that of the crew member with the camera inside, and those belonged to the two spacewalking astronauts -- Flight Engineers Rick Mastracchio and Steve Swanson of NASA.
Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.
2016-10-09
We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a GigE-readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably compared with other contemporary approaches. This paper describes several real-time contrast-enhancing systems developed at the Sarnoff Corporation for high-speed visible and infrared cameras; the fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to characterize the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling cuts calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking, so no frames are lost. The enhancer measures 13 cm x 6.4 cm x 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
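A minimal software analogue of the enhancer's structure, assuming a histogram-equalization transfer curve: the frame is spatially subsampled before histogramming (fewer cycles), the LUT is built with integer arithmetic only (fixed point), and pixels then stream through the LUT. The step size and the equalization choice are assumptions, not the Sarnoff design.

import numpy as np

def build_lut(frame, step=4):
    # Histogram the spatially subsampled frame; return a 256-entry LUT.
    sub = frame[::step, ::step]                     # subsample to cut cycles
    hist = np.bincount(sub.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)  # integer math only
    return cdf.astype(np.uint8)

def enhance(frame, lut):
    return lut[frame]                               # stream pixels through the LUT

rng = np.random.default_rng(1)
frame = rng.integers(90, 130, size=(480, 640), dtype=np.uint8)  # low-contrast scene
lut = build_lut(frame)      # in hardware, written to the LUT during blanking
out = enhance(frame, lut)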
Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P
2016-02-01
We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize its frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller-than-3 mm-diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
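The frequency-trim loop reduces to a small proportional controller. The linear voltage-to-line-period model and the gain below are invented for illustration; only the structure (measure the line period, steer the supply voltage toward the target period) follows the description above.

def line_period_us(vdd):
    # Hypothetical sensor behaviour: the line period shortens as supply rises.
    return 14.0 - 1.5 * (vdd - 1.8)

target_us, vdd, kp = 13.3, 1.80, 0.3
for _ in range(20):
    error = line_period_us(vdd) - target_us   # measured minus desired period
    vdd += kp * error                         # raise voltage to shorten the period
print(f"settled at {vdd:.3f} V, line period {line_period_us(vdd):.3f} us")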
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by the CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated using Walsh-design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high-dynamic-range (DR) imaging is needed for un-attenuated, unspoiled, bright-light spectrally diverse targets.
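A minimal CDMA-mode illustration: two CAOS pixels, time-modulated with distinct Walsh codes, sum onto one point detector, and correlation recovers each pixel's irradiance. The 64-bit code length and the irradiance values are arbitrary stand-ins for the 4096-bit codes above.

import numpy as np
from scipy.linalg import hadamard

n = 64
walsh = hadamard(n)                  # rows are mutually orthogonal +/-1 codes
code_a, code_b = walsh[5], walsh[9]
irr_a, irr_b = 3.2, 0.7              # unknown pixel irradiances to recover

detector = irr_a * code_a + irr_b * code_b   # summed point-detector time signal
rec_a = detector @ code_a / n                # time-domain correlation decode
rec_b = detector @ code_b / n
print(rec_a, rec_b)                          # recovers ~3.2 and ~0.7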
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
High-speed imaging facilities are important and necessary for time-resolved measurement systems with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light-beam splitting in the image space behind a lens of long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each 1024×1024 pixels, can be captured simultaneously by our camera. Besides, this camera system possesses good linearity, uniform spatial response, and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.
Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.
Barter, James D; Thompson, Harold R; Richardson, Christine L
2003-03-20
A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
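A sketch of the per-pixel polarimetric inversion, assuming the ideal 4x4 instrument matrix for 0°, 90°, 45°, and right-circular analyzers (I = A·S, so S = inv(A)·I); the calibrated Mueller-matrix version described above would simply replace A.

import numpy as np

A = 0.5 * np.array([
    [1,  1,  0,  0],   # 0-deg linear analyzer
    [1, -1,  0,  0],   # 90-deg linear analyzer
    [1,  0,  1,  0],   # 45-deg linear analyzer
    [1,  0,  0,  1],   # right-circular analyzer
])
A_inv = np.linalg.inv(A)

def stokes(i0, i90, i45, icirc):
    # Per-pixel Stokes images from the four registered intensity images.
    I = np.stack([i0, i90, i45, icirc])            # shape (4, H, W)
    return np.einsum("ij,jhw->ihw", A_inv, I)      # shape (4, H, W): S0..S3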
Reisman during Expedition 16/STS-123 EVA 1
2008-03-14
ISS016-E-032705 (13/14 March 2008) --- Astronaut Garrett Reisman, Expedition 16 flight engineer, uses a digital camera to expose a photo of his helmet visor during the mission's first scheduled session of extravehicular activity (EVA) as construction and maintenance continue on the International Space Station. Also visible in the reflections in the visor are various components of the station, the docked Space Shuttle Endeavour and a blue and white portion of Earth. During the seven-hour and one-minute spacewalk, Reisman and astronaut Rick Linnehan (out of frame), STS-123 mission specialist, prepared the Japanese logistics module-pressurized section (JLP) for removal from Space Shuttle Endeavour's payload bay; opened the Centerline Berthing Camera System on top of the Harmony module; removed the Passive Common Berthing Mechanism and installed both the Orbital Replacement Unit (ORU) tool change out mechanisms on the Canadian-built Dextre robotic system, the final element of the station's Mobile Servicing System.
SEOS frame camera applications study
NASA Technical Reports Server (NTRS)
1974-01-01
A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated. The computed lens characteristics for each camera are listed.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming because the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation with commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower-frame-rate imagery. Optical flow field vectors are first derived from high-frame-rate, high-resolution imagery and then used as a basis for temporal upsampling of the slower-frame-rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion: imagery is preprocessed to varying resolution scales, and new flow vector estimates are initialized using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
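A sketch of flow-based temporal upsampling: dense flow is estimated on the fast, high-resolution camera and used to warp the slow sensor's frame to an intermediate time t in (0, 1). The filenames, the single-scale backward-warping shortcut, and the Farneback parameters are illustrative assumptions, not the report's processing chain.

import cv2
import numpy as np

fast_a = cv2.imread("fast_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
fast_b = cv2.imread("fast_t1.png", cv2.IMREAD_GRAYSCALE)
slow_a = cv2.imread("slow_t0.png", cv2.IMREAD_GRAYSCALE)   # registered to fast grid

# Dense flow from the high-frame-rate imagery (multi-scale pyramid internally).
flow = cv2.calcOpticalFlowFarneback(fast_a, fast_b, None,
                                    0.5, 4, 21, 3, 5, 1.1, 0)

def upsample(frame, flow, t):
    # Warp `frame` (time 0) toward time t; backward sampling approximates
    # forward motion along the flow field.
    h, w = frame.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs - t * flow[..., 0]
    map_y = ys - t * flow[..., 1]
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

mid = upsample(slow_a, flow, 0.5)   # synthetic slow-sensor frame at t = 0.5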
Novel instrumentation for multifield time-lapse cinemicrography.
Kallman, R F; Blevins, N; Coyne, M A; Prionas, S D
1990-04-01
The most significant feature of the system that is described is its ability to image essentially simultaneously the growth of up to 99 single cells into macroscopic colonies, each in its own microscope field. Operationally, fields are first defined and programmed by a trained observer. All subsequent steps are automatic and under computer control. Salient features of the hardware are stepper motor-controlled movement of the stage and fine adjustment of an inverted microscope, a high-quality 16-mm cine camera with light meter and controls, and a miniature incubator in which cells may be grown under defined conditions directly on the microscope stage. This system, termed MUTLAS, necessitates reordering of the primary images by rephotographing them on fresh film. Software developed for the analysis of cell and colony growth requires frame-by-frame examination of the secondary film and the use of a mouse-driven cursor to trace microscopically visible (4X objective magnification) events.
Io's Sodium Cloud (Clear and Green-Yellow Filters)
NASA Technical Reports Server (NTRS)
1997-01-01
The green-yellow filter and clear filter images of Io which were released over the past two days were originally exposed on the same frame. The camera pointed in slightly different directions for the two exposures, placing a clear filter image of Io on the top half of the frame, and a green-yellow filter image of Io on the bottom half of the frame. This picture shows that entire original frame in false color, the most intense emission appearing white.
East is to the right. Most of Io's visible surface is in shadow, though one can see part of an illuminated crescent on its western side. The burst of white light near Io's eastern equatorial edge (most distinctive in the green filter image) is sunlight scattered by the plume of the volcano Prometheus. There is much more bright light near Io in the clear filter image, since that filter's wider wavelength range admits more scattered light from Prometheus' sunlit plume and Io's illuminated crescent. Thus in the clear filter image especially, Prometheus's plume was bright enough to produce several white spikes which extend radially outward from the center of the plume emission. These spikes are artifacts produced by the optics of the camera. Two of the spikes in the clear filter image appear against Io's shadowed surface, and the lower of these points toward a bright round spot. That spot corresponds to thermal emission from the volcano Pele. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high-time-resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second-generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 Megaframes/sec with record lengths of up to 256,000 frames (25.6 milliseconds) was achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.
Hardware accelerator design for change detection in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil
2011-10-01
Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (like the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
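A toy version of a clustering-based change detector, for flavor only: each pixel keeps K cluster centroids of past intensities, and a frame is deemed significant when many pixels match no cluster. The thresholds and the running-mean update rule are invented stand-ins, not the scheme implemented in the accelerator.

import numpy as np

K, H, W = 3, 120, 160
centroids = np.zeros((K, H, W))            # per-pixel background clusters
alpha, radius, frac = 0.05, 12.0, 0.01

def process(frame):
    d = np.abs(centroids - frame)          # distance to each cluster, (K, H, W)
    nearest = d.min(axis=0)
    idx = d.argmin(axis=0)
    # Update the matched centroid toward the new pixel value.
    mask = np.eye(K, dtype=bool)[idx].transpose(2, 0, 1)
    centroids[mask] += alpha * (np.broadcast_to(frame, (K, H, W))[mask]
                                - centroids[mask])
    changed = nearest > radius             # pixel matches no cluster
    return changed.mean() > frac           # frame worth transmitting?

rng = np.random.default_rng(2)
print(process(rng.normal(100.0, 3.0, size=(H, W))))  # True: empty model, all "new"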
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and that currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a 1-million-fps video camera based on an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.
Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele
2016-07-01
The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backward with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration forced the development of a new distortion map model. Additional considerations are connected to the detector, a Si-PIN hybrid CMOS, which is characterized by a high fixed-pattern noise; this had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to defining the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible-wavelength video camera. These data were processed frame by frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20-second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
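The algorithm transcribes almost directly into array code. The filter coefficients, moving-average length, and threshold below are arbitrary placeholders; only the structure (highpass, moving average, rectify, then threshold or full-frame mean) follows the description above.

import numpy as np
from scipy.signal import lfilter

def _filtered(frames, hp=0.95, M=8):
    # First-order temporal highpass cascaded with an M-frame moving average,
    # applied per pixel along the time axis. frames: (T, H, W).
    hi = lfilter([hp, -hp], [1.0, -hp], frames, axis=0)
    return lfilter(np.ones(M) / M, [1.0], hi, axis=0)

def leak_mask(frames, thresh=4.0):
    # Binary leak/no-leak decision at each pixel of each frame.
    return np.abs(_filtered(frames)) > thresh

def mean_indicator(frames):
    # Full-frame mean of the rectified output: a scalar leak indicator vs time.
    return np.abs(_filtered(frames)).mean(axis=(1, 2))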
Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera
NASA Technical Reports Server (NTRS)
Parker, Vance C.; Crews, Jeanne Lee
1988-01-01
The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1×10^6 frames/sec with exposure times of 20 nsec.
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos, containing over 33,000 frames, captured using visible-range and near-infrared-range sensors mounted onboard ships, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos from visible and infrared cameras, for onshore and onboard ship camera placements.
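A bare-bones flavor of the pipeline: median-smooth at several scales, take a weighted edge map, Radon-transform it, and read the dominant near-horizontal line off the peak. The scales, Sobel-magnitude weighting, angle window, and median-vote consistency step are simplified stand-ins for MuSCoWERT's actual histogram analysis.

import numpy as np
from scipy.ndimage import median_filter, sobel
from skimage.transform import radon

def horizon(gray):
    votes = []
    for scale in (3, 7, 15):                      # multi-scale median filtering
        sm = median_filter(gray, size=scale)
        edges = np.hypot(sobel(sm, 0), sobel(sm, 1))   # weighted edge map
        theta = np.linspace(60.0, 120.0, 121)     # near-horizontal lines peak ~90 deg
        sino = radon(edges, theta=theta, circle=False)
        r, t = np.unravel_index(np.argmax(sino), sino.shape)
        votes.append((r, theta[t]))
    rs, ts = zip(*votes)                          # consistency across scales:
    return np.median(rs), np.median(ts)           # median of per-scale detections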
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
Enhanced Early View of Ceres from Dawn
2014-12-05
As the Dawn spacecraft flies through space toward the dwarf planet Ceres, the unexplored world appears to its camera as a bright light in the distance, full of possibility for scientific discovery. This view was acquired as part of a final calibration of the science camera before Dawn's arrival at Ceres. To accomplish this, the camera needed to take pictures of a target that appears just a few pixels across. On Dec. 1, 2014, Ceres was about nine pixels in diameter, nearly perfect for this calibration. The images provide data on very subtle optical properties of the camera that scientists will use when they analyze and interpret the details of some of the pictures returned from orbit. Ceres is the bright spot in the center of the image. Because the dwarf planet is much brighter than the stars in the background, the camera team selected a long exposure time to make the stars visible. The long exposure made Ceres appear overexposed, and exaggerated its size; this was corrected by superimposing a shorter exposure of the dwarf planet in the center of the image. A cropped, magnified view of Ceres appears in the inset image at lower left. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19050
True Ortho Generation of Urban Area Using High Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Hu, Yong; Stanley, David; Xin, Yubin
2016-06-01
The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Existing methods process frame and pushbroom images using different algorithms for visibility analysis, because z-buffer (and similar) techniques require the perspective centers. For occlusion compensation, the pixel-based approach tends to produce excessive seamlines in the ortho-rectified images because it rates quality on a pixel-by-pixel basis. In this paper, we propose solutions to both problems. For visibility analysis, an elevation-buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and therefore has the advantage of sensor independence. For occlusion compensation, a segment-oriented strategy is developed that evaluates a plain cost measure per segment instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging-geometry characteristics in ground space, and is also sensor independent. Experimental results are demonstrated using aerial photos acquired by an UltraCam camera.
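A toy sketch of the elevation-buffer idea under stated assumptions: every digital surface model (DSM) cell is projected into the image by an arbitrary sensor model, and for each image pixel only the highest-elevation cell is kept visible. The projection callback and data layout are illustrative, not the paper's implementation.

```python
import numpy as np

def elevation_buffer_visibility(dsm, project):
    """Sensor-independent visibility analysis via an elevation buffer.

    dsm     : 2-D array of ground elevations (meters).
    project : callable (row, col) -> integer (u, v) pixel; may wrap any
              sensor model, frame or pushbroom (assumed interface).
    Unlike a z-buffer, which needs distances to a perspective center,
    the buffer here stores plain elevations: of all DSM cells mapping
    to one pixel, only the highest remains visible.
    """
    best = {}                                  # (u, v) -> elevation
    winner = {}                                # (u, v) -> (row, col)
    visible = np.zeros(dsm.shape, dtype=bool)
    for r in range(dsm.shape[0]):
        for c in range(dsm.shape[1]):
            uv = project(r, c)
            if uv not in best or dsm[r, c] > best[uv]:
                if uv in winner:
                    visible[winner[uv]] = False   # demote occluded cell
                best[uv] = dsm[r, c]
                winner[uv] = (r, c)
                visible[r, c] = True
    return visible
```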
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
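To make the acquisition principle concrete, the toy sketch below rebuilds a hyperspectral cube from a sequence of stepped-filter frames. It assumes perfectly registered nadir frames advancing exactly one ground line per exposure and a user-supplied row-to-band map; real processing requires geometric registration, so this is only a schematic of the data flow.

```python
import numpy as np

def assemble_hypercube(frames, band_of_row, n_bands):
    """Accumulate stepped-filter frames into a (line, sample, band) cube.

    frames      : list of H x W arrays from a spatio-spectral sensor.
    band_of_row : callable row -> band index (the stepwise filter layout).
    Assumes the platform advances one ground line per frame, so sensor
    row r at time t sees ground line t + r; successive frames therefore
    view each ground line through different filter lines.
    """
    h, w = frames[0].shape
    cube = np.zeros((len(frames) + h, w, n_bands))
    hits = np.zeros_like(cube)
    for t, frame in enumerate(frames):
        for r in range(h):
            cube[t + r, :, band_of_row(r)] += frame[r]
            hits[t + r, :, band_of_row(r)] += 1
    return cube / np.maximum(hits, 1)      # average repeated looks
```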
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18-micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
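A simplified stand-in for the multiple-capture idea: k short exposures taken at k times the output rate are combined per pixel, keeping only non-saturated samples, so highlights survive while the effective exposure lengthens. The saturation level and scaling rule are assumptions for illustration.

```python
import numpy as np

def multiple_capture_combine(high_rate_frames, sat_level=4000):
    """Merge k short-exposure frames into one extended-range output frame.

    high_rate_frames : list of k equal-exposure frames captured between
    two standard-rate outputs. Summing only the non-saturated samples of
    each pixel, then rescaling as if all k were valid, extends dynamic
    range, a simplified version of on-chip multiple capture.
    """
    stack = np.stack(high_rate_frames).astype(np.int64)   # k x H x W
    ok = stack < sat_level                 # samples below saturation
    n_valid = np.maximum(ok.sum(axis=0), 1)
    total = np.where(ok, stack, 0).sum(axis=0)
    return total * (len(high_rate_frames) / n_valid)
```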
NASA Astrophysics Data System (ADS)
Arvesen, J. C.; Dotson, R. C.
2014-12-01
The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28 mm focal length lens, resulting in a 10 cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notably, virtually all IceBridge flights do not fly parallel grids, so there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
· Higher and uniform spatial resolution: 40 cm GSD
· 45% wider swath: 435 meters vs. 300 meters at 500 meter flight altitude
· Visible RGB co-registered overlay at 10 cm GSD
· Enhanced visualization through 3-dimensional virtual reality (i.e., video fly-through)
Examples will be presented of the utility of these advantages, and a novel use of a cell phone camera for aerial photogrammetry will also be presented.
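The per-frame adjustment reduces to subtracting the mean DEM-minus-ATM residual, as in the short sketch below; resampling the ATM point cloud onto the DEM grid is assumed to have been done upstream.

```python
import numpy as np

def adjust_dem_to_atm(dem, atm_on_grid):
    """Force a zero mean difference between a DMS DEM and ATM lidar.

    dem         : 2-D photogrammetric elevation model for one DMS frame.
    atm_on_grid : ATM heights resampled onto the same grid, NaN where no
                  lidar point falls (the resampling step is not shown).
    """
    residual = np.nanmean(dem - atm_on_grid)   # mean offset over the frame
    return dem - residual
```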
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports the measurements performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche Solari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of quantum efficiency in the visible spectrum, image blooming and reset inefficiency in saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultra-fast, 100% fill factor visible light camera in solar physics experiments.
Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A
2015-01-01
The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used through the Renderscript API for high-performance frame-by-frame image acquisition and computation in order to obtain the PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE has been found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
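A minimal re-creation of the processing chain in plain Python, assuming the frames come from a finger placed over the rear camera: the spatial mean of the red channel gives a PPG-like signal and simple peak picking yields PP intervals. The authors' GPU/Renderscript implementation is not reproduced here.

```python
import numpy as np

def pp_intervals_from_frames(frames, fps=30.0):
    """Estimate pulse-to-pulse intervals from camera frames.

    frames : iterable of H x W x 3 uint8 arrays.
    Returns PP intervals in seconds, derived from local maxima of the
    zero-mean red-channel signal (a deliberately simple peak detector).
    """
    sig = np.array([f[:, :, 0].mean() for f in frames])
    sig -= sig.mean()
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i] > sig[i - 1] and sig[i] >= sig[i + 1] and sig[i] > 0]
    return np.diff(peaks) / fps
```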
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only the ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
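The core of the scheme is a per-pixel subtraction of consecutive frames, as in this sketch; signed arithmetic guards against unsigned underflow. Frame capture, LED synchronization, and ROI control are outside its scope.

```python
import numpy as np

def ambient_cancel(led_on_frame, led_off_frame):
    """Remove the ambient-light component by frame subtraction.

    Both frames are uint8 arrays assumed to be taken close enough in
    time that the scene has not moved (the reason the ALCC needs the
    high, ROI-enabled frame rate described above).
    """
    diff = led_on_frame.astype(np.int16) - led_off_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```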
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of a photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing PAPA (Precision Analog Photon Address) camera that was designed for visible photons, the improvement coming from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called DIAMICON (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
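Once the photodiode signals behind the mask have been thresholded into bits, recovering the photo-event coordinate is the standard Gray-to-binary decode shown below; the thresholding stage itself is assumed.

```python
def gray_to_binary(gray_bits):
    """Decode a Gray-code word (list of 0/1, MSB first) into an integer.

    In a Gray-code mask camera, each mask plane contributes one bit read
    by one detector; each binary bit is the XOR prefix of the Gray bits.
    """
    value = gray_bits[0]
    binary = [value]
    for bit in gray_bits[1:]:
        value ^= bit
        binary.append(value)
    return int("".join(map(str, binary)), 2)

# Example: Gray 1101 decodes to binary 1001, i.e. position 9.
assert gray_to_binary([1, 1, 0, 1]) == 9
```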
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents an automatic visibility retrieval from a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern, located in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal contrast to their surroundings, are used to derive the maximum visibility distance detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24000 thermal images from the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12, working in the NIR spectrum), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrievals from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and the defined target size.
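A schematic version of the retrieval: given predefined target regions at known ranges, visibility is taken as the distance of the farthest target whose edges are still detected. The gradient-based edge test and its threshold are illustrative choices, not the study's exact detector.

```python
import numpy as np

def visibility_from_edges(gray, targets, edge_thresh=20.0):
    """Farthest predefined target whose region still shows edge contrast.

    gray    : 2-D thermal image (already denoised and aligned).
    targets : list of (distance_m, (r0, r1, c0, c1)) regions, e.g.
              mountain silhouettes at known ranges (assumed format).
    """
    gx, gy = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                       # edge strength map
    seen = [dist for dist, (r0, r1, c0, c1) in targets
            if mag[r0:r1, c0:c1].mean() > edge_thresh]
    return max(seen) if seen else 0.0
```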
Oversampling in virtual visual sensors as a means to recover higher modes of vibration
NASA Astrophysics Data System (ADS)
Shariati, Ali; Schumacher, Thomas
2015-03-01
Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors, however, accessibility, limitation of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available, inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end high-frame-rate video cameras, enables recovering all three natural frequencies from a three-story lab-scale structure.
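The virtual-visual-sensor output is just an intensity time series, so natural frequencies fall out of its spectrum; a hedged sketch, with the number of peaks and the naive peak picking as illustrative choices:

```python
import numpy as np

def natural_frequencies(intensity_series, fps, n_peaks=3):
    """Dominant vibration frequencies from one VVS intensity series.

    With sufficient oversampling (fps high relative to the highest mode),
    the largest FFT magnitude peaks approximate the natural frequencies.
    """
    x = np.asarray(intensity_series, dtype=float)
    x -= x.mean()                                # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    order = np.argsort(spectrum[1:])[::-1] + 1   # skip the zero-Hz bin
    return np.sort(freqs[order[:n_peaks]])
```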
Flammability Limits of Gases Under Low Gravity Conditions
NASA Technical Reports Server (NTRS)
Strehlow, R. A.
1985-01-01
The purpose of this combustion science investigation is to determine the effect of zero, fractional, and super gravity on the flammability limits of a premixed methane air flame in a standard 51 mm diameter flammability tube and to determine, if possible, the fluid flow associated with flame passage under zero-g conditions and the density (and hence, temperature) profiles associated with the flame under conditions of incipient extinction. This is accomplished by constructing an appropriate apparatus for placement in NASA's Lewis Research Center Lear Jet facility and flying the prescribed g-trajectories while the experiment is being performed. Data is recorded photographically using the visible light of the flame. The data acquired is: (1) the shape and propagation velocity of the flame under various g-conditions for methane compositions that are inside the flammable limits, and (2) the effect of gravity on the limits. Real time accelerometer readings for the three orthogonal directions are displayed in full view of the cameras and the framing rate of the cameras is used to measure velocities.
Real-time millimeter-wave imaging radiometer for avionic synthetic vision
NASA Astrophysics Data System (ADS)
Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.
1994-07-01
ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.
View of Kotov during a session of EVA on Expedition 15
2007-06-06
ISS015-E-10933 (6 June 2007) --- Cosmonaut Oleg V. Kotov, Expedition 15 flight engineer representing Russia's Federal Space Agency, wearing a Russian Orlan spacesuit, uses a digital still camera to expose a photo of his helmet visor during a session of extravehicular activity (EVA). With the Earth in the background, International Space Station solar array panels are also visible in the reflections. Among other tasks, Kotov and cosmonaut Fyodor N. Yurchikhin (out of frame), commander representing Russia's Federal Space Agency, completed the installation of 12 more Zvezda Service Module debris panels and installed sample containers on the Pirs Docking Compartment for a Russian experiment, called Biorisk, which looks at the effect of space on microorganisms.
View of Kotov during a session of EVA on Expedition 15
2007-06-06
ISS015-E-10939 (6 June 2007) --- Cosmonaut Oleg V. Kotov, Expedition 15 flight engineer representing Russia's Federal Space Agency, wearing a Russian Orlan spacesuit, uses a digital still camera to expose a photo of his helmet visor during a session of extravehicular activity (EVA). With the Earth in the background, International Space Station solar array panels are also visible in the reflections. Among other tasks, Kotov and cosmonaut Fyodor N. Yurchikhin (out of frame), commander representing Russia's Federal Space Agency, completed the installation of 12 more Zvezda Service Module debris panels and installed sample containers on the Pirs Docking Compartment for a Russian experiment, called Biorisk, which looks at the effect of space on microorganisms.
Commander Lousma with Bubble Separation Experiment
1982-03-31
S82-28914 (26 March 1982) --- Astronaut Jack R. Lousma, STS-3 commander, spins a package of colored liquid in zero-gravity aboard the Earth-orbiting space shuttle Columbia. He was actually creating a centrifuge to conduct a test involving the separation of bubbles from the liquid (rehydrated strawberry powder, chosen for visible clarity). The gas-from-liquid experiment is a test devised by scientist-astronaut William E. Thornton. The gun-like device at the center of the left edge is a water dispenser which the astronauts use in rehydrating food packets, many of which can be seen in the background of this middeck area of the Columbia. Astronaut C. Gordon Fullerton, pilot, exposed this frame with a 35mm camera. Photo credit: NASA
Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter
NASA Technical Reports Server (NTRS)
Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko;
2015-01-01
In the CLASP sounding rocket experiment, a mirror-finished slit is placed near the focal point of the telescope. The light reflected by the mirror surface surrounding the slit is re-imaged by the slit-jaw optical system to form a secondary Lyman-alpha image. This image is used not only as a real-time image for selecting the rocket pointing direction during flight, but also as scientific data showing the spatial structure of the Lyman-alpha line intensity distribution in the solar chromosphere around the region observed by the spectropolarimeter. The slit-jaw optical system consists of a mirror unit containing two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera, forming an optical system of 1x magnification. The camera was supplied by the United States; all other components were fabricated and tested on the Japanese side. The slit-jaw optical system must be installed in a location with little clearance that is difficult to access, so the optical elements that affect the optical performance and require fine adjustment are consolidated in the mirror unit. On the other hand, for alignment of the solar sensor at the US launch site, the filter holder containing the Lyman-alpha transmission filter must be removable separately from the mirror unit. To keep the structure simple, stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce the alignment time: 1) measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm) and prepare a visible-light filter having the same optical path length at a visible wavelength (630 nm); 2) before mounting the mirror unit on the CLASP structure, place a dummy slit and camera at the prescribed positions in a test frame and complete the internal alignment adjustment; 3) mount the mirror unit on the CLASP structure, attach the visible-light filter, and adjust the position of the flight camera so that it is in focus; 4) replace the visible-light filter with the Lyman-alpha transmission filter and confirm, at the Lyman-alpha wavelength under vacuum, that the required optical performance is achieved. Currently, the steps up to 3 are complete, and it has been confirmed in visible light that the optical performance satisfies the requirements with sufficient margin. Also, by feeding sunlight through the CLASP telescope into the slit-jaw optical system, it has been confirmed that there is no vignetting within the field of view and that the stray-light rejection meets the requirement.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering the wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 um, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
Apollo 12 photography 70 mm, 16 mm, and 35 mm frame index
NASA Technical Reports Server (NTRS)
1970-01-01
For each 70-mm frame, the index presents information on: (1) the focal length of the camera, (2) the photo scale at the principal point of the frame, (3) the selenographic coordinates at the principal point of the frame, (4) the percentage of forward overlap of the frame, (5) the sun angle (medium, low, high), (6) the quality of the photography, (7) the approximate tilt (minimum and maximum) of the camera, and (8) the direction of tilt. A brief description of each frame is also included. The index to the 16-mm sequence photography includes information concerning the approximate surface coverage of the photographic sequence and a brief description of the principal features shown. A column of remarks is included to indicate: (1) if the sequence is plotted on the photographic index map and (2) the quality of the photography. The pictures taken using the lunar surface closeup stereoscopic camera (35 mm) are also described in this same index format.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion-blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz, so that motion blur can be significantly reduced in free-viewpoint, high-frame-rate video shooting of fast-moving objects while extracting the maximum performance of the actuator. We developed a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
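The timing condition behind intermittent tracking can be sketched as follows: with a mirror deflection theta(t) = A sin(2 pi f t), the shutter should open around the phase where the mirror's angular velocity matches the target's apparent angular velocity. The units, tolerance, and sinusoidal model are assumptions for illustration, not the authors' published control law.

```python
import numpy as np

def exposure_window(f_mirror, amplitude, target_velocity, phase_tol=0.05):
    """Time window where a resonant mirror cancels the target's motion.

    Mirror velocity is A*2*pi*f*cos(2*pi*f*t); the window is centered on
    the instant this equals target_velocity (same angular units per s).
    phase_tol is the half-width of the window in radians of mirror phase.
    """
    w = 2.0 * np.pi * f_mirror
    peak = amplitude * w                       # maximum mirror velocity
    if abs(target_velocity) > peak:
        raise ValueError("target too fast for this mirror amplitude")
    t_match = np.arccos(target_velocity / peak) / w
    half = phase_tol / w
    return t_match - half, t_match + half
```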
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative approach using visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To our best knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
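One plausible reading of the "optimal ray projection" step is the closed-form least-squares point nearest a bundle of camera rays, sketched below; the memo does not spell out the exact method, so this is an assumption.

```python
import numpy as np

def triangulate(origins, directions):
    """3-D point minimizing the summed squared distance to all rays.

    origins    : N x 3 array of camera centers.
    directions : N x 3 array of unit ray direction vectors.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```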
Revolutionary visible and infrared sensor detectors for the most advanced astronomical AO systems
NASA Astrophysics Data System (ADS)
Feautrier, Philippe; Gach, Jean-Luc; Guieu, Sylvain; Downing, Mark; Jorden, Paul; Rothman, Johan; de Borniol, Eric D.; Balard, Philippe; Stadler, Eric; Guillaume, Christian; Boutolleau, David; Coussement, Jérome; Kolb, Johann; Hubin, Norbert; Derelle, Sophie; Robert, Clélia; Tanchon, Julien; Trollier, Thierry; Ravex, Alain; Zins, Gérard; Kern, Pierre; Moulin, Thibaut; Rochat, Sylvain; Delpoulbé, Alain; Lebouqun, Jean-Baptiste
2014-07-01
We report in this paper decisive advances in detector development for astronomical applications that require very fast operation. Since the major success of the CCD220 and OCAM2, new detector developments have started in Europe for both visible and IR wavelengths. Funded by ESO and the FP7 Opticon European network, the NGSD CMOS device is fully dedicated to Natural and Laser Guide Star AO for the E-ELT, with strong ESO involvement. The NGSD will be an 880x840-pixel CMOS detector with a readout noise of 3 e- (goal 1 e-) at 700 Hz frame rate, providing digital outputs. A camera development based on this CMOS device, also funded by the Opticon European network, is ongoing. Another major AO wavefront-sensing detector development concerns IR detectors based on avalanche photodiode (e-APD) arrays within the RAPID project. Developed by the manufacturers SOFRADIR and CEA/LETI, the latter offers a 320x255, 8-output, 30-micron IR array, sensitive from 0.4 to 3 microns, with less than 2 e- readout noise at 1600 fps. A rectangular window can also be programmed to speed up the frame rate even further when full-frame readout is not required. The QE response, in the range of 70%, is almost flat over this wavelength range. Advanced packaging with a miniature cryostat using pulse-tube cryocoolers was developed in the frame of this programme in order to allow use of this detector in any type of environment. The characterization results of this device are presented here. Readout noise as low as 1.7 e- at 1600 fps has been measured with a 3-micron cut-off chip and a multiplication gain of 14 obtained with a limited photodiode polarization of 8 V. This device also exhibits excellent linearity, with nonlinearity lower than 1%. The pulse-tube cooling allows simple and easy cooling down to 55 K. Vibration investigations using centroiding and FFT measurements were performed, proving that the miniature pulse tube does not induce measurable vibrations in the optical bench, allowing use of this cooled device without liquid nitrogen in very demanding environmental conditions. A successful test of this device was performed on sky on the PIONIER four-telescope beam combiner on the VLTI at ESO Paranal in June 2014. First Light Imaging will commercialize a camera system also using APD infrared arrays in its proprietary wavefront-sensor camera platform. These programs are held with several partners, among them the French astronomical laboratories (LAM, OHP, IPAG), the detector manufacturers (e2v technologies, Sofradir, CEA/LETI) and other partners (ESO, ONERA, IAC, GTC, First Light Imaging). Funding sources are: Opticon FP7 from the European Commission, ESO, CNRS and Université de Provence, Sofradir, ONERA, CEA/LETI, the French FUI (DGCIS), the FOCUS Labex and OSEO.
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in aggravated security scenarios in recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. An accumulation of more humans than the number of personnel authorized to visit a security-controlled area must be effectively detected, promptly alarmed, and immediately monitored. A framework involving a novel combination of existing techniques allows immediate detection of an undesirable crowd in a region under observation. Frame differencing provides clear visibility of moving objects while highlighting those objects in each frame acquired by a real-time camera. Training a correlation pattern recognition-based filter on desired shapes, such as elliptical representations of human faces (variants of an omega shape), yields correct detections. The inherent ability of correlation pattern recognition filters to cater for angular rotations of the target object enables decisions regarding the presence of a number of persons exceeding the allowed figure in the monitored area.
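The frame-differencing front end is straightforward; a sketch with an illustrative threshold follows (the shape recognition itself, the MACH correlation filter, is not reproduced here).

```python
import numpy as np

def moving_object_mask(frame, prev_frame, threshold=25):
    """Binary mask of moving objects from two consecutive gray frames.

    Signed absolute difference followed by a fixed threshold; the
    resulting mask is what the omega-shape filter would operate on.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255
```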
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang
1994-01-01
This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.
Spectral characterisation and noise performance of Vanilla—an active pixel sensor
NASA Astrophysics Data System (ADS)
Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.
2008-06-01
This work reports on the characterisation of a new active pixel sensor, Vanilla. The Vanilla comprises 512×512 pixels, each 25 μm square. The sensor has a 12-bit digital output for full-frame mode, although it can also be read out in analogue mode, in which a fully programmable region-of-interest (ROI) readout is available. In full frame, the sensor can operate at a readout rate of more than 100 frames per second (fps), while in ROI mode the speed depends on the size, shape and number of ROIs. For example, an ROI of 6×6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed calculation of the read noise, shot noise, full-well capacity and camera gain constant of the sensor. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible region. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications, including particle physics and medical imaging.
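A standard photon-transfer-curve recipe, consistent with (though not copied from) the measurements described: pairs of flat fields at increasing illumination give the gain from the variance-versus-mean slope, and a dark pair gives the read noise.

```python
import numpy as np

def ptc_gain_and_read_noise(flat_pairs, dark_pair):
    """Estimate camera gain (e-/DN) and read noise (e-) from a PTC.

    flat_pairs : list of (img_a, img_b) frame pairs at increasing light
                 levels; differencing a pair removes fixed-pattern noise.
    dark_pair  : (dark_a, dark_b) taken with no light.
    In the shot-noise-limited region, variance[DN^2] = mean[DN] / gain.
    """
    means, variances = [], []
    for a, b in flat_pairs:
        means.append((a.mean() + b.mean()) / 2.0)
        variances.append(np.var(a.astype(float) - b.astype(float)) / 2.0)
    slope = np.polyfit(means, variances, 1)[0]     # = 1 / gain
    gain = 1.0 / slope
    da, db = dark_pair
    read_noise_dn = np.std(da.astype(float) - db.astype(float)) / np.sqrt(2)
    return gain, read_noise_dn * gain
```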
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
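Only the key-management idea comes from the patent abstract (the previous frame serves as the key source and its storage location is later overwritten); the cipher below, a SHA-256-seeded XOR keystream, is purely an illustrative stand-in for the unspecified encryption circuit.

```python
import hashlib
import numpy as np

def encrypt_frame(frame, prev_frame):
    """Encrypt a uint8 frame using the previously recorded frame as key.

    Applying the same function again with the same prev_frame decrypts,
    since XOR with a fixed keystream is an involution.
    """
    key = hashlib.sha256(prev_frame.tobytes()).digest()
    rng = np.random.default_rng(int.from_bytes(key[:8], "big"))
    keystream = rng.integers(0, 256, size=frame.size, dtype=np.uint8)
    return (frame.reshape(-1) ^ keystream).reshape(frame.shape)
```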
Automatic fog detection for public safety by using camera images
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Roth, Martin; Wauben, Wiel
2017-04-01
Fog and reduced visibility have a considerable impact on the performance of road, maritime, and aeronautical transportation networks. The impact ranges from minor delays to more serious congestion or unavailability of the infrastructure, and can even lead to damage or loss of lives. Visibility is traditionally measured manually by meteorological observers using landmarks at known distances in the vicinity of the observation site. Nowadays, distributed cameras facilitate inspection of more locations from one remote monitoring center. The main idea is, however, still to derive the visibility or presence of fog from an operator judging the scenery and the presence of landmarks. Visibility sensors are also used, but they are rather costly and require regular maintenance. Moreover, observers, and in particular sensors, give only visibility information that is representative of a limited area. Hence the current density of visibility observations is insufficient to give detailed information on the presence of fog. Cameras are increasingly deployed for surveillance and security reasons in cities and for monitoring traffic along main transportation ways. In addition to this primary use, we consider cameras as potential sensors to automatically identify low-visibility conditions. The approach that we follow is to use machine learning techniques to determine the presence of fog and/or to estimate the visibility. For that purpose, a set of features is extracted from the camera images, such as the number of edges, brightness, transmission of the image dark channel, and fractal dimension. In addition to these image features, we also consider meteorological variables such as wind speed, temperature, relative humidity, and dew point as additional features to feed the machine learning model. The results obtained with a training and evaluation set consisting of 10-minute-sampled images for two KNMI locations over a period of 1.5 years, using decision-tree methods to classify dense fog conditions (i.e., visibility below 250 meters), are promising in terms of accuracy and type I and II errors. We are currently extending the approach to images obtained with traffic-monitoring cameras along highways. This is a first step toward a solution that is closer to an operational artificial intelligence application for automatic fog-alarm signaling for public safety.
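A compact sketch of such a pipeline with scikit-learn, using illustrative feature formulas (the abstract names the feature families but not their exact definitions):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def image_features(gray):
    """Edge fraction, brightness, and a crude dark-channel proxy."""
    gx, gy = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy)
    return [(edges > 30).mean(), gray.mean(), float(gray.min())]

def train_fog_classifier(images, weather_rows, labels):
    """Fit a decision tree on combined image and weather features.

    weather_rows : per-sample [wind speed, temperature, RH, dew point].
    labels       : 1 for dense fog (visibility < 250 m), else 0.
    """
    X = [image_features(img) + list(w)
         for img, w in zip(images, weather_rows)]
    return DecisionTreeClassifier(max_depth=5).fit(X, labels)
```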
The appearance and propagation of filaments in the private flux region in Mega Amp Spherical Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, J. R.; Fishpool, G. M.; Thornton, A. J.
2015-09-15
The transport of particles via intermittent filamentary structures in the private flux region (PFR) of plasmas in the MAST tokamak has been investigated using a fast framing camera recording visible light emission from the volume of the lower divertor, as well as Langmuir probes and IR thermography monitoring particle and power fluxes to plasma-facing surfaces in the divertor. The visible camera data suggest that, in the divertor volume, fluctuations in light emission above the X-point are strongest in the scrape-off layer (SOL). Conversely, in the region below the X-point, it is found that these fluctuations are strongest in the PFR of the inner divertor leg. Detailed analysis of the appearance of these filaments in the camera data suggests that they are approximately circular, around 1–2 cm in diameter, but appear more elongated near the divertor target. The most probable toroidal quasi-mode number is between 2 and 3. These filaments eject plasma deeper into the private flux region, sometimes by the production of secondary filaments, moving at a speed of 0.5–1.0 km/s. Probe measurements at the inner divertor target suggest that the fluctuations in the particle flux to the inner target are strongest in the private flux region, and that the amplitude and distribution of these fluctuations are insensitive to the electron density of the core plasma, auxiliary heating and whether the plasma is single-null or double-null. It is found that the e-folding width of the time-average particle flux in the PFR decreases with increasing plasma current, but the fluctuations appear to be unaffected. At the outer divertor target, the fluctuations in particle and power fluxes are strongest in the SOL.
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of the two outputs of the beam splitter. A CCD driving unit was developed to drive the two CCDs separately, with the recording periods of the two CCDs sequentially switched. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A problem with this arrangement was that the incident light on each CCD was halved by the beam splitter. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
Flexible nuclear medicine camera and method of using
Dilmanian, F.A.; Packer, S.; Slatkin, D.N.
1996-12-10
A nuclear medicine camera and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera includes a flexible frame containing a window, a photographic film, and a scintillation screen, with or without a gamma-ray collimator. The frame flexes for following the contour of the examination site on the patient, with the window being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film and the radiation source inside the patient. The frame is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms. 11 figs.
Computational Studies of X-ray Framing Cameras for the National Ignition Facility
2013-06-01
The NIF is the world's most powerful laser facility and is...a phosphor screen where the output is recorded. The x-ray framing cameras have provided excellent information. As the yields at NIF have increased...experiments on the NIF. The basic operation of these cameras is shown in Fig. 1. Incident photons generate photoelectrons both in the pores of the MCP and
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
A combined microphone and camera calibration technique with application to acoustic imaging.
Legg, Mathew; Bradley, Stuart
2013-10-01
We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
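As a rough illustration of the time-of-arrival step in the calibration above, here is a minimal numpy sketch that recovers one microphone's position from known source positions in the camera frame, assuming the source-microphone distances (speed of sound times time of arrival) are already known; note the paper's method does not require a known air temperature, so this is a simplification, and all names are hypothetical.

```python
import numpy as np

def locate_microphone(sources, distances):
    """Estimate one microphone's position from >= 4 known source
    positions (in the camera frame) and the measured source-microphone
    distances. Linearizes |m - s_i|^2 = d_i^2 against the first source,
    giving an overdetermined linear system solved by least squares."""
    s0, d0 = sources[0], distances[0]
    # Each row: 2*(s_i - s0) . m = d0^2 - d_i^2 + |s_i|^2 - |s0|^2
    A = 2.0 * (sources[1:] - s0)
    b = (d0**2 - distances[1:]**2
         + np.sum(sources[1:]**2, axis=1) - np.dot(s0, s0))
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m  # microphone coordinates in the camera frame

# Hypothetical usage: four loudspeaker positions and measured distances.
sources = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
mic_true = np.array([0.3, 0.2, 0.5])
distances = np.linalg.norm(sources - mic_true, axis=1)
print(locate_microphone(sources, distances))  # ~ [0.3, 0.2, 0.5]
```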
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
The Visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile at various machine conditions. Owing to excellent alignment, the SLM beamline was able to see the first visible light while the beam was circulating the ring on its first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single bunch purity.
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch visible to all the cameras that could be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams; however, some disparity still existed between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results; a sketch of this idea follows below.
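A minimal sketch of the pilot-camera scheme described above, with a hypothetical camera API (CameraStub, metered_exposure, exposure_us and gain_db are placeholders, not the authors' code):

```python
import time

class CameraStub:
    """Placeholder for a real camera SDK handle (hypothetical API)."""
    def __init__(self, name):
        self.name = name
        self.exposure_us, self.gain_db = 8000, 0.0
    def metered_exposure(self):
        # The pilot's auto-exposure result; stubbed as constants here.
        return 6500, 1.5

def sync_exposures(pilot, followers, frame_interval_s=1/30):
    """Pilot-camera scheme: the pilot meters in auto-exposure mode and
    its exposure/gain are pushed to all followers in the gap between
    two temporal frames, keeping the array radiometrically matched."""
    exposure_us, gain_db = pilot.metered_exposure()
    for cam in followers:
        cam.exposure_us, cam.gain_db = exposure_us, gain_db
    time.sleep(frame_interval_s)  # wait for the next common trigger
```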
South Melea Planum, By The Dawn's Early Light
NASA Technical Reports Server (NTRS)
1999-01-01
MOC 'sees' by the dawn's early light! This picture was taken over the high southern polar latitudes during the first week of May 1999. The area shown is currently in southern winter darkness. Because sunlight is scattered over the horizon by aerosols--dust and ice particles--suspended in the atmosphere, sufficient light reaches regions within a few degrees of the terminator (the line dividing night and day) to be visible to the Mars Global Surveyor Mars Orbiter Camera (MOC) when the maximum exposure settings are used. This image shows a bright, wispy cloud hanging over southern Malea Planum. This cloud would not normally be visible, since it is currently in darkness. At the time this picture was taken, the sun was more than 5.7° below the northern horizon. The scene covers an area 3 kilometers (1.9 miles) wide. Again, the illumination is from the top. In this frame, the surface appears a relatively uniform gray. At the time the picture was acquired, the surface was covered with south polar wintertime frost. The highly reflective frost, in fact, may have contributed to the increased visibility of this surface. This 'twilight imaging' technique for viewing Mars can only work near the terminator; thus in early May only regions between about 67°S and 74°S were visible in twilight images in the southern hemisphere, and a similar narrow latitude range could be imaged in the northern hemisphere. MOC cannot 'see' in the total darkness of full-borne night. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Clouds Sailing Overhead on Mars, Enhanced
2017-08-09
Wispy clouds float across the Martian sky in this accelerated sequence of enhanced images from NASA's Curiosity Mars rover. The rover's Navigation Camera (Navcam) took these eight images over a span of four minutes early in the morning of the mission's 1,758th Martian day, or sol (July 17, 2017), aiming nearly straight overhead. They have been processed by first making a "flat field" adjustment for known differences in sensitivity among pixels and correcting for camera artifacts due to light reflecting within the camera, and then generating an "average" of all the frames and subtracting that average from each frame. This subtraction emphasizes any changes due to movement or lighting. The clouds are also visible, though fainter, in a raw image sequence from these same observations. On the same Martian morning, Curiosity also observed clouds near the southern horizon. The clouds resemble Earth's cirrus clouds, which are ice crystals at high altitudes. These Martian clouds are likely composed of crystals of water ice that condense onto dust grains in the cold Martian atmosphere. Cirrus wisps appear as ice crystals fall and evaporate in patterns known as "fall streaks" or "mare's tails." Such patterns have been seen before at high latitudes on Mars, for instance by the Phoenix Mars Lander in 2008, and seasonally nearer the equator, for instance by the Opportunity rover. However, Curiosity has not previously observed such clouds so clearly visible from the rover's study area about five degrees south of the equator. The Hubble Space Telescope and spacecraft orbiting Mars have observed a band of clouds to appear near the Martian equator around the time of the Martian year when the planet is farthest from the Sun. With a more elliptical orbit than Earth's, Mars experiences more annual variation than Earth in its distance from the Sun. The most distant point in an orbit around the Sun is called the aphelion. The near-equatorial Martian cloud pattern observed at that time of year is called the "aphelion cloud belt." These new images from Curiosity were taken about two months before aphelion, but the morning clouds observed may be an early stage of the aphelion cloud belt. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21841
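The enhancement pipeline described in this caption (flat-field correction followed by mean-frame subtraction) can be sketched in a few lines of numpy; this is a generic reconstruction, not the actual processing code:

```python
import numpy as np

def emphasize_changes(frames, flat_field):
    """Approximate the processing described above: flat-field each
    frame, then subtract the sequence average so that only moving or
    changing features (e.g. drifting clouds) remain visible.

    frames: (N, H, W) float array; flat_field: (H, W) gain map.
    """
    corrected = frames / flat_field       # per-pixel sensitivity fix
    mean_frame = corrected.mean(axis=0)   # static background estimate
    return corrected - mean_frame         # residual motion/lighting
```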
Dust Devils in Gusev Crater, Sol 463
NASA Technical Reports Server (NTRS)
2005-01-01
This movie clip shows several dust devils -- whirlwinds that loft dust into the air -- moving across a plain below the hillside vantage point of NASA's Mars Exploration Rover Spirit. Several of the dust devils are visible at once in some of the frames in this sequence. The local solar time was about 2 p.m., when the ground temperature was high enough to cause turbulence that kicks up dust devils as the wind blows across the plain. The number of seconds elapsed since the first frame is indicated at lower left of the images, typically 20 seconds between frames. Spirit's navigation camera took these images on the rover's 463rd martian day, or sol (April 22, 2005). Contrast has been enhanced for anything in the images that changes from frame to frame, that is, for the dust devils. Scientists had expected dust devils since before Spirit landed. The landing area inside Gusev Crater is filled with dark streaks left behind when dust devils pick dust up from an area. It is also filled with bright 'hollows,' which are dust-filled miniature craters. Dust covers most of the terrain. Winds flow into and out of Gusev crater every day. The Sun heats the surface so that the surface is warm to the touch even though the atmosphere at 2 meters (6 feet) above the surface would be chilly. That temperature contrast causes convection. Mixing the dust, winds, and convection can trigger dust devils.
Several Dust Devils in Gusev Crater, Sol 461
NASA Technical Reports Server (NTRS)
2005-01-01
This movie clip shows several dust devils -- whirlwinds that loft dust into the air -- moving across a plain below the hillside vantage point of NASA's Mars Exploration Rover Spirit. Several of the dust devils are visible at once in some of the 21 frames in this sequence. The local solar time was about 2 p.m., when the ground temperature was high enough to cause turbulence that kicks up dust devils as the wind blows across the plain. The number of seconds elapsed since the first frame is indicated at lower left of the images, typically 20 seconds between frames. Spirit's navigation camera took these images on the rover's 461st martian day, or sol (April 20, 2005). Contrast has been enhanced for anything in the images that changes from frame to frame, that is, for the dust devils. Scientists had expected dust devils since before Spirit landed. The landing area inside Gusev Crater is filled with dark streaks left behind when dust devils pick dust up from an area. It is also filled with bright 'hollows,' which are dust-filled miniature craters. Dust covers most of the terrain. Winds flow into and out of Gusev crater every day. The Sun heats the surface so that the surface is warm to the touch even though the atmosphere at 2 meters (6 feet) above the surface would be chilly. That temperature contrast causes convection. Mixing the dust, winds, and convection can trigger dust devils.
Driving techniques for high frame rate CCD camera
NASA Astrophysics Data System (ADS)
Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu
2008-03-01
This paper describes a high-frame-rate CCD camera capable of operating at 100 frames/s. The camera utilizes the Kodak KAI-0340, an interline-transfer CCD with 640 (vertical) × 480 (horizontal) pixels. Two output ports are used to read out the CCD data, with pixel rates approaching 30 MHz. Because the vertical charge-transfer registers of an interline-transfer CCD are not perfectly opaque, the device can produce undesired image artifacts, such as random white spots and smear generated in the registers. To increase the frame rate, a speed-up structure has been incorporated inside the KAI-0340, which makes it vulnerable to a vertical-stripe artifact. These phenomena can severely impair image quality, so electronic methods were adopted to eliminate the artifacts. A special clocking mode dumps the unwanted charge quickly, and a fast readout of the images, cleared of smear, follows immediately. An amplifier senses and corrects the delay mismatch between the dual-phase vertical clock pulses; the transition edges become nearly coincident, and the vertical stripes disappear. Results obtained with the CCD camera are shown.
"Night" scene of the STS-5 Columbia in orbit over the earth
1982-11-17
S82-39796 (11-16 Nov. 1982) --- A "night" scene of the STS-5 space shuttle Columbia in orbit over Earth's glowing horizon was captured by an astronaut crew member aiming a 70mm handheld camera through the aft windows of the flight deck. The aft section of the cargo bay contains two closed protective shields for satellites which were deployed on the flight. The nearest "cradle" or shield houses the Satellite Business System's (SBS-3) spacecraft and is visible in this frame, while the Telesat Canada ANIK C-3 shield is out of view. The vertical stabilizer, illuminated by the sun, is flanked by two orbital maneuvering system (OMS) pods. Photo credit: NASA
Prototype high resolution multienergy soft x-ray array for NSTX.
Tritz, K; Stutman, D; Delgado-Aparicio, L; Finkenthal, M; Kaita, R; Roquemore, L
2010-10-01
A novel diagnostic design seeks to enhance the capability of multienergy soft x-ray (SXR) detection by using an image intensifier to amplify the signals from a larger set of filtered x-ray profiles. The increased number of profiles and simplified detection system provides a compact diagnostic device for measuring T(e) in addition to contributions from density and impurities. A single-energy prototype system has been implemented on NSTX, comprised of a filtered x-ray pinhole camera, which converts the x-rays to visible light using a CsI:Tl phosphor. SXR profiles have been measured in high performance plasmas at frame rates of up to 10 kHz, and comparisons to the toroidally displaced tangential multi-energy SXR have been made.
Sequential detection of web defects
Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.
2001-01-01
A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of the exemplar frame with corresponding elements of other frames on the web until one of the acceptable or defective indications occurs. A sketch of this decision logic follows below.
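A minimal sketch of the sequential decision logic described in the patent abstract, with illustrative z-score thresholds standing in for whatever statistical test the system actually uses (inputs are assumed to be numpy arrays of pixel values):

```python
import numpy as np

def sequential_verdict(element, exemplar, other_frames,
                       z_accept=1.0, z_defect=4.0):
    """Compare an image element with the exemplar element; if the
    evidence is inconclusive, recursively test against the
    corresponding element of further frames until a verdict emerges.
    Thresholds are illustrative, not taken from the patent."""
    z = abs(np.mean(element) - np.mean(exemplar)) / (np.std(exemplar) + 1e-9)
    if z < z_accept:
        return "acceptable"          # statistically identical
    if z > z_defect:
        return "defective"           # statistically not identical
    if not other_frames:
        return "defective"           # no more evidence; flag for review
    return sequential_verdict(element, other_frames[0], other_frames[1:],
                              z_accept, z_defect)
```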
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficiently good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
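A sketch of the unpacking step implied by spectral shuttering: each color channel is a separate time sample, so N color frames become 3N monochrome frames. The pulse order, and the omission of crosstalk and registration corrections, are assumptions here:

```python
import numpy as np

def unpack_spectral_shutter(color_frames, pulse_order=("red", "green", "blue")):
    """Spectral shuttering: each color channel of a 3CCD frame was
    exposed by its own short LED pulse, so the channels are really
    consecutive time samples. Reorder them into a single high-speed
    monochrome sequence (crosstalk/registration corrections omitted).

    color_frames: (N, H, W, 3) array with channels in R, G, B order.
    """
    channel_index = {"red": 0, "green": 1, "blue": 2}
    sequence = []
    for frame in color_frames:
        for color in pulse_order:
            sequence.append(frame[..., channel_index[color]])
    return np.stack(sequence)  # (3N, H, W) time-ordered frames
```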
Students' framing of laboratory exercises using infrared cameras
NASA Astrophysics Data System (ADS)
Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.
2015-12-01
Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the predict-observe-explain approach of White and Gunstone. The activities involved central thermal concepts that focused on heat conduction and dissipative processes such as friction and collisions. Students' interactions within each activity were videotaped and the analysis focuses on how a purposefully selected group of three students engaged with the exercises. As the basis for an interpretative study, a "thick" narrative description of the students' epistemological and conceptual framing of the exercises and how they took advantage of the disciplinary affordance of IR cameras in the thermal domain is provided. Findings include that the students largely shared their conceptual framing of the four activities, but differed among themselves in their epistemological framing, for instance, in how far they found it relevant to digress from the laboratory instructions when inquiring into thermal phenomena. In conclusion, the study unveils the disciplinary affordances of infrared cameras, in the sense of their use in providing access to knowledge about macroscopic thermal science.
NASA Technical Reports Server (NTRS)
1996-01-01
This series of 10 Hubble Space Telescope images captures several small moons orbiting Saturn. Hubble snapped the five pairs of images while the Earth was just above the ring plane and the Sun below it. The telescope captured a pair of images every 97 minutes as it circled the Earth. Moving out from Saturn, the visible rings are: the broad C Ring, the Cassini Division, and the narrow F Ring.
The first pair of images shows the large, bright moon Dione, near the middle of the frames. Two smaller moons, Pandora (the brighter one closer to Saturn) and Prometheus, appear as if they're touching the F Ring. In the second frame, Mimas emerges from Saturn's shadow and appears to be chasing Prometheus. In the second image pair, Mimas has moved towards the tip of the F Ring. Rhea, another bright moon, has just emerged from behind Saturn. Prometheus, the closest moon to Saturn, has rounded the F Ring's tip and is approaching the planet. The slightly larger moon Epimetheus has appeared. The third image pair shows Epimetheus as a tiny dot just beyond the tip of the F Ring. Prometheus is in the lower right corner. An elongated clump or arc of debris in the F Ring is seen as a slight brightening on the far side of this thin ring. In the fourth image pair, Epimetheus, in the lower right corner, streaks towards Saturn. The long ring arc can be seen in both frames. The fifth image pair again captures Mimas, beyond the tip of the F Ring. The same ring arc is still visible. In addition to the satellites, a pair of stars can be seen passing behind the rings, appearing to move towards the lower left due to Saturn's motion across the sky. The images were taken Nov. 21, 1995 with the Wide Field Planetary Camera-2. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days, fewer people are required in industrial surveillance and monitoring applications such as plant control or building security, and those people must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.
MagAO: Status and on-sky performance of the Magellan adaptive optics system
NASA Astrophysics Data System (ADS)
Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Kopon, Derek; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Puglisi, Alfio; Pinna, Enrico; Briguglio, Runa; Xompero, Marco; Quirós-Pacheco, Fernando; Bailey, Vanessa; Follette, Katherine B.; Rodigas, T. J.; Wu, Ya-Lin; Arcidiacono, Carmelo; Argomedo, Javier; Busoni, Lorenzo; Hare, Tyson; Uomoto, Alan; Weinberger, Alycia
2014-07-01
MagAO is the new adaptive optics system with visible-light and infrared science cameras, located on the 6.5-m Magellan "Clay" telescope at Las Campanas Observatory, Chile. The instrument locks on natural guide stars (NGS) from 0th to 16th R-band magnitude, measures turbulence with a modulating pyramid wavefront sensor binnable from 28×28 to 7×7 subapertures, and uses a 585-actuator adaptive secondary mirror (ASM) to provide flat wavefronts to the two science cameras. MagAO is a mutated clone of the similar AO systems at the Large Binocular Telescope (LBT) at Mt. Graham, Arizona. The high-level AO loop controls up to 378 modes and operates at frame rates up to 1000 Hz. The instrument has two science cameras: VisAO operating from 0.5-1 μm and Clio2 operating from 1-5 μm. MagAO was installed in 2012 and successfully completed two commissioning runs in 2012-2013. In April 2014 we had our first science run that was open to the general Magellan community. Observers from Arizona, Carnegie, Australia, Harvard, MIT, Michigan, and Chile took observations in collaboration with the MagAO instrument team. Here we describe the MagAO instrument, describe our on-sky performance, and report our status as of summer 2014.
Spectral measurements of muzzle flash with multispectral and hyperspectral sensor
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Trzaskawka, P.; Piątkowski, T.; Polakowski, H.
2011-08-01
The paper presents some practical aspects of the measurement of muzzle flash signatures. Selected signatures of sniper shots in typical scenarios are presented. Signatures registered during all phases of the muzzle flash were analyzed. High-precision laboratory measurements were made in a special ballistic laboratory, and as a result several flash patterns were registered. Field measurements of muzzle flash were also performed. During the tests several infrared cameras were used, including measurement-class devices with high accuracy and frame rates. The registrations were made in the NWIR, SWIR and LWIR spectral bands simultaneously. An ultrafast visual camera was also used for registration in the visible spectrum. Some typical infrared shot signatures are presented. Beside the cameras, the LWIR imaging spectroradiometer HyperCam was also used during the laboratory experiments and the field tests. The signatures collected by the HyperCam device were useful for determining the spectral characteristics of the muzzle flash, whereas the analysis of thermal images registered during the tests provided data on the temperature distribution in the flash area. As a result of the measurement sessions, the signatures of several types of handguns, machine guns and sniper rifles were obtained, which will be used in the development of passive infrared systems for sniper detection.
A dual-band adaptor for infrared imaging.
McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L
2012-05-01
A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.
Digital holographic interferometry applied to the investigation of ignition process.
Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B
2017-06-12
We use the digital holographic interferometry (DHI) technique to visualize the early ignition process of a butane-air mixture flame. Because such an event occurs in a short time (a few milliseconds), a fast CCD camera is used to study it. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient, underexposed image. Therefore, direct observation of the combustion process by the CCD is limited (to about 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high-power laser in order to supply enough light to increase the capture speed, thus improving the visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a long sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion, which emits only faint visible light, and a second stage induced by variations in temperature as the flame emerges. While the latter stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly reveals it. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
Flexible nuclear medicine camera and method of using
Dilmanian, F. Avraham; Packer, Samuel; Slatkin, Daniel N.
1996-12-10
A nuclear medicine camera 10 and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera 10 includes a flexible frame 20 containing a window 22, a photographic film 24, and a scintillation screen 26, with or without a gamma-ray collimator 34. The frame 20 flexes for following the contour of the examination site on the patient, with the window 22 being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film 24 and the radiation source inside the patient. The frame 20 is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame 20 for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
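The color crosstalk correction can be illustrated with a simple linear mixing model; the paper's actual correction method and coefficients may differ, and the default values below are placeholders for what a calibration shot would provide:

```python
import numpy as np

def separate_channels(rgb, a_rb=0.05, a_br=0.04):
    """Recover the two optical-path images from one color frame.
    A linear crosstalk model is assumed: the recorded red channel
    contains a fraction a_rb of the blue-path image and vice versa;
    invert the 2x2 mixing matrix to unmix them."""
    R = rgb[..., 0].astype(float)
    B = rgb[..., 2].astype(float)
    M = np.array([[1.0, a_rb],
                  [a_br, 1.0]])          # channel mixing matrix
    Minv = np.linalg.inv(M)
    red_path = Minv[0, 0] * R + Minv[0, 1] * B
    blue_path = Minv[1, 0] * R + Minv[1, 1] * B
    return red_path, blue_path           # inputs to regular stereo-DIC
```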
NASA Astrophysics Data System (ADS)
Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari
2006-09-01
Surveillance camera automation and camera network development are growing areas of interest. This paper proposes a competent approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register comprise the exploited auxiliary information. The approach takes into account the spherical shape of the Earth and realistic terrain slopes. Accordingly, considering also forests, it determines visible and shadow regions (a line-of-sight sketch follows below). The efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background is well-fitting, and the potential of such knowledge-aiding for various purposes becomes apparent.
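A minimal sketch of the core line-of-sight test such a system needs, including the spherical-Earth drop; dem_height is a hypothetical terrain-elevation lookup, and forests and terrain classes are ignored here:

```python
import numpy as np

EARTH_R = 6_371_000.0  # mean Earth radius, m

def line_of_sight(dem_height, cam_xy, cam_h, tgt_xy, n_samples=200):
    """Return True if the target cell is visible from the camera.
    dem_height(x, y) -> terrain elevation (m); positions in metres.
    Terrain and target heights are reduced by the spherical-Earth
    drop d^2/(2R) before comparing against a straight sight line."""
    cam = np.asarray(cam_xy, float)
    tgt = np.asarray(tgt_xy, float)
    total_d = np.linalg.norm(tgt - cam)
    h0 = dem_height(*cam) + cam_h
    h1 = dem_height(*tgt) - total_d**2 / (2 * EARTH_R)
    for t in np.linspace(0.05, 0.95, n_samples):
        p = cam + t * (tgt - cam)
        d = t * total_d
        terrain = dem_height(*p) - d**2 / (2 * EARTH_R)  # curvature drop
        sight = h0 + t * (h1 - h0)                        # straight ray
        if terrain > sight:
            return False   # terrain blocks the ray: shadow region
    return True
```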
640x480 PtSi Stirling-cooled camera system
NASA Astrophysics Data System (ADS)
Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.
1992-09-01
A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves NEDT = 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad the vertical and horizontal Minimum Resolvable Temperatures are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10^20 m^-3 and temperatures up to 20 eV. Broadband spectral measurements show that the visible emission spectra are due solely to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a 450 nm long-pass filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring one-pixel registration between the two cameras. A uniform-intensity calibrated white-light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios; a sketch of this combination step follows below. Observations from Proto-MPEX discharges will be presented. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
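The abstract mentions Python scripts that combine the dual-camera data into line-ratio maps; a hedged reconstruction of that combination step might look like the following, where the calibration factors are placeholders for the white-light calibration results:

```python
import numpy as np

def balmer_ratio_maps(red, blue, gamma, cal_r=1.0, cal_b=1.0, cal_g=1.0):
    """Combine pixel-registered channel images into Balmer line-ratio
    maps: D_alpha from the color camera's red channel, D_beta from its
    blue channel, D_gamma from the filtered monochrome camera.
    cal_* stand in for the absolute intensity calibrations."""
    Ia = cal_r * red.astype(float)    # D_alpha, 656 nm
    Ib = cal_b * blue.astype(float)   # D_beta, 486 nm
    Ig = cal_g * gamma.astype(float)  # D_gamma, 434 nm
    eps = 1e-12                       # avoid division by zero
    return Ia / (Ib + eps), Ib / (Ig + eps)
```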
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period; a sketch of one regulation step follows below. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller-than-3-mm-diameter 3D stereo vision equipment in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
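One regulation step of the frequency-locking idea can be sketched as a simple proportional controller; cam is a hypothetical driver object, and the gain and voltage limits are illustrative values, not Awaiba's:

```python
def regulate_camera(cam, target_line_period_ns,
                    k_p=0.002, v_min=1.6, v_max=2.1):
    """One iteration of the frequency-locking loop: compare the
    measured line period with the Master's and trim the sensor supply
    voltage, which shifts a self-timed camera's internal clock.
    cam is a hypothetical driver; gains/limits are illustrative."""
    error_ns = cam.measure_line_period_ns() - target_line_period_ns
    # A longer line period means the oscillator runs slow: raise voltage.
    new_v = cam.supply_voltage + k_p * error_ns
    cam.supply_voltage = min(max(new_v, v_min), v_max)  # safe range
```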
Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".
Kuppuswamy, R
2006-06-02
A series of experiments was conducted by exposing negative film in brand-new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter in reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originate principally from imperfections imparted to the film mask during manufacturing, and occasionally from dirt, dust and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, so as to be a valuable identification medium. (iii) The marks are found to vary in nature even among cameras manufactured at a similar time. (iv) The f/number and object distance have a great effect on the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
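As a stand-in for the paper's normalization and registration techniques, the pose recovery can be illustrated with a standard PnP solve, assuming the imaged laser spots form a rigid pattern of known geometry (e.g., projected onto a flat reference surface); the pattern coordinates below are illustrative:

```python
import numpy as np
import cv2

# Assumed geometry of the projected laser-spot pattern in the robot
# frame (metres); values are illustrative placeholders.
PATTERN_3D = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],
                       [0.0, 0.1, 1.0], [0.1, 0.1, 1.0]], dtype=np.float32)

def robot_pose(spot_pixels, camera_matrix, dist_coeffs=None):
    """Estimate the robot's rotation and translation relative to the
    camera from the imaged laser spots (a generic PnP solve standing
    in for the paper's normalization/registration scheme)."""
    ok, rvec, tvec = cv2.solvePnP(PATTERN_3D,
                                  spot_pixels.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return R, tvec
```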
Colors and Photometry of Bright Materials on Vesta as Seen by the Dawn Framing Camera
NASA Technical Reports Server (NTRS)
Schroeder, S. E.; Li, J.-Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.;
2012-01-01
The Dawn spacecraft has been in orbit around the asteroid Vesta since July 2011. The on-board Framing Camera has acquired thousands of high-resolution images of the regolith-covered surface through one clear and seven narrow-band filters in the visible and near-IR wavelength range. It has observed bright and dark materials that have a range of reflectance that is unusually wide for an asteroid. Material brighter than average is predominantly found on crater walls and in ejecta surrounding craters in the southern hemisphere. Most likely, the brightest material identified on the Vesta surface so far is located on the inside of a crater at 64.27° S, 1.54°. The apparent brightness of a regolith is influenced by factors such as particle size, mineralogical composition, and viewing geometry. As such, the presence of bright material can indicate differences in lithology and/or degree of space weathering. We retrieve the spectral and photometric properties of various bright terrains from false-color images acquired in the High Altitude Mapping Orbit (HAMO). We find that most bright material has a deeper 1-μm pyroxene band than average. However, the aforementioned brightest material appears to have a 1-μm band that is actually less deep, a result that awaits confirmation by the on-board VIR spectrometer. This site may harbor a class of material unique to Vesta. We discuss the implications of our spectral findings for the origin of the bright materials.
Vision Based SLAM in Dynamic Scenes
2012-12-20
the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and... all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose esti...
Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael
2015-01-01
Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
View of Island of Kyushu, Japan from Skylab
1974-01-07
SL4-139-3942 (7 Jan. 1974) --- This oblique view of the Island of Kyushu, Japan, was taken from the Earth-orbiting Skylab space station on Jan. 8, 1974 during its third manning. A plume from the volcano Sakurajima (bottom center) is clearly seen as it extends about 80 kilometers (50 miles) east from the volcano. (EDITOR'S NOTE: On Jan. 10, 2013, a little over 39 years after this 1974 photo was made from the Skylab space station, Expedition 34 crew members aboard the International Space Station took a similar picture (frame no. ISS034-E-027139) featuring smoke rising from the same volcano, with much of the island of Kyushu visible. Interesting comparisons can be made between the two photos, at least as far as the devices used to record them. The Skylab image was made by one of the three Skylab 4 crew members with a hand-held camera using a 100-mm lens and 70-mm color film, whereas the station photo was taken with 180-mm lens on a digital still camera, hand-held by one of the six crew members). Photo credit: NASA
Accurate estimation of camera shot noise in the real-time
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2017-10-01
Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise contains the random component, while spatial noise contains the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and videocameras, based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method (sketched below). In this paper, we registered frames and estimated the shot and dark temporal noises of cameras consistently in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
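The two-frame idea can be sketched as follows; the real ASNT method segments a nonuniform target, whereas this stand-in simply bins pixels by signal level:

```python
import numpy as np

def temporal_noise_curve(frame_a, frame_b, n_bins=64):
    """Estimate temporal noise vs. signal from two frames of the same
    static (nonuniform) scene. The pixelwise difference cancels the
    scene content and the pattern noise, and Var(a - b) equals twice
    the temporal noise variance. A crude stand-in for ASNT."""
    a = frame_a.astype(float)
    b = frame_b.astype(float)
    signal = 0.5 * (a + b)
    diff = a - b
    bins = np.linspace(signal.min(), signal.max(), n_bins + 1)
    idx = np.digitize(signal.ravel(), bins) - 1
    levels, noise = [], []
    for k in range(n_bins):
        sel = diff.ravel()[idx == k]
        if sel.size > 100:                        # enough pixels per bin
            levels.append(0.5 * (bins[k] + bins[k + 1]))
            noise.append(np.std(sel) / np.sqrt(2))  # sigma_temporal
    # For shot-noise-dominated signals, noise ~ sqrt(level) (Poisson).
    return np.array(levels), np.array(noise)
```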
NASA Technical Reports Server (NTRS)
Baker, Donald J.; Li, Ji-An
2005-01-01
The experimental results from a stitched VaRTM carbon-epoxy composite panel tested under uniaxial compression loading are presented, along with a nonlinear finite element analysis prediction of the response. The curved panel is divided by frames and stringers into six bays, with a column of three bays along the compressive loading direction. The frames are supported at the frame ends to resist out-of-plane translation. Back-to-back strain gages are used to record the strain, and displacement transducers were used to record the out-of-plane displacements. In addition, a full-field displacement measurement technique that utilizes a camera-based stereo-vision system was used to record the displacements. The panel was loaded to 1.5 times the predicted initial buckling load (the first-bay buckling load, P_cr, from the nonlinear finite element analysis) and then was removed from the test machine for impact testing. After impacting with 20 ft-lbs of energy using a spherical impactor to produce barely visible damage, the panel was loaded in compression until failure. The buckling load of the first bay to buckle was 97% of the buckling load before impact. The stitching constrained the impact damage from growing during the loading to failure. Impact damage had very little overall effect on panel stiffness. Panel stiffness measured by the full-field displacement technique indicated a 13% loss in stiffness after impact. The panel failed at 1.64 times the first panel buckling load. The barely visible impact damage did not grow noticeably as the panel failed by global instability due to stringer-web terminations at the frame locations. The predictions from the nonlinear finite element model of the entire specimen were very effective in capturing the initial buckling and global behavior of the panel. In addition, the prediction highlighted the weakness of the panel under compression due to the stringer-web terminations. Both the test results and the nonlinear predictions serve to reinforce the severe penalty in structural integrity caused by the low-cost manufacturing technique of terminating the stringer webs, and demonstrate the importance of this type of sub-component testing and high-fidelity failure analysis in the design of a composite fuselage.
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames are combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
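The key ingredient described above, deactivating the data term where a frame is saturated, can be sketched as a per-pixel weight map; the thresholds are illustrative values for 8-bit data, not the paper's:

```python
import numpy as np

def data_term_weights(frames, low=5, high=250):
    """Per-frame, per-pixel weights for a multi-frame optical-flow
    data term: zero where a pixel is under- or over-exposed, so the
    total cost functional never relies on saturated measurements."""
    frames = np.asarray(frames)
    return ((frames > low) & (frames < high)).astype(float)

# In the total cost, each frame pair's data term would be multiplied
# by these weights before summation, leaving the regularization term
# to fill areas where every exposure is saturated.
```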
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoints on point clouds, which is a crucial step for a good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
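Once the pose is known, the final colorization step described above reduces to a pinhole projection; a minimal sketch, with occlusion handling omitted and all names hypothetical:

```python
import numpy as np

def colorize(points, image, K, R, t):
    """Assign image colors to 3D points once the camera pose (R, t)
    and intrinsics K are known from the photometric registration.
    Occlusion tests (keeping only truly visible points) are omitted."""
    cam = (R @ points.T + t.reshape(3, 1)).T      # world -> camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    h, w = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]     # sample pixel colors
    return colors, valid
```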
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles; caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow range bands of the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution using a single modified DSLR camera in conjunction with image processing techniques which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and Near Infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached (a skeleton of this loop is sketched below). Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
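The tracking loop described above can be sketched as follows, with the device I/O and the tracking method stubbed out as hypothetical callables:

```python
def track_sequence(grab_frame, advance_frame, locate,
                   first_pos, n_frames, search_r=20):
    """Skeleton of the tracking loop described above. grab_frame(),
    advance_frame() and locate(image, center, radius) stand in for
    the framegrabber, the film/tape transport and the chosen tracking
    method (e.g. centroid or correlation matching)."""
    positions = [first_pos]
    for _ in range(n_frames - 1):
        advance_frame()                 # step the transport one frame
        image = grab_frame()
        # Search only a neighborhood around the last known position.
        positions.append(locate(image, positions[-1], search_r))
    return positions                    # e.g. written to a file
```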
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
High-performance electronic image stabilisation for shift and rotation correction
NASA Astrophysics Data System (ADS)
Parker, Steve C. J.; Hickman, D. L.; Wu, F.
2014-06-01
A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a SoC to combine the high processing bandwidth of an FPGA with the signal-processing flexibility of a CPU. An image-based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame, to accurately calculate the unwanted rigid-body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin for VirtualDub. Eight different videos are used for benchmarking, simulating battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements, and can be used to host other video processing functionality such as image distortion correction, fusion and contrast enhancement.
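The core of this kind of flow-based stabilisation can be sketched with standard OpenCV building blocks: estimate the rigid (rotation plus translation) motion between consecutive frames from sparse optical-flow vectors, then apply the inverse warp. This is not the HALO™ implementation, only a generic illustration.

```python
import cv2
import numpy as np

def stabilise_pair(prev_gray, curr_gray):
    """Warp curr_gray so it aligns with prev_gray (single-pair sketch)."""
    # Several hundred flow vectors per frame, as in the paper's description
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                   qualityLevel=0.01, minDistance=10)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    # Partial affine = rotation + translation (+ uniform scale); RANSAC rejects outliers
    M, _ = cv2.estimateAffinePartial2D(good0, good1, method=cv2.RANSAC)
    h, w = curr_gray.shape
    return cv2.warpAffine(curr_gray, cv2.invertAffineTransform(M), (w, h))
```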
Simultaneous three wavelength imaging with a scanning laser ophthalmoscope.
Reinholz, F; Ashman, R A; Eikelboom, R H
1999-11-01
Various imaging properties of scanning laser ophthalmoscopes (SLOs), such as contrast or depth discrimination, are superior to those of the traditional photographic fundus camera. However, most SLOs are monochromatic, whereas photographic systems produce colour images, which inherently contain information over a broad wavelength range. An SLO system has been modified to allow simultaneous three-channel imaging. Laser light sources in the visible and infrared spectrum were concurrently launched into the system. Using different wavelength triads, digital fundus images were acquired at high frame rates. Favourable wavelength combinations were established, and high-contrast, true (red, green, blue) or false (red, green, infrared) colour images of the retina were recorded. The monochromatic frames which form the colour image exhibit improved distinctness of different retinal structures such as the nerve fibre layer, the blood vessels, and the choroid. A multi-channel SLO combines the advantageous imaging properties of a tunable, monochrome SLO with the benefits and convenience of colour ophthalmoscopy. The option to modify parameters such as wavelength, intensity, gain, beam profile, and aperture size independently for every channel gives the system a high degree of versatility. Copyright 1999 Wiley-Liss, Inc.
Lightning spectra at 100,000 fps
NASA Astrophysics Data System (ADS)
McHarg, M. G.; Harley, J.; Haaland, R. K.; Edens, H. E.; Stenbaek-Nielsen, H.
2016-12-01
A fundamental understanding of lightning can be inferred from the spectral emissions resulting from the leader and return stroke channel. We examine an event recorded at 00:58:07 on 19 July 2015 at Langmuir Laboratory. We recorded lightning spectra using a 100 line/mm grating in front of a Phantom V2010 camera with an 85 mm Nikon lens recording at 100,000 frames per second. Coarse-resolution spectra (approximately 5 nm resolution) are produced from approximately 400 nm to 800 nm for each frame. Electric field data from the Langmuir Electric Field Array for the 03:19:19 event show 10 V/m changes in the electric field associated with multiple return strokes visible in the spectral data. We used the spectral data to compare temperatures at the top, middle and bottom of the lightning channel. Lightning Mapping Array data at Langmuir for the 00:58:07 event show a complex flash extending 10 km in the East-West plane and 6 km in the North-South plane. The imagery data imply that this is a bolt-from-the-blue event.
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki
2017-01-01
Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
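A minimal sketch of the path-smoothing idea, assuming cvxpy is available: given a raw one-dimensional camera-path signal (for example, cumulative x-translation per frame), find a smoothed path that stays close to the original while minimising its temporal total variation. The paper's actual l1 formulation and parameters may differ.

```python
import numpy as np
import cvxpy as cp

def smooth_path_tv(path, lam=10.0):
    """TV-regularised smoothing of a 1D camera-path signal."""
    p = cp.Variable(len(path))
    cost = cp.sum_squares(p - path) + lam * cp.tv(p)   # fidelity + smoothness
    cp.Problem(cp.Minimize(cost)).solve()
    return p.value

# Per-frame stabilising correction = smoothed path minus raw path.
raw = np.cumsum(np.random.randn(200))                  # synthetic shaky path
correction = smooth_path_tv(raw) - raw
```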
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaffney, Kelly
Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.
Geometrical calibration television measuring systems with solid state photodetectors
NASA Astrophysics Data System (ADS)
Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.
2000-11-01
Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, and criminalistics. Measuring by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems that use expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, the inexpensive mountingless camera SK1004CP (1/3" format, costing up to $40) and the Aver2000 frame grabber were used in experiments.
Earth Observation taken during the 41G mission
2009-06-25
41G-120-056 (October 1984) --- Parts of Israel, Lebanon, Palestine, Syria and Jordan and part of the Mediterranean Sea are seen in this nearly-vertical, large format camera's view from the Earth-orbiting Space Shuttle Challenger. The Sea of Galilee is at center frame and the Dead Sea at bottom center. The frame's center coordinates are 32.5 degrees north latitude and 35.5 degrees east longitude. A Linhof camera, using 4" x 5" film, was used to expose the frame through one of the windows on Challenger's aft flight deck.
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data. Black bodies are used for these purposes. Moreover, the geometric information is important for many fields such as architecture, civil engineering and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better results for the Nec camera.
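In generic terms, the internal-orientation step can be reproduced by treating the burning-lamp grid as a calibration target for OpenCV's standard camera-calibration routine; the grid dimensions and spacing below are assumptions of this sketch, not the authors' values.

```python
import cv2
import numpy as np

# Assumed 7 x 5 lamp grid with 0.1 m spacing, lying in a plane (z = 0)
objp = np.zeros((7 * 5, 3), np.float32)
objp[:, :2] = 0.1 * np.mgrid[0:7, 0:5].T.reshape(-1, 2)

def calibrate(img_points, image_size):
    """img_points: one (35, 1, 2) float32 array of detected lamp centroids per
    thermal image; image_size: (width, height). Returns reprojection RMS,
    camera matrix and distortion coefficients (the internal orientation)."""
    obj_points = [objp] * len(img_points)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return rms, K, dist
```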
Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi
1997-04-01
(Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied as a diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 µs. The MTF reduction when putting the PLZT shutter in front of the visible video camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera. A He-Ne laser ghost image was observed in the 'ON' state. On the contrary, the ghost image was totally shut out in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as a diaphragm for visible video cameras. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
2017-12-08
Spiral galaxy NGC 3274 is a relatively faint galaxy located over 20 million light-years away in the constellation of Leo (The Lion). This NASA/ESA Hubble Space Telescope image comes courtesy of Hubble's Wide Field Camera 3 (WFC3), whose multi-color vision allows astronomers to study a wide range of targets, from nearby star formation to galaxies in the most remote regions of the cosmos. This image combines observations gathered in five different filters, bringing together ultraviolet, visible and infrared light to show off NGC 3274 in all its glory. NGC 3274 was discovered by William Herschel in 1783. The galaxy PGC 213714 is also visible at the upper right of the frame, located much farther away from Earth. Image Credit: ESA/Hubble & NASA, D. Calzetti
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision
NASA Astrophysics Data System (ADS)
Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.
1995-06-01
Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an 'ultra-compact' 220 GHz imaging system will be discussed.
Universal ICT Picosecond Camera
NASA Astrophysics Data System (ADS)
Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.
1989-06-01
The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: the time-analyzing ICT PIM-107 [1] with an S-11 cathode, and the brightness amplifier PMU-2V (gain about 10⁴) for the image shaped by the first tube. The camera is designed on the basis of the streak camera AGAT-SF3 [2], with almost the same power sources but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the direct vicinity of the photocathode, an accelerating electrode made of a fine-structure grid is located. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Beyond every diaphragm aperture, a pair of deflecting plates 8 is found, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either passed toward the screen or stopped by the diaphragm walls. In such a manner, three frames are obtained, the number corresponding to that of the diaphragm apertures. Plates 10 serve to compensate the streak sweep of the image on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen. Changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap the frames. This sort of control is independent of the frame repetition frequency and duration, and only determines frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover, and electron trajectories cross in the crossover, the frame is not decomposed into separate elements during its formation. The image is transferred onto the screen practically within the entire frame duration, increasing the aperture ratio of the tube as compared to that in Ref. 3.
A higher-speed compressive sensing camera through multi-diode design
NASA Astrophysics Data System (ADS)
Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore
2013-05-01
Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
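The measurement model behind a single-pixel (or multi-aperture) CS camera is easy to state: each photodiode reading is the inner product of the scene with one DMD pattern, and the image is recovered by sparsity-regularised optimisation. Below is a toy numpy/scipy sketch with ISTA and a DCT sparsity prior; InView's patterns and reconstruction algorithms are proprietary, so everything here is illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n = 32                                             # one 32x32 sub-aperture patch
m = n * n // 4                                     # 4x compression
A = rng.integers(0, 2, (m, n * n)).astype(float)   # binary DMD patterns

# Measurement of a scene patch `img` (n x n) would be: y = A @ img.ravel()

def reconstruct(y, n_iter=300, lam=0.1):
    """ISTA for min ||A idct(c) - y||^2 + lam ||c||_1, c = DCT coefficients."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz bound for the step size
    c = np.zeros((n, n))
    for _ in range(n_iter):
        x = idctn(c, norm="ortho")                       # current image estimate
        r = A @ x.ravel() - y                            # measurement residual
        g = dctn((A.T @ r).reshape(n, n), norm="ortho")  # gradient in DCT domain
        c -= g / L                                       # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft threshold
    return idctn(c, norm="ortho")                        # reconstructed patch
```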
NASA Astrophysics Data System (ADS)
Sizemore, H. G.; Prettyman, T. H.; De Sanctis, M. C.; Schmidt, B. E.; Hughson, K.; Chilton, H.; Castillo, J. C.; Platz, T.; Schorghofer, N.; Bland, M. T.; Sori, M.; Buczkowski, D.; Byrne, S.; Landis, M. E.; Fu, R.; Ermakov, A.; Raymond, C. A.; Schwartz, S. J.
2017-12-01
Prior to the arrival of the Dawn spacecraft at Ceres, the dwarf planet was anticipated to have a deep global cryosphere protected by a thin silicate lag. Gravity science, along with data collected by Dawn's Framing Camera (FC), Gamma Ray and Neutron Detector (GRaND), and Visible and Infrared Mapping Spectrometer (VIR-MS) during the primary mission at Ceres, has confirmed the existence of a global, silicate-rich cryosphere, and suggests the existence of deeper ice, brine, or mud layers. As such, Ceres' surface morphology has characteristics in common with both Mars and the small icy bodies of the outer solar system. We will summarize the evidence for the existence and global extent of the Cerean cryosphere. We will also discuss the range of morphological features that have been linked to subsurface ice, and highlight outstanding science questions.
Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System
NASA Astrophysics Data System (ADS)
Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.
2017-12-01
The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6-2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter-I grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the Sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis-mounted camera with 600-line-resolution NTSC video for extremely low light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extreme low-light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps towards implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.
An Acoustic Charge Transport Imager for High Definition Television
NASA Technical Reports Server (NTRS)
Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard
1999-01-01
This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid-state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element), and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of surface acoustic wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively scanned charge-coupled device (CCD) can operate at video frame rates and has 9 µm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and, at 60 frames per second, double the normal video frame rate.
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
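One frame-to-frame stereo VO step built from the components the paper evaluates (feature matching, stereo triangulation, and RANSAC-robust relative orientation) might look as follows in OpenCV; the calibration inputs K, P0, P1 and all parameter choices are assumptions of this sketch, not the authors' configuration.

```python
import cv2
import numpy as np

def vo_step(left_prev, right_prev, left_curr, K, P0, P1):
    """Estimate camera motion (rvec, tvec) between two stereo frames using
    triangulated 3D points from the previous pair and PnP + RANSAC."""
    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(left_prev, None)
    kp_r, des_r = orb.detectAndCompute(right_prev, None)
    kp_c, des_c = orb.detectAndCompute(left_curr, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Triangulate 3D points from the previous stereo pair
    m_lr = bf.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in m_lr]).T
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in m_lr]).T
    Xh = cv2.triangulatePoints(P0, P1, pts_l, pts_r)
    pts3d = {m.queryIdx: Xh[:3, i] / Xh[3, i] for i, m in enumerate(m_lr)}

    # Reuse the left-image features that are also seen in the current frame
    m_lc = [m for m in bf.match(des_l, des_c) if m.queryIdx in pts3d]
    obj = np.float32([pts3d[m.queryIdx] for m in m_lc])
    img = np.float32([kp_c[m.trainIdx].pt for m in m_lc])

    # Robust relative orientation: RANSAC rejects outliers, as in the paper
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return rvec, tvec
```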
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments, based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
Geiger-mode APD camera system for single-photon 3D LADAR imaging
NASA Astrophysics Data System (ADS)
Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir
2012-06-01
The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
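For orientation, the range bookkeeping of such a single-photon LADAR pixel is plain time-of-flight physics (generic, not the vendor's firmware): a photon time-stamp within the range gate maps to target range via r = c·t/2.

```python
C = 299_792_458.0                      # speed of light, m/s

def stamp_to_range(t_stamp_s, gate_open_s=0.0):
    """Convert a photon time-stamp (seconds after laser fire) to range."""
    return C * (t_stamp_s - gate_open_s) / 2.0

print(stamp_to_range(2.0e-6))          # 2 µs round trip -> ~300 m
```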
NASA Astrophysics Data System (ADS)
Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2017-10-01
Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame into the result that would be obtained if all the axes were perfectly aligned. For such adjustment, images of stars are used as infinitely distant objects: on perfectly aligned cameras, the images in the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows from the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Each of the three models was then used to determine the parameters of the image deformations. These parameters are used to reduce the cloud images to what they would be if measured with an ideally adjusted setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
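The distance estimate underlying the method is plain stereo geometry: after the virtual alignment, a cloud fragment's shift (disparity) between the two frames gives its distance as baseline × focal length / disparity. The numbers below are illustrative, not the instrument's actual parameters.

```python
baseline_m = 60.0        # separation of the two cameras (assumed)
focal_px = 2800.0        # focal length expressed in pixels (assumed)
disparity_px = 12.0      # measured shift of the cloud fragment

distance_m = baseline_m * focal_px / disparity_px
print(f"cloud distance ≈ {distance_m / 1000:.1f} km")   # ≈ 14.0 km
```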
Frames of Reference in the Classroom
ERIC Educational Resources Information Center
Grossman, Joshua
2012-01-01
The classic film "Frames of Reference" effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating--all with…
Development and use of an L3CCD high-cadence imaging system for Optical Astronomy
NASA Astrophysics Data System (ADS)
Sheehan, Brendan J.; Butler, Raymond F.
2008-02-01
A high-cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and it has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed and binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or internally triggered. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard 'FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PyRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and observations made.
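An illustrative reduction step in the spirit of the pipeline described above: bias-subtract and flat-field every frame of a high-cadence FITS cube with astropy. File names and the exact calibration recipe are placeholders, not the authors' pipeline.

```python
import numpy as np
from astropy.io import fits

bias = fits.getdata("master_bias.fits").astype(float)
flat = fits.getdata("master_flat.fits").astype(float)
flat /= np.median(flat)                      # normalise the flat field

with fits.open("run_00123.fits") as hdul:
    cube = hdul[0].data.astype(float)        # shape: (n_frames, ny, nx)
    reduced = (cube - bias) / flat           # calibrate every frame at once
    fits.writeto("run_00123_red.fits", reduced, hdul[0].header, overwrite=True)
```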
High-speed plasma imaging: A lightning bolt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurden, G.A.; Whiteson, D.O.
Using a gated intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with 100-µs exposure time on each consecutive frame. As a thunderstorm approached while darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f/22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, which has a series of stepped leaders propagating downwards, followed by the upward-propagating main return stroke.
Tan, Tai Ho; Williams, Arthur H.
1985-01-01
An optical fiber-coupled detector visible streak camera plasma diagnostic apparatus. Arrays of optical fiber-coupled detectors are placed on the film plane of several types of particle, x-ray and visible spectrometers or directly in the path of the emissions to be measured and the output is imaged by a visible streak camera. Time and spatial dependence of the emission from plasmas generated from a single pulse of electromagnetic radiation or from a single particle beam burst can be recorded.
Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio
2016-04-14
The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
Multiport backside-illuminated CCD imagers for high-frame-rate camera applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.
1994-05-01
Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 × 512 array and a 32-port 1024 × 1024 array. Both designs are back-illuminated, have on-chip CDS and lateral blooming control, and use a split vertical frame transfer architecture with full frame storage. The 512 × 512 device has been operated at rates over 800 frames per second. The 1024 × 1024 device has been operated at rates over 300 frames per second. The major changes incorporated in the second-generation design are a reduction in gate length in the output area to give improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants to improve the performance of blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are discussed.
Estimating the spatial position of marine mammals based on digital camera recordings
Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert
2015-01-01
Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
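The core geometry of the method, in a simplified flat-sea form: a sighting some pixels below the horizon corresponds to a dip angle below the horizontal, and the distance follows from the camera height. The paper's R functions additionally handle frame rotation, pixel-size estimation from reference points, and scenes without a visible horizon, all omitted here; the focal length below is an assumed value.

```python
import math

def sighting_distance(pixels_below_horizon, focal_px, camera_height_m=9.59):
    """Distance to an animal at the sea surface (flat-sea approximation)."""
    dip = math.atan(pixels_below_horizon / focal_px)   # angle below horizontal
    return camera_height_m / math.tan(dip)

# Illustrative focal length; yields roughly the paper's mean sighting distance.
print(f"{sighting_distance(55.0, focal_px=3400.0):.0f} m")   # ~593 m
```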
Evaluation of Eye Metrics as a Detector of Fatigue
2010-03-01
eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eye-lid movement, and eye-ball movement. The...because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the...of "fitness for duty" testing and "real-time monitoring" of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slow due to market volume and many technological barriers in detector materials, optics and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both the visible and the infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.
3-D Velocimetry of Strombolian Explosions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.
2014-12-01
Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
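The reconstruction step reduces to classic two-view triangulation once both cameras are calibrated: with a projection matrix for each camera and a bomb tracked in synchronised frames, its 3D position per frame, and hence the velocity field, follow directly. A hedged OpenCV sketch, with all inputs assumed:

```python
import cv2
import numpy as np

def bomb_trajectory(P1, P2, track1, track2, fps=500.0):
    """P1, P2: 3x4 camera projection matrices; track1, track2: (N, 2) pixel
    positions of the same bomb in each camera's synchronised frames."""
    pts1 = track1.T.astype(np.float32)              # 2 x N
    pts2 = track2.T.astype(np.float32)
    Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4 x N
    xyz = (Xh[:3] / Xh[3]).T                        # (N, 3) positions
    vel = np.gradient(xyz, 1.0 / fps, axis=0)       # (N, 3) velocity field
    return xyz, vel
```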
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed method gives satisfactory results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
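A minimal version of the whole-image (first) step, assuming paired and aligned training images: learn a CCA mapping from thermal to visible faces with scikit-learn and use it to estimate the visible image of an unseen thermal face. The patch-wise second step and all preprocessing are omitted, and the data here are random placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# X_thermal, Y_visible: (n_pairs, n_pixels) flattened, mutually aligned faces.
# Random arrays stand in for a real paired training set.
rng = np.random.default_rng(1)
X_thermal = rng.random((100, 32 * 32))
Y_visible = rng.random((100, 32 * 32))

cca = CCA(n_components=20, max_iter=1000)
cca.fit(X_thermal, Y_visible)                  # learn the cross-spectral relation
visible_estimate = cca.predict(X_thermal[:1])  # visible-spectrum reconstruction
```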
Standard design for National Ignition Facility x-ray streak and framing cameras.
Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S
2010-10-01
The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena with high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
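The per-pixel coded exposure itself has a simple forward model: within one slow camera frame, each pixel integrates only the fast sub-frames that its DMD mirror selects, which is what makes the later temporal upsampling well posed. Below is a toy numpy simulation of that measurement model; the reconstruction (e.g. the paper's median-based ordering) is omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                       # 8 fast sub-frames per camera frame
scene = rng.random((T, H, W))             # unknown fast video (ground truth)
masks = rng.integers(0, 2, (T, H, W))     # per-pixel binary DMD exposure codes

coded_frame = (masks * scene).sum(axis=0) # what the slow (e.g. 25 fps) sensor reads
# Temporal upsampling then inverts this per-pixel linear code to recover
# the T sub-frames from each coded_frame.
```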
A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens
NASA Astrophysics Data System (ADS)
Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon
2014-11-01
The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements, where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras: the PEM modulates at tens of kHz, and to capture the specific polarization information carried on the modulation frequency of the PEM, the camera needs to be at least ten times faster, whereas the typical frame rates of common cameras are only in the tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility between the high frequency of the PEM and the relatively slow frame rate of a camera. We trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens: human skin tissue that contains collagen, and a slice of mouse brain that contains bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope to both research communities and industrial applications are being tested.
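The stroboscopic trick amounts to firing the LED for a short gate at a fixed phase of every PEM cycle, so that the slow camera integrates many identical polarization states. The arithmetic is elementary; the numbers below are illustrative, not the instrument's settings.

```python
pem_freq_hz = 50e3          # PEM modulation frequency (tens of kHz, assumed)
phase_deg = 90.0            # PEM phase at which to sample the retardation
gate_fraction = 0.05        # LED on for 5% of each cycle

period_s = 1.0 / pem_freq_hz
delay_s = (phase_deg / 360.0) * period_s    # trigger delay after cycle start
gate_s = gate_fraction * period_s
print(f"delay = {delay_s * 1e6:.2f} µs, gate = {gate_s * 1e6:.2f} µs per cycle")
```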
Robust object matching for persistent tracking with heterogeneous features.
Guo, Yanlin; Hsu, Steve; Sawhney, Harpreet S; Kumar, Rakesh; Shan, Ying
2007-05-01
This paper addresses the problem of matching vehicles across multiple sightings under variations in illumination and camera poses. Since multiple observations of a vehicle are separated by large temporal and/or spatial gaps, prohibiting the use of standard frame-to-frame data association, we employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. Furthermore, since our domain is aerial video tracking, in order to deal with poor image quality and large resolution and quality variations, our approach employs robust alignment and match measures for different stages of vehicle matching. Most notably, we employ a heterogeneous collection of features such as lines, points, and regions in an integrated matching framework. Heterogeneous features are shown to be important. Line and point features provide accurate localization and are employed for robust alignment across disparate views. The challenges of change in pose, aspect, and appearance across two disparate observations are handled by combining a novel feature-based quasi-rigid alignment with flexible matching between two or more sequences. However, since lines and points are relatively sparse, they are not adequate to delineate the object and provide a comprehensive matching set that covers the complete object. Region features provide a high degree of coverage and are employed for continuous frames to provide a delineation of the vehicle region for subsequent generation of a match measure. Our approach reliably delineates objects by representing regions as robust blob features and matching multiple regions to multiple regions using the Earth Mover's Distance (EMD). Extensive experimentation under a variety of real-world scenarios and over hundreds of thousands of Confirmatory Identification (CID) trials has demonstrated about 95 percent accuracy in vehicle reacquisition with both visible and infrared (IR) imaging cameras.
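A much-simplified version of the EMD matching step, assuming the POT (Python Optimal Transport) package is available: each vehicle sighting is represented as a weighted set of blob descriptors, and the match measure is the optimal-transport cost between the two sets. The paper's blob features and weighting scheme are richer than this sketch.

```python
import numpy as np
import ot   # POT: Python Optimal Transport

def emd_match(blobs_a, w_a, blobs_b, w_b):
    """blobs_*: (n, d) arrays of blob descriptors; w_*: nonnegative weights
    summing to 1. Returns the Earth Mover's Distance between the two sets."""
    M = ot.dist(blobs_a, blobs_b)   # pairwise squared-Euclidean ground costs
    return ot.emd2(w_a, w_b, M)     # optimal-transport cost (EMD)
```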
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution for these techniques. Our objective was to quantify the expected accuracy error associated with 2-mm and 4-mm thickness Sterile-Z Patient Drape ® using the Medtronic O-Arm ® Surgical Imaging with StealthStation ® S7 ® Navigation System. Camera distance to the reference frame was also investigated as a contributor to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation ® S7 ® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three drape configurations: no drape, 2-mm drape and 4-mm drape. The O-Arm ® gantry location and StealthStation ® S7 ® camera position were maintained, and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2-mm than for the 4-mm drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2-mm drape at the 'far' camera distance. The 4-mm drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using a 4-mm film drape is minimal and clinically insignificant.
Calibration Target for Curiosity Arm Camera
2012-09-10
This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.
ERIC Educational Resources Information Center
Fortunato, John A.
2001-01-01
Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
Network-based H.264/AVC whole frame loss visibility model and frame dropping methods.
Chang, Yueh-Lun; Lin, Ting-Lan; Cosman, Pamela C
2012-08-01
We examine the visual effect of whole-frame loss with different decoders. Whole-frame losses are introduced in H.264/AVC compressed videos, which are then decoded by two different decoders with different common concealment methods: frame copy and frame interpolation. The videos are shown to human observers who respond to each glitch they spot. We found that about 39% of whole-frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel-level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.
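A minimal sketch of how such visibility scores could drive dropping at a network node; the queue data structure and field names are illustrative assumptions, not the paper's implementation:

```python
def drop_least_visible(queue, bytes_to_free):
    """queue: list of dicts {'size': int, 'visibility': float, 'is_b': bool}.
    Returns indices of B frames to drop, least visible first, until enough
    bytes are freed to relieve the congestion."""
    order = sorted((i for i, f in enumerate(queue) if f['is_b']),
                   key=lambda i: queue[i]['visibility'])
    drop, freed = [], 0
    for i in order:
        if freed >= bytes_to_free:
            break
        drop.append(i)
        freed += queue[i]['size']
    return drop

q = [{'size': 900, 'visibility': 0.05, 'is_b': True},
     {'size': 2400, 'visibility': 0.90, 'is_b': False},
     {'size': 800, 'visibility': 0.40, 'is_b': True}]
print(drop_least_visible(q, 1000))   # drops the least-visible B frames first
```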
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
The Last Meter: Blind Visual Guidance to a Target.
Manduchi, Roberto; Coughlan, James M
2014-01-01
Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.
South Malea Planum, By The Dawn's Early Light
NASA Technical Reports Server (NTRS)
1999-01-01
MOC 'sees' by the dawn's early light! This picture was taken over the high southern polar latitudes during the first week of May 1999. The area shown is currently in southern winter darkness. Because sunlight is scattered over the horizon by aerosols--dust and ice particles--suspended in the atmosphere, sufficient light reaches regions within a few degrees of the terminator (the line dividing night and day) to be visible to the Mars Global Surveyor Mars Orbiter Camera (MOC) when the maximum exposure settings are used. This picture shows a polygonally-patterned surface on southern Malea Planum. At the time the picture was taken, the sun was more than 4.5° below the northern horizon. The scene covers an area 3 kilometers (1.9 miles) wide, with the illumination from the top of the picture. In this frame, the surface appears a relatively uniform gray. At the time the picture was acquired, the surface was covered with south polar wintertime frost. The highly reflective frost, in fact, may have contributed to the increased visibility of this surface. This 'twilight imaging' technique for viewing Mars can only work near the terminator; thus in early May only regions between about 67°S and 74°S were visible in twilight images in the southern hemisphere, and a similar narrow latitude range could be imaged in the northern hemisphere. MOC cannot 'see' in the total darkness of full-borne night. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.
2005-10-01
As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of airborne surveillance vehicles used by security and defence forces with both visible-band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in the pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings against high-contrast (light) backgrounds on the roofs of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious: at the wavelengths where thermal imagers operate, either 3-5 microns or 8-12 microns, the dark and light coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application and operation of some vehicle and personnel marking materials and show airborne IR and visible imagery of the materials in use.
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
NASA Astrophysics Data System (ADS)
Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung
2018-06-01
The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed, comparable to that of its predecessor, but with a far lower probability of neutron damage to the camera.
Analysis of staged Z-pinch implosion trajectories from experiments on Zebra
NASA Astrophysics Data System (ADS)
Ross, Mike P.; Conti, F.; Darling, T. W.; Ruskov, E.; Valenzuela, J.; Wessel, F. J.; Beg, F.; Narkis, J.; Rahman, H. U.
2017-10-01
The Staged Z-pinch plasma confinement concept relies on compressing an annular liner of high-Z plasma onto a target plasma column of deuterium fuel. The interface between the liner and target is stable against the Magneto-Rayleigh-Taylor instability, which leads to effective fuel compression and makes the concept interesting as a potential fusion reactor. The liner initiates as a neutral gas puff, while the target plasma is a partially ionized (Zeff < 10 percent) column ejected from a coaxial plasma gun. The Zebra pulsed power generator (1 MA peak current, 100 ns rise time) provides the discharge that ionizes the liner and drives the Z-pinch implosion. Diverse diagnostics observe the 100-300 km/s implosions, including silicon diodes, photo-conducting detectors (PCDs), laser shadowgraphy, an XUV framing camera, and a visible streak camera. The imaging diagnostics track instabilities smaller than 0.1 mm, and Z-pinch diameters below 2.5 mm are seen at peak compression. This poster correlates the data from these diagnostics to elucidate how implosion behavior depends on liner gas, liner pressure, target pressure, and applied axial magnetic field. Funded by the Advanced Research Projects Agency - Energy, DE-AR0000569.
Large format geiger-mode avalanche photodiode LADAR camera
NASA Astrophysics Data System (ADS)
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison
2013-05-01
Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in2/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey a large area more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by a factor of ten. With a programmable bin size from 0.3 ps to 0.5 ns and a 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of the Spectrolab 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
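An illustrative timing sketch of the "frame-straddle" idea, assuming a hypothetical counter-timer setup; the trick is that the two laser pulses bracket a frame boundary, so their separation can be microseconds even though each frame lasts tens of milliseconds:

```python
# Assumed values for illustration only; not from the NASA system.
FRAME_PERIOD_S = 1.0 / 30.0     # camera frame period at 30 fps
DT_S = 20e-6                    # desired inter-pulse separation

def pulse_times(frame_boundary_s: float):
    """Pulse 1 fires at the end of frame n, pulse 2 at the start of n+1,
    so the image pair is separated by DT_S, not FRAME_PERIOD_S."""
    return frame_boundary_s - DT_S / 2, frame_boundary_s + DT_S / 2

t1, t2 = pulse_times(FRAME_PERIOD_S)   # the pair straddles the first boundary
```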
2017 Total Solar Eclipse - ISS Transit - (NHQ201708210203)
2017-08-21
2017 Total Solar Eclipse - ISS Transit - (NHQ201708210203) In this video captured at 1,500 frames per second with a high-speed camera, the International Space Station, with a crew of six onboard, is seen in silhouette as it transits the sun at roughly five miles per second during a partial solar eclipse, Monday, Aug. 21, 2017 near Banner, Wyoming. Onboard as part of Expedition 52 are: NASA astronauts Peggy Whitson, Jack Fischer, and Randy Bresnik; Russian cosmonauts Fyodor Yurchikhin and Sergey Ryazanskiy; and ESA (European Space Agency) astronaut Paolo Nespoli. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina. A partial solar eclipse was visible across the entire North American continent along with parts of South America, Africa, and Europe. Photo Credit: (NASA/Joel Kowsky)
NASA Astrophysics Data System (ADS)
Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.
2015-12-01
Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
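A hedged sketch of the two-camera 3D reconstruction step, using OpenCV's triangulation; the projection matrices and point tracks below are placeholders (with identity intrinsics the coordinates are normalized image coordinates), not field data or the authors' modified algorithm:

```python
import numpy as np
import cv2

def triangulate_track(P1, P2, pts1, pts2):
    """pts1, pts2: 2xN image coordinates of the same clast in both views;
    returns Nx3 world coordinates from the two calibrated views."""
    X = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                              pts2.astype(np.float64))
    return (X[:3] / X[3]).T          # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # 0.5 m baseline
pts1 = np.array([[0.10, 0.11], [0.20, 0.19]])   # two frames of one clast, view 1
pts2 = np.array([[0.08, 0.09], [0.20, 0.19]])   # same frames, view 2
print(triangulate_track(P1, P2, pts1, pts2))
```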
Into the blue: AO science with MagAO in the visible
NASA Astrophysics Data System (ADS)
Close, Laird M.; Males, Jared R.; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Wu, Ya-Lin; Kopon, Derek; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando
2014-08-01
We review astronomical results in the visible (λ<1μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little astronomical science done in the visible with AO until recently. The most productive visible AO system to date is our 6.5m Magellan telescope AO system (MagAO). MagAO is an advanced Adaptive Secondary system at the Magellan 6.5m in Chile. This secondary has 585 actuators with < 1 msec response times (0.7 ms typically). We use a pyramid wavefront sensor. The relatively small actuator pitch (~23 cm/subap) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). We use a CCD AO science camera called "VisAO". On-sky long exposures (60s) achieve <30mas resolutions, 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing with bright R < 8 mag stars. These relatively high visible wavelength Strehls are made possible by our powerful combination of a next generation ASM and a Pyramid WFS with 378 controlled modes and 1000 Hz loop frequency. We'll review the key steps to having good performance in the visible and review the exciting new AO visible science opportunities and refereed publications in both broad-band (r,i,z,Y) and at Halpha for exoplanets, protoplanetary disks, young stars, and emission line jets. These examples highlight the power of visible AO to probe circumstellar regions/spatial resolutions that would otherwise require much larger diameter telescopes with classical infrared AO cameras.
NASA Astrophysics Data System (ADS)
Wróżyński, Rafał; Pyszny, Krzysztof; Sojka, Mariusz; Przybyła, Czesław; Murat-Błażejewska, Sadżide
2017-06-01
The article describes how the Structure-from-Motion (SfM) method can be used to calculate the volume of anthropogenic microtopography. In the proposed workflow, data is obtained using mass-market devices such as a compact camera (Canon G9) and a smartphone (iPhone5). The volume is computed using free open source software (VisualSFM v0.5.23, CMPMVS v0.6.0, MeshLab) on a PC-class computer. The input data is acquired from video frames. To verify the method, laboratory tests on an embankment of known volume were carried out. Models of the test embankment were built using two independent measurements made with those two devices. No significant differences were found between the models in a comparative analysis, and their volumes differed from the actual volume by just 0.7‰ and 2‰. After successful laboratory verification, field measurements were carried out in the same way. While building the model from the data acquired with the smartphone, it was observed that a series of frames, approximately 14% of all the frames, was rejected. The missing frames caused the point cloud to be less dense where they had been rejected, and as a result the model's volume differed from the volume acquired with the camera by 7%. In order to improve the homogeneity, the frame extraction frequency was increased where frames had previously been missing. A uniform model was thereby obtained, with the point cloud density evenly distributed; there was then only a 1.5% difference between the embankment's volume and the volume calculated from the camera-recorded video. The presented method permits the number of input frames to be increased and the model's accuracy to be enhanced without making an additional measurement, which may not be possible in the case of temporary features.
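A sketch of the frame-extraction step with a denser sampling rate over a chosen span, mimicking the fix for the rejected-frame gap; it assumes OpenCV, and the file name, spans, and steps are illustrative:

```python
import cv2

def extract_frames(video_path, step=15, dense_span=(300, 450), dense_step=5):
    """Grab every `step`-th frame, but every `dense_step`-th frame inside
    dense_span, where the SfM pipeline previously rejected frames."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        s = dense_step if dense_span[0] <= idx <= dense_span[1] else step
        if idx % s == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```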
Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications
NASA Astrophysics Data System (ADS)
Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.
2013-05-01
The Infrared Images and Other Data Acquisition Station enables a user, who is located inside a laboratory, to acquire visible and infrared images and distances in an outdoor environment with the help of an Internet connection. This station can acquire data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
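A minimal sketch of the final matching step as described: features from the visible-light and thermal images are combined and the enrolled identity at the smallest Euclidean distance wins. The CNN feature extraction itself is assumed done elsewhere, and the fusion by concatenation is an illustrative choice:

```python
import numpy as np

def match_person(probe_vis, probe_thermal, gallery):
    """probe_vis, probe_thermal: 1D feature vectors from the two cameras;
    gallery: dict mapping person_id -> enrolled concatenated features.
    Returns the identity with the smallest Euclidean distance."""
    probe = np.concatenate([probe_vis, probe_thermal])
    return min(gallery, key=lambda pid: np.linalg.norm(gallery[pid] - probe))
```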
10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BOURD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
Variable-Interval Sequenced-Action Camera (VINSAC). Dissemination Document No. 1.
ERIC Educational Resources Information Center
Ward, Ted
The 16 millimeter (mm) Variable-Interval Sequenced-Action Camera (VINSAC) is designed for inexpensive photographic recording of effective teacher instruction and use of instructional materials for teacher education and research purposes. The camera photographs single frames at preselected time intervals (.5 second to 20 seconds) which are…
Students' Framing of Laboratory Exercises Using Infrared Cameras
ERIC Educational Resources Information Center
Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.
2015-01-01
Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark that can be read by a broad range of cell phone cameras is a difficult problem, owing to the inherently low resolution and high noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. To be readable, a low-resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented showing very low watermark visibility combined with easy readability by a typical cell phone camera.
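A hedged sketch of the chrominance-watermark idea: a low-resolution pattern is added to the Cr channel, where the eye is least sensitive. The strength and nearest-neighbour upsampling are illustrative choices, not the authors' published scheme:

```python
import numpy as np
import cv2

def embed_chroma_watermark(bgr, pattern, strength=2.0):
    """bgr: uint8 image; pattern: small float32 array in [0, 1] carrying
    the payload at low resolution, so a low-end camera can resolve it."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    upsampled = cv2.resize(pattern, (bgr.shape[1], bgr.shape[0]),
                           interpolation=cv2.INTER_NEAREST)
    ycrcb[:, :, 1] += strength * (upsampled - 0.5)   # perturb Cr only
    return cv2.cvtColor(np.clip(ycrcb, 0, 255).astype(np.uint8),
                        cv2.COLOR_YCrCb2BGR)
```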
NASA Astrophysics Data System (ADS)
Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.
2014-07-01
The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-α and to detect the Hanle effect in the line core. Due to the nature of Lyman-α polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 ± 0.5, and ≤ 1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
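The gain and read noise quoted above can be estimated with the standard photon-transfer method from two bias frames and two matched flat-field frames; the sketch below shows that generic method, not necessarily the CLASP team's exact procedure:

```python
import numpy as np

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    """Photon-transfer estimate: differencing identical frames cancels fixed
    pattern, leaving 2x the temporal variance. Returns (gain e-/DN, read e-)."""
    flat1, flat2 = flat1.astype(np.float64), flat2.astype(np.float64)
    bias1, bias2 = bias1.astype(np.float64), bias2.astype(np.float64)
    signal_dn = np.mean(0.5 * (flat1 + flat2) - 0.5 * (bias1 + bias2))
    var_flat = np.var(flat1 - flat2) / 2.0    # shot + read variance, DN^2
    var_read = np.var(bias1 - bias2) / 2.0    # read variance alone, DN^2
    gain = signal_dn / (var_flat - var_read)  # e-/DN, since shot var = S/g
    read_noise_e = gain * np.sqrt(var_read)   # rms electrons
    return gain, read_noise_e
```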
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array and align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficient length time windows.
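As a sketch of the two resolution reductions examined above, assuming OpenCV: bilinear versus zero-order (nearest-neighbour) downsampling of a 658x492 frame to 329x246 before pulse-rate extraction:

```python
import cv2

def quarter_resolution(frame, zero_order=False):
    """frame: any BGR image array; returns the 329x246 downsampled copy,
    using nearest-neighbour for zero-order or bilinear otherwise."""
    interp = cv2.INTER_NEAREST if zero_order else cv2.INTER_LINEAR
    return cv2.resize(frame, (329, 246), interpolation=interp)
```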
Optical fringe-reflection deflectometry with bundle adjustment
NASA Astrophysics Data System (ADS)
Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng
2018-06-01
Liquid crystal display (LCD) screens are located outside of the camera's field of view in fringe-reflection deflectometry, so the fringes displayed on the LCD screen are observed by a fixed camera through specular reflection. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of the fringes. Given this pose relation, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be performed in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is performed to simultaneously refine the camera intrinsic parameters, including distortion coefficients, the estimated geometrical pose between the LCD screen and camera, and the 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and that the constraint-bundle adjustment enhances the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
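A minimal sketch of the forward triangulation intersection in the camera frame: the incidence ray (toward the virtual fringe image) and the reflected camera ray are generally skew, so the least-squares closest point is taken. Inputs are illustrative; directions must be unit vectors:

```python
import numpy as np

def ray_intersection(o1, d1, o2, d2):
    """Least-squares point closest to both rays x = o + t*d; solves
    grad of sum ||(I - d d^T)(x - o)||^2 = 0 for x."""
    def perp(d):
        return np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
    A = perp(d1) + perp(d2)
    b = perp(d1) @ o1 + perp(d2) @ o2
    return np.linalg.solve(A, b)

p = ray_intersection(np.zeros(3), np.array([0.0, 0.0, 1.0]),      # camera ray
                     np.array([0.1, 0.0, 0.0]),                   # incidence ray
                     np.array([0.0, 0.0, 1.0]))
```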
Synchronization of video recording and laser pulses including background light suppression
NASA Technical Reports Server (NTRS)
Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)
2004-01-01
An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning of the laser pulse in the proper video field allows, after recording, for the viewing of the laser light image with a video monitor using the pause mode on a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides for background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames, the shutter speed is slower, allowing for normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.
1989-08-23
P-34679 Range: 2 million km (1.2 million miles). In this Voyager 2 wide-angle image, the two main rings of Neptune can be clearly seen. In the lower part of the frame, the originally-announced ring arc, consisting of three distinct features, is visible. This feature covers about 35 degrees of longitude and has yet to be radially resolved in Voyager images. From higher resolution images it is known that this region contains much more material than the diffuse belts seen elsewhere in its orbit, which seem to encircle the planet. This is consistent with the fact that ground-based observations of stellar occultations by the rings show them to be very broken and clumpy. The more sensitive, wide-angle camera is revealing more widely distributed but fainter material. Each of these rings of material lies just outside the orbit of a newly discovered moon. One of these moons, 1989N2, may be seen in the upper right corner. The moon is streaked by its orbital motion, whereas the stars in the frame are less smeared. The dark areas around the bright moon and star are artifacts of the processing required to bring out the faint rings.
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, Joseph J., Jr.; Heineck, James T.
1993-01-01
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing Schlieren system and recorded with a 1000 frame/sec color video camera. Liquid crystal responses to these changing-shear environments were then recorded with the same video system, documenting color-play response times equal to, or faster than, the time interval between sequential frames (i.e., 1 millisecond). For the flow-direction experiments, a planar test surface was exposed to equal-magnitude and known-direction surface shear stresses generated by both normal and tangential subsonic jet-impingement flows. Under shear, the sense of the angular displacement of the liquid crystal dispersed (reflected) spectrum was found to be a function of the instantaneous direction of the applied shear. This technique thus renders dynamic flow reversals or flow divergences visible over entire test surfaces at image recording rates up to 1 kHz. Extensions of the technique to visualize relatively small changes in surface shear stress direction appear feasible.
Innovative Solution to Video Enhancement
NASA Technical Reports Server (NTRS)
2001-01-01
Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
2014-06-20
ISS040-E-016422 (20 June 2014) --- One of the Expedition 40 crew members aboard the International Space Station used a 28mm focal length to record this long stretch of California's Pacific Coast on June 20, 2014. Guadalupe Island and the surrounding von Karman cloud vortices over the Pacific can be seen just above frame center. San Diego is visible in upper left and the Los Angeles Basin is just to the left of center frame. Much of the Mojave Desert is visible in bottom frame.
A New Hyperspectral Designed for Small UAS Tested in Real World Applications
NASA Astrophysics Data System (ADS)
Marcucci, E.; Saiet, E., II; Hatfield, M. C.
2014-12-01
The ability to investigate landscape and vegetation from airborne instruments offers many advantages, including high resolution data, the ability to deploy instruments over a specific area, and repeat measurements. The Alaska Center for Unmanned Aircraft Systems Integration (ACUASI) has recently integrated a hyperspectral imaging camera onto their Ptarmigan hexacopter. The Rikola Hyperspectral Camera manufactured by VTT and Rikola, Ltd. is capable of obtaining data within the 400-950 nm range with an accuracy of ~1 nm. Using the compact flash on the UAV limits the maximum number of channels to 24 this summer. The camera uses a single frame to sequentially record the spectral bands of interest in a 37° field of view. Because the camera collects data as single frames, it takes a finite amount of time to compile the complete spectral sequence; although each frame takes only 5 nanoseconds, co-registration of frames is still required. The hovering ability of the hexacopter helps eliminate frame shift, and GPS records data for incorporation into a larger dataset. Conservatively, the Ptarmigan can fly at an altitude of 400 feet, for 15 minutes, and 7000 feet away from the operator. The airborne hyperspectral instrument will be extremely useful to scientists as a platform that can provide data on request. Since the spectral range of the camera is ideal for the study of vegetation, this study 1) examines seasonal changes of vegetation in the Fairbanks area, 2) ground-truths satellite measurements, and 3) ties vegetation conditions around a weather tower to the tower readings. Through this proof of concept, ACUASI provides a means for scientists to request the most up-to-date and location-specific data for their field sites. Additionally, the resolution of the airborne instruments is much higher than that of satellite data, they may be readily tasked, and they have the advantage over manned flights in terms of manpower and cost.
A simple demonstration when studying the equivalence principle
NASA Astrophysics Data System (ADS)
Mayer, Valery; Varaksina, Ekaterina
2016-06-01
The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
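As a one-line check of the physics behind this demonstration (standard projectile kinematics, not taken from the paper): in the ground frame the ball and the freely falling camera obey

$$\mathbf{r}_{\mathrm{ball}}(t) = \mathbf{r}_0 + \mathbf{v}_0 t + \tfrac{1}{2}\mathbf{g}t^2, \qquad \mathbf{r}_{\mathrm{cam}}(t) = \mathbf{r}_c + \tfrac{1}{2}\mathbf{g}t^2,$$

so their difference, $\mathbf{r}_{\mathrm{ball}}(t) - \mathbf{r}_{\mathrm{cam}}(t) = (\mathbf{r}_0 - \mathbf{r}_c) + \mathbf{v}_0 t$, is linear in time: relative to the falling camera the ball moves uniformly along a straight line, which is exactly what the second set of stroboscopic photographs shows.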
Fast, deep record length, time-resolved visible spectroscopy of plasmas using fiber grids
NASA Astrophysics Data System (ADS)
Brockington, Samuel; Case, Andrew; Cruz, Edward; Witherspoon, F. Douglas; Horton, Robert; Klauser, Ruth; Hwang, D. Q.
2016-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high time resolution spectroscopy on visible emission from plasma events. New solid-state Silicon Photo-Multiplier (SiPM) chips are capable of single photon event detection and high speed data acquisition. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified SiPMs, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. Target pixel performance is 10 Megaframes/sec with record lengths of up to 256,000 frames, yielding 25.6 milliseconds of record at 10 Megasamples/sec resolution. Pixel resolutions of 8 to 12 bits are possible. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. A prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX) as a full demonstration of the concept. Experimental results will be discussed, along with future plans for the Phase 2 project, and potential applications to plasma experiments. Work supported by USDOE SBIR Grant DE-SC0013801.
NASA Astrophysics Data System (ADS)
Mens, Alain; Alozy, Eric; Aubert, Damien; Benier, Jacky; Bourgade, Jean-Luc; Boutin, Jean-Yves; Brunel, Patrick; Charles, Gilbert; Chollet, Clement; Desbat, Laurent; Gontier, Dominique; Jacquet, Henri-Patrick; Jasmin, Serge; Le Breton, Jean-Pierre; Marchet, Bruno; Masclet-Gobin, Isabelle; Mercier, Patrick; Millier, Philippe; Missault, Carole; Negre, Jean-Paul; Paul, Serge; Rosol, Rodolphe; Sommerlinck, Thierry; Veaux, Jacqueline; Veron, Laurent; Vincent de Araujo, Manuel; Jaanimagi, Paul; Pien, Greg
2003-07-01
This paper gives an overview of work undertaken at CEA/DIF in high speed cinematography, optoelectronic imaging and ultrafast photonics for the needs of the CEA/DAM experimental programs. We have developed a new multichannel velocimeter and a new probe for shock breakout timing measurements in detonics experiments; a brief description and a summary of their main performances are given. We have implemented three new optoelectronic imaging systems in order to observe dynamic scenes in the ranges of 50 - 100 keV and 4 MeV. These systems are described, and their main specifications and performances are given. Then we describe our contribution to the ICF program: after recalling the specifications of LIL plasma diagnostics, we describe the features and performances of visible streak tubes, X-ray streak tubes, visible and X-ray framing cameras and the associated systems developed to match these specifications. Finally we introduce the subject of component and system vulnerability in the LMJ target area, the principles identified to mitigate this problem and the first results of studies (image relay, response of streak tube phosphors, MCP image intensifiers and CCDs to fusion neutrons) related to this subject. Results obtained so far are presented.
USDA-ARS?s Scientific Manuscript database
This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...
Image system for three-dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
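A sketch of the depth-recovery geometry underlying such structured-light systems: each projected vertical light plane, known from pre-calibration, intersects the camera ray through a detected stripe pixel. The vectors below are illustrative, not from the patent:

```python
import numpy as np

def ray_plane_point(ray_dir, plane_n, plane_d):
    """Camera at origin; returns the 3D point t*ray_dir lying on the
    calibrated light plane {x : plane_n . x = plane_d}."""
    t = plane_d / float(np.asarray(plane_n) @ np.asarray(ray_dir))
    return t * np.asarray(ray_dir, dtype=float)

p = ray_plane_point(np.array([0.1, 0.0, 1.0]),   # ray through a stripe pixel
                    np.array([1.0, 0.0, 0.0]),   # a vertical projected plane
                    0.2)                         # plane offset from origin
print(p)                                         # -> [0.2, 0.0, 2.0]
```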
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when the capture is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modeled well by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.
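A hedged sketch of fisheye calibration from Zhang-style planar checkerboard views using OpenCV's fisheye model (the paper says only that a dedicated fisheye method is used, so this stands in for it); obj_points and img_points are assumed collected beforehand, with per-view shapes (1, N, 3) and (1, N, 2) as the cv2.fisheye API expects:

```python
import numpy as np
import cv2

def calibrate_fisheye(obj_points, img_points, image_size):
    """Returns the RMS reprojection error, intrinsics K, and the four
    fisheye distortion coefficients D."""
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC |
             cv2.fisheye.CALIB_FIX_SKEW)
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        obj_points, img_points, image_size, K, D, flags=flags)
    return rms, K, D
```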
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.
System selects framing rate for spectrograph camera
NASA Technical Reports Server (NTRS)
1965-01-01
A circuit reflects zero-order light from the incoming radiation of a spectrograph monitor to a photomultiplier, providing an error signal which controls the advancing and driving rate of the film through the camera.
Comet Wild 2 Up Close and Personal
NASA Technical Reports Server (NTRS)
2004-01-01
On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced 'Vilt-2'). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure: the frame on the left depicts the comet as the human eye would see it, while the frame on the right depicts the same image 'stretched' so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.
Data rate enhancement of optical camera communications by compensating inter-frame gaps
NASA Astrophysics Data System (ADS)
Nguyen, Duy Thong; Park, Youngil
2017-07-01
Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of the OCC system, it is still much lower than that of the photodiode-based LiFi system. One major reason of this low data rate is attributed to the inter-frame gap (IFG) of image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently by an interleaved Hamming coding scheme. The proposed scheme is implemented and the performance is measured.
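An illustrative Hamming(7,4) encoder with block interleaving across transmitted symbols, as one way to make data survive a lost inter-frame gap (each gap then costs every codeword at most one correctable bit); the interleaving depth and layout are assumptions, not necessarily the authors' exact scheme:

```python
import numpy as np

# Systematic Hamming(7,4) generator: G = [I4 | P] with the usual parities
# p1 = d1+d2+d4, p2 = d1+d3+d4, p3 = d2+d3+d4 (mod 2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def hamming74_encode(nibble):
    """nibble: four data bits -> seven-bit codeword."""
    return (np.array(nibble, dtype=np.uint8) @ G) % 2

def interleave(codewords, depth):
    """Write `depth` codewords row-wise, transmit column-wise: a burst of
    up to `depth` lost bits hits each codeword at most once."""
    block = np.array(codewords[:depth])
    return block.T.flatten()

cw = [hamming74_encode([1, 0, 1, 1]), hamming74_encode([0, 1, 0, 1]),
      hamming74_encode([1, 1, 1, 0])]
print(interleave(cw, depth=3))
```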
An Automatic Portable Telecine Camera.
1978-08-01
five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
3-d Modeling of Comet Borrelly's Nucleus
NASA Astrophysics Data System (ADS)
Giese, B.; Oberst, J.; Howington-Kraus, E.; Kirk, R.; Soderblom, L.; DS1 Science Team
During the DS1 encounter with comet Borrelly, the onboard camera MICAS (Miniature Integrated Camera and Spectrometer) acquired a series of images with spectacular detail [1]. Two of the highest resolution frames (58 m/pxl, 47 m/pxl) formed an effective stereo pair (8 deg convergence angle), on the basis of which teams at DLR and the USGS derived topographic models. Though different approaches were used in the analysis, the results are in remarkable agreement. The horizontal resolution of the stereo models is approx. 500 m, and their vertical precision is expected to be in the range of 100-150 m, but perhaps three times worse in places with low surface texture. The visible area of the elongated nucleus (long axis approx. 8 km, short axis approx. 4 km) is characterized by a dichotomy. The "upper" end (toward the top of the image, as conventionally displayed) is gently tilted relative to the reference image plane and shows slopes of up to 40 deg towards the limb. The other end is smaller and canted relative to the "upper" end by approx. 35 deg in the direction towards the camera. Slopes towards the limb appear to be as high as 70 deg. The presence of faults and fractures near the boundary between the two ends additionally supports the view of a dichotomy. Perhaps the nucleus is a contact binary, which formed by a collisional event. [1] Soderblom et al. (2002), submitted to Science.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12-bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high-frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
Proposed patient motion monitoring system using feature point tracking with a web camera.
Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi
2017-12-01
Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of each marker was used by the program to determine the marker positions in all subsequent frames. The software generates a text file containing the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. We propose this patient motion monitoring system, which requires only a web camera and is simple and convenient to set up, to increase the safety of treatment delivery.
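OpenCV, which the abstract cites, exposes pyramidal Lucas-Kanade tracking directly. The following is a minimal illustrative sketch of such a tracking loop, not the authors' software; the motion threshold and drawing logic are hypothetical:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                   # web camera, up to 30 fps
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)
assert pts is not None, "no trackable features found"
MOTION_THRESHOLD = 2.0                      # pixels; hypothetical alarm level

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow between two consecutive frames
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    motion = np.linalg.norm(new_pts - pts, axis=2).max()
    color = (0, 0, 255) if motion > MOTION_THRESHOLD else (255, 0, 0)
    for p in new_pts[status.flatten() == 1]:
        cv2.circle(frame, tuple(p.ravel().astype(int)), 4, color, -1)
    cv2.imshow("monitor", frame)
    if cv2.waitKey(1) == 27:                # Esc quits
        break
    prev_gray, pts = gray, new_pts
```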
Puget Sound, Seattle, WA, USA, Vancouver, British Columbia, Canada
1992-09-20
STS047-151-488 (12 - 20 Sept 1992) --- In this large format camera image, the forested Cascade Range appears along the left side; the Pacific Ocean, on the right. The frame was photographed as the Space Shuttle Endeavour flew north to south over Vancouver and Seattle. Many peaks in the Cascades reach altitudes greater than 9,000 feet and remain snowcapped even in mid-summer. The Strait of Juan de Fuca separates the Olympic Peninsula (top right) from Vancouver Island (bottom right). Snowcapped Mt. Olympus (7,965 feet) is one of the wettest places in the continental United States, with rainfall in excess of 120 inches per year. The port cities of Seattle and Tacoma occupy the heavily indented coastline of Puget Sound (top center). They appear as light-colored areas on the left side of the Sound. The angular street pattern of Tacoma is visible at the top of the picture. The international boundary between Canada and the United States of America runs across the middle of the view. The city of Victoria (center) is the light patch on the tip of Vancouver Island. Canada's Fraser River Delta provides flat topography on which the cities of Vancouver, Burnaby, and New Westminster were built. These cities appear as the light-colored area just left of center. The Fraser River can be seen snaking its way out of the mountains at the apex of the delta. Numerous ski resorts dot the slopes of the mountains (bottom left) that rise immediately to the north of Vancouver. In the same area the blue water of Harrison and other, smaller lakes fills some of the valleys that were excavated by glaciers in the "recent" geological past, according to NASA scientists studying the photography. A Linhof camera was used to expose the frame.
NASA Technical Reports Server (NTRS)
Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike
2014-01-01
The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows for a relatively high (30%) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0 ± 0.5, and ≤1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
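Camera gain and read noise figures like these are commonly derived with the mean-variance (photon transfer) method. The abstract does not state the authors' procedure, so the sketch below is a generic version of that method using synthetic flat-field pairs:

```python
import numpy as np

def mean_variance_gain(frame_pairs):
    """Photon-transfer estimate: fit variance (DN^2) against mean signal (DN).
    Shot noise gives variance = mean/gain, so gain = 1/slope in e-/DN."""
    means, variances = [], []
    for f1, f2 in frame_pairs:
        means.append(0.5 * (f1.mean() + f2.mean()))
        variances.append(np.var(f1 - f2) / 2.0)   # pair difference removes FPN
    slope, intercept = np.polyfit(means, variances, 1)
    gain = 1.0 / slope                            # e-/DN
    read_noise = gain * np.sqrt(max(intercept, 0.0))  # e-, zero-signal noise
    return gain, read_noise

# Synthetic demo: shot-noise-limited frames with a true gain of 2.0 e-/DN
rng = np.random.default_rng(0)
pairs = [(rng.poisson(lam, (256, 256)) / 2.0,
          rng.poisson(lam, (256, 256)) / 2.0)
         for lam in (500, 1000, 2000, 4000)]
print(mean_variance_gain(pairs))   # gain comes out close to 2.0
```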
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
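Digital row-wise binning as described is a simple reshape-and-sum, and the paper's precision metric is the standard deviation of single-shot velocities. A sketch with made-up array shapes:

```python
import numpy as np

def bin_rows(image, factor=8):
    """Sum groups of `factor` adjacent rows: a digital analogue of
    on-sensor binning, applied in post-processing."""
    rows = (image.shape[0] // factor) * factor
    return image[:rows].reshape(-1, factor, image.shape[1]).sum(axis=1)

def velocity_precision(velocities):
    """Precision as defined in the paper: standard deviation of a set of
    single-shot velocity measurements."""
    return np.std(velocities, ddof=1)

# Hypothetical stack of single-shot FLEET frames: (shots, rows, cols)
shots = np.random.default_rng(1).poisson(5.0, (300, 256, 256)).astype(float)
binned = np.stack([bin_rows(s) for s in shots])
```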
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among the various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
Dense Region of Impact Craters
2011-09-23
NASA's Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14, 2011. This image was taken through the camera's clear filter. The image has a resolution of about 260 meters per pixel.
Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy
NASA Astrophysics Data System (ADS)
Hewat, A. W.
We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high-resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel "Kodak" KAI-4022 CCD as used in the high-performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.
1986-01-25
P-29506BW Range: 1.12 million kilometers (690,000 miles) This high-resolution image of the epsilon ring of Uranus is a clear-filter picture from Voyager's narrow-angle camera and has a resolution of about 10 km (6 mi). The epsilon ring, approx. 100 km (60 mi) wide at this location, clearly shows a structural variation. Visible here are a broad, bright outer component about 40 km (25 mi) wide; a darker, middle region of comparable width; and a narrow, bright inner strip about 15 km (9 mi) wide. The epsilon-ring structure seen by Voyager is similar to that observed from the ground with stellar-occultation techniques. This frame represents the first Voyager image that resolves these features within the epsilon ring. The occasional fuzzy splotches on the outer and inner parts of the ring are artifacts left by the removal of reseau marks (used for making measurements on the image).
Linnehan during Expedition 16/STS-123 EVA 3
2008-03-18
ISS016-E-033024 (17/18 March 2008) --- Astronaut Rick Linnehan, STS-123 mission specialist, uses a digital camera to expose a photo of his helmet visor during the mission's third scheduled session of extravehicular activity (EVA) as construction and maintenance continue on the International Space Station. Also visible in the reflections in the visor are various components of the station, the docked Space Shuttle Endeavour and a blue and white portion of Earth. During the 6-hour, 53-minute spacewalk, Linnehan and astronaut Robert L. Behnken (out of frame), mission specialist, installed a spare-parts platform and tool-handling assembly for Dextre, also known as the Special Purpose Dextrous Manipulator (SPDM). Among other tasks, they also checked out and calibrated Dextre's end effector and attached critical spare parts to an external stowage platform. The new robotic system is scheduled to be activated on a power and data grapple fixture located on the Destiny laboratory on flight day nine.
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We performed an experiment on achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully demonstrated. Moreover, we fabricated a demonstration set simulating the use of a night-vision camera in an automobile and showed the effect of the WFC system.
A passive terahertz video camera based on lumped element kinetic inductance detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon
We have developed a passive 350 GHz (850 μm) video camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.
Robust Behavior Recognition in Intelligent Surveillance Environments.
Batchuluun, Ganbayar; Kim, Yeong Gon; Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2016-06-30
Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by a visible light camera during the night. Therefore, near infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered as alternatives for nighttime use. Because the system must work in both daytime and nighttime, and because NIR cameras require an additional NIR illuminator (which would have to illuminate a wide area over a great distance) at night, a dual system of visible light and thermal cameras is used in our research, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, thereby confirming the ability of our method to outperform previous methods.
Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy system
NASA Astrophysics Data System (ADS)
Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.
2012-10-01
A beam emission spectroscopy (BES) system based on a direct imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: one is an APD camera for the BES measurement and the other is a fast visible camera for position calibration. Two pneumatically actuated mirrors were positioned at the front and rear of the lens optics. The front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and the visible camera. All systems worked properly, and the measured photon flux was reasonable, as expected from simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of KSTAR plasma, suggesting future research with the fully installed BES system. The analysis results and the development plan are presented in this paper.
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Lee, Yeon Jin; Dille, Michael
2016-01-01
Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products - while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board the ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc. The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time- and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, for instance for timely use in disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements, including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers), reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side- and downward-facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z positions affixed to the interior of the ISS to determine the camera pose corresponding to each image frame.
This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - are transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual image frame. This new capability represents a significant advance in geolocation over the manual feature-matching approach for both nadir- and off-nadir-viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data are added to the Gateway to Astronaut Photography of Earth online database in both GeoTIFF and Keyhole Markup Language (KML) formats. The integration of the GeoRef software component of GeoCam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in the final prototyping phase and is on schedule for launch to the ISS in late 2016. Installation and routine use of the GeoCam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.
False-Color Image of an Impact Crater on Vesta
2011-08-24
NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.
Rapid orthophoto development system.
DOT National Transportation Integrated Search
2013-06-01
The DMC system procured in the project represented state-of-the-art, large-format digital aerial camera systems at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...
Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing
NASA Technical Reports Server (NTRS)
Crooke, Julie A.
2003-01-01
The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible-wavelength laser is not acceptable to even consider attempting for a Class IV laser, and is not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera are power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
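The "calculated amount" of neutral density follows from the filter definition: a filter of density d transmits a fraction 10^(-d) of the incident power. A back-of-the-envelope sketch with hypothetical power levels:

```python
import math

def required_nd(beam_power_mw, safe_power_mw):
    """Optical density d such that transmitted power <= the safe level;
    a filter of density d transmits a fraction 10**(-d)."""
    return math.log10(beam_power_mw / safe_power_mw)

# Hypothetical: attenuate a 1 mW Class II visible beam down to 1 microwatt
print(required_nd(1.0, 0.001))   # -> 3.0, i.e., filters totalling ND 3
```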
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are presented. We also experimentally show that a rolling shutter effect (RSE) based VLC system cannot work over long transmission distances, whereas an under-sampled modulation (USM) based VLC system is a good choice.
Perez-Mendez, V.
1997-01-21
A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the p-type upper layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, an inertial measurement unit, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image to predict a marker's location from the visible light camera sensor on the drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block-matching algorithm is used to process motion estimation. Experiments demonstrate that using the block-matching algorithm can reduce motion estimation time by 30%. PMID:26950127
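Block-matching motion estimation of the kind mentioned searches a small window for the displacement minimizing the sum of absolute differences (SAD). An illustrative exhaustive-search sketch, with hypothetical block and search sizes:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """For each block of `cur`, find the displacement into `ref` (within
    +/- `search` pixels) minimizing the sum of absolute differences."""
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            blk = cur[y:y + block, x:x + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(ref[yy:yy + block, xx:xx + block] - blk).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            motion[by, bx] = best_dv
    return motion
```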
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
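The synchronized multithreaded structure described (capture, track, render as separate workers) maps naturally onto queue-connected threads. A stripped-down sketch with the processing stages stubbed out:

```python
import queue
import threading

frames = queue.Queue(maxsize=2)    # small queues keep latency bounded
results = queue.Queue(maxsize=2)

def capture():
    """Stage 1: grab video frames (stubbed as integers here)."""
    for i in range(100):
        frames.put(i)
    frames.put(None)               # sentinel marks end of stream

def track():
    """Stage 2: frame-to-frame feature tracking (stub)."""
    while (frame := frames.get()) is not None:
        results.put(("tracked", frame))
    results.put(None)

def render():
    """Stage 3: composite virtual objects onto each frame (stub)."""
    while (item := results.get()) is not None:
        pass

workers = [threading.Thread(target=f) for f in (capture, track, render)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```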
Comet Wild 2 Up Close and Personal
2004-01-02
On January 2, 2004, NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced "Vilt-2"). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but "stretched" so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter. http://photojournal.jpl.nasa.gov/catalog/PIA05571
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera ... sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4 x 5.4 mrad ... each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted
Big Crater as Viewed by Pathfinder Lander
NASA Technical Reports Server (NTRS)
1997-01-01
The 'Big Crater' is actually a relatively small Martian crater to the southeast of the Mars Pathfinder landing site. It is 1500 meters (4900 feet) in diameter, or about the same size as Meteor Crater in Arizona. Superimposed on the rim of Big Crater (the central part of the rim as seen here) is a smaller crater nicknamed 'Rimshot Crater.' The distance to this smaller crater, and the nearest portion of the rim of Big Crater, is 2200 meters (7200 feet). To the right of Big Crater, south from the spacecraft, almost lost in the atmospheric dust 'haze,' is the large streamlined mountain nicknamed 'Far Knob.' This mountain is over 450 meters (1480 feet) tall, and is over 30 kilometers (19 miles) from the spacecraft. Another, smaller and closer knob, nicknamed 'Southeast Knob' can be seen as a triangular peak to the left of the flanks of the Big Crater rim. This knob is 21 kilometers (13 miles) southeast from the spacecraft.
The larger features visible in this scene - Big Crater, Far Knob, and Southeast Knob - were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The scene includes rocky ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of South Twin Peak. The largest rock in the near field, just left of center in the foreground, nicknamed 'Otter', is about 1.5 meters (4.9 feet) long and 10 meters (33 feet) from the spacecraft. This view of Big Crater was produced by combining 6 individual 'Superpan' scenes from the left and right eyes of the IMP camera. Each frame consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The IMP was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.
Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography
NASA Astrophysics Data System (ADS)
Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy
2014-09-01
A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm²) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record up to ~1000-frame radiographic movies, typically of 3-to-30 minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.
A state observer for using a slow camera as a sensor for fast control applications
NASA Astrophysics Data System (ADS)
Gahleitner, Reinhard; Schagerl, Martin
2013-03-01
This contribution concerns a problem that often arises in vision-based control, when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, and this demands the use of a camera with a high frame rate of up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and to make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
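A standard realization of such an observer is a predict-correct loop: propagate a plant model at the fast control rate and apply a measurement update only on samples where a camera frame arrives. A minimal linear sketch with hypothetical plant matrices, observer gain, and a stubbed camera measurement:

```python
import numpy as np

Ts = 0.001                                # control loop sample time: 1 kHz
A = np.array([[1.0, Ts], [0.0, 1.0]])     # hypothetical double-integrator plant
B = np.array([[0.5 * Ts**2], [Ts]])
C = np.array([[1.0, 0.0]])                # camera measures position only
L = np.array([[0.4], [4.0]])              # hypothetical observer gain

def measure_from_camera(k):
    """Stand-in for the image-processing result (e.g., marker position)."""
    return np.array([[0.0]])

x_hat = np.zeros((2, 1))                  # estimated state at the control rate
camera_period = 33                        # ~30 fps camera vs. 1 kHz control loop

for k in range(1000):
    u = np.array([[0.0]])                 # control input from the fast controller
    x_hat = A @ x_hat + B @ u             # predict at every control sample
    if k % camera_period == 0:            # correct only when a frame arrives
        y = measure_from_camera(k)
        x_hat = x_hat + L @ (y - C @ x_hat)
```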
MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110
2002-04-08
STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
Evaluation of sequential images for photogrammetric point determination
NASA Astrophysics Data System (ADS)
Kowalczyk, M.
2011-12-01
Close-range photogrammetry encounters many problems in the reconstruction of an object's three-dimensional shape. The relative orientation parameters of the photos usually play the key role in solving this problem. Automation of the process is hard to achieve because of the complexity of the recorded scene and the configuration of camera positions, which usually make it impossible to join the photos into one set automatically. The application of a camcorder is a solution widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions: the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames showing a three-dimensional test field. This section describes the calibration repeatability of film frames taken with a camcorder, which matters for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken for the determination of a group of check points, as a control of the camera's applicability to measurement tasks. Finally, some results are presented from experiments comparing the determination of recorded object points in 3D space. In common digital photogrammetry, where separate photos are used, the first levels of the image pyramids are connected with feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarities. In the case of a digital film camera, authors of publications avoid this dangerous step and go straight to area-based matching, exploiting the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from a whole-image distance measure. This image distance method can work with more than just the two dimensions of a translation vector: scale and angles are also used to improve the image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other. The procedure searching for pairs of points then works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution, based on an image created by adding the differences between particular frames, gives rougher results but works much faster than standard matching.
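The area-based matching step hinges on a similarity score between patches; normalized cross-correlation is the usual choice. A sketch using OpenCV's template matcher to recover the coarse translation between two consecutive grayscale frames (the function name and margin are illustrative):

```python
import cv2

def coarse_translation(prev_frame, next_frame, margin=32):
    """Estimate whole-image translation: match a central patch of the
    previous frame inside the next frame by normalized cross-correlation."""
    h, w = prev_frame.shape
    patch = prev_frame[margin:h - margin, margin:w - margin]
    score = cv2.matchTemplate(next_frame, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(score)   # location of the best match
    return x - margin, y - margin            # shift of the scene between frames
```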
Stargazing at 'Husband Hill Observatory' on Mars
NASA Technical Reports Server (NTRS)
2005-01-01
NASA's Mars Exploration Rover Spirit continues to take advantage of extra solar energy by occasionally turning its cameras upward for night sky observations. Most recently, Spirit made a series of observations of bright star fields from the summit of 'Husband Hill' in Gusev Crater on Mars. Scientists use the images to assess the cameras' sensitivity and to search for evidence of nighttime clouds or haze. The image on the left is a computer simulation of the stars in the constellation Orion. The next three images are actual views of Orion captured with Spirit's panoramic camera during exposures of 10, 30, and 60 seconds. Because Spirit is in the southern hemisphere of Mars, Orion appears upside down compared to how it would appear to viewers in the Northern Hemisphere of Earth. 'Star trails' in the longer exposures are a result of the planet's rotation. The faintest stars visible in the 60-second exposure are about as bright as the faintest stars visible with the naked eye from Earth (about magnitude 6 in astronomical terms). The Orion Nebula, famous as a nursery of newly forming stars, is also visible in these images. Bright streaks in some parts of the images aren't stars or meteors or unidentified flying objects, but are caused by solar and galactic cosmic rays striking the camera's detector. Spirit acquired these images with the panoramic camera on Martian day, or sol, 632 (Oct. 13, 2005) at around 45 minutes past midnight local time, using the camera's broadband filter (wavelengths of 739 nanometers plus or minus 338 nanometers).
Behavior of Compact Toroid Injected into C-2U Confinement Vessel
NASA Astrophysics Data System (ADS)
Matsumoto, Tadafumi; Roche, T.; Allrey, I.; Sekiguchi, J.; Asai, T.; Conroy, M.; Gota, H.; Granstedt, E.; Hooper, C.; Kinley, J.; Valentine, T.; Waggoner, W.; Binderbauer, M.; Tajima, T.; the TAE Team
2016-10-01
The compact toroid (CT) injector system has been developed for particle refueling on the C-2U device. A CT is formed by a magnetized coaxial plasma gun (MCPG), and the typical ejected CT/plasmoid parameters are as follows: average velocity 100 km/s, average electron density 1.9 × 10¹⁵ cm⁻³, electron temperature 30-40 eV, mass 12 μg. To refuel particles into the FRC plasma, the CT must penetrate the transverse magnetic field that surrounds the FRC. The kinetic energy density of the CT should be higher than the magnetic energy density of the axial magnetic field, i.e., ρv²/2 ≥ B²/2μ₀, where ρ, v, and B are the mass density, velocity, and surrounding magnetic field, respectively. Also, the penetrating CT's trajectory is deflected by the transverse magnetic field (Bz ≈ 1 kG). Thus, we have to estimate the CT's energy and track the CT trajectory inside the magnetic field, for which we adopted a fast-framing camera on C-2U: the framing rate is up to 1.25 MHz for 120 frames. By employing the camera we clearly captured the CT/plasmoid trajectory. Comparisons between the fast-framing camera and other diagnostics, as well as CT injection results on C-2U, will be presented.
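The penetration criterion can be checked against the quoted CT parameters. A back-of-the-envelope sketch, assuming a hydrogen plasma (ion density equal to the quoted electron density) and B = 1 kG:

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
m_p = 1.67e-27                # proton mass, kg (hydrogen plasma assumed)

n = 1.9e15 * 1e6              # quoted electron density, converted to m^-3
v = 100e3                     # quoted CT velocity, m/s
B = 0.1                       # transverse field, T (1 kG)

rho = n * m_p                 # mass density, kg/m^3
kinetic = 0.5 * rho * v**2    # ~1.6e4 J/m^3
magnetic = B**2 / (2 * mu0)   # ~4.0e3 J/m^3
print(kinetic > magnetic)     # True: rho*v^2/2 >= B^2/(2*mu0) holds, ~4x margin
```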
Infrared Imaging Camera Final Report CRADA No. TC02061.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E. V.; Nebeker, S.
This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two-year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons. The IR camera was on the list that export controls covered. The ISTC and Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter wave imaging, infrared imaging and visible light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the most important data for use in autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system presented here consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulse laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbals, designed as a structure rotatable in both azimuth and elevation angles. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on diode-pumped solid-state lasers that are passively Q-switched at 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths less than 900 nm. The receiver is the image intensifier tube's microchannel plate coupled to a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer range in better weather. The image frame frequency can be changed according to the requirements of guidance, with a modifiable range gate. The instantaneous field of view of the system was found to be 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field. Two different kinds of buildings were chosen as targets, located at ranges from 200 m up to 1000 m. To simulate the dynamic change of range between missile and target, the seeker system was placed on a truck running along a road at the expected speed. The test results show qualified imagery and good performance of the seeker system.
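Range gating sets the intensifier delay to the round-trip light time for the chosen range, and the gate width to the desired depth of the imaged slab. A small sketch with hypothetical values:

```python
C = 299_792_458.0  # speed of light, m/s

def gate_timing(range_m, depth_m):
    """Shutter delay and width to image a slab from range_m to range_m+depth_m."""
    delay_s = 2.0 * range_m / C          # round-trip time to the near edge
    width_s = 2.0 * depth_m / C          # gate stays open across the slab depth
    return delay_s, width_s

# Hypothetical target at 1 km with a 30 m deep gate
delay, width = gate_timing(1000.0, 30.0)
print(f"delay = {delay * 1e6:.2f} us, width = {width * 1e9:.0f} ns")
# -> delay ~ 6.67 us, width ~ 200 ns
```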
Round-Horizon Version of Curiosity Low-Angle Selfie at Buckskin
2015-08-19
This version of a self-portrait of NASA's Curiosity Mars rover at a drilling site called "Buckskin" on lower Mount Sharp is presented as a stereographic projection, which shows the horizon as a circle. It is a mosaic assembled from the same set of 92 component raw images used for the flatter-horizon version at PIA19807. The component images were taken by Curiosity's Mars Hand Lens Imager (MAHLI) on Aug. 5, 2015, during the 1,065th Martian day, or sol, of the rover's work on Mars. Curiosity drilled the hole at Buckskin during Sol 1060 (July 30, 2015). Two patches of pale, powdered rock material pulled from inside Buckskin are visible in this scene, in front of the rover. The patch closer to the rover is where the sample-handling mechanism on Curiosity's robotic arm dumped collected material that did not pass through a sieve in the mechanism. Sieved sample material was delivered to laboratory instruments inside the rover. The patch farther in front of the rover, roughly triangular in shape, shows where fresh tailings spread downhill from the drilling process. The drilled hole, 0.63 inch (1.6 centimeters) in diameter, is at the upper point of the tailings. The rover is facing northeast, looking out over the plains from the crest of a 20-foot (6-meter) hill that it climbed to reach the "Marias Pass" area. The upper levels of Mount Sharp are visible behind the rover, while Gale Crater's northern rim dominates most of the rest of the horizon, on the left and right of the mosaic. MAHLI is mounted at the end of the rover's robotic arm. For this self-portrait, the rover team positioned the camera lower in relation to the rover body than for any previous full self-portrait of Curiosity. The assembled mosaic does not include the rover's arm beyond a portion of the upper arm held nearly vertical from the shoulder joint. Shadows from the rest of the arm and the turret of tools at the end of the arm are visible on the ground. With the wrist motions and turret rotations used in pointing the camera for the component images, the arm was positioned out of the shot in the frames or portions of frames used in this mosaic. MAHLI was built by Malin Space Science Systems, San Diego. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19806
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Fisher, Benjamin W.
2015-01-01
Many U.S. schools use visible security measures (security cameras, metal detectors, security personnel) in an effort to keep schools safe and promote adolescents' academic success. This study examined how different patterns of visible security utilization were associated with U.S. middle and high school students' academic performance, attendance,…
Full-Frame Reference for Test Photo of Moon
2005-09-10
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
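One plausible reading of the gradient-based error model (not necessarily the patented formulation) is that a pixel's change threshold grows with the local spatial gradient, so that small registration jitter on edges is not flagged as scene change:

```python
import numpy as np

def change_mask(reference, current, sigma_noise=2.0, jitter_px=0.5, k=3.0):
    """Flag pixels whose raw difference exceeds k times a per-pixel error
    estimate combining sensor noise and jitter scaled by the local gradient.
    The parameter values here are hypothetical."""
    diff = current.astype(float) - reference.astype(float)
    gy, gx = np.gradient(reference.astype(float))
    grad_mag = np.hypot(gx, gy)                       # spatial intensity gradient
    pixel_error = np.hypot(sigma_noise, jitter_px * grad_mag)
    return np.abs(diff) > k * pixel_error
```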
Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor
Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio
2011-01-01
This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
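The abstract reduces consistent matching to a maximum-weighted clique problem. As an illustration (not the authors' implementation), the sketch below builds a consistency graph over tentative 3D matches, with an edge wherever two matches preserve inter-point distance, and grows a clique greedily by descriptor weight; the tolerance is an assumed parameter.

```python
import numpy as np
from itertools import combinations

def consistent_matches(pts_a, pts_b, weights, tol=0.05):
    """Greedy maximum-weighted-clique approximation for match filtering.

    pts_a[i], pts_b[i] are 3D positions of the i-th tentative match in two
    frames; weights[i] is its descriptor-similarity score. Two matches are
    mutually consistent (a graph edge) when their inter-point distance is
    preserved between frames within a relative tolerance tol.
    """
    n = len(weights)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        da = np.linalg.norm(pts_a[i] - pts_a[j])
        db = np.linalg.norm(pts_b[i] - pts_b[j])
        adj[i, j] = adj[j, i] = abs(da - db) <= tol * max(da, db, 1e-9)
    clique = []
    for i in np.argsort(-np.asarray(weights)):    # heaviest vertices first
        if all(adj[i, j] for j in clique):
            clique.append(int(i))
    return clique

# Rigidly moved points plus one inconsistent (outlier) match
rng = np.random.default_rng(0)
pa = rng.uniform(0, 1, (8, 3))
pb = pa + np.array([0.3, 0.0, 0.1])               # pure translation
pb[5] += 0.5                                      # corrupt match 5
print(consistent_matches(pa, pb, weights=np.ones(8)))  # clique excludes 5
```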
Cassini "Noodle" Mosaic of Saturn
2017-07-24
This mosaic of images combines views captured by NASA's Cassini spacecraft as it made the first dive of the mission's Grand Finale on April 26, 2017. It shows a vast swath of Saturn's atmosphere, from the north polar vortex to the boundary of the hexagon-shaped jet stream, to details in bands and swirls at middle latitudes and beyond. The mosaic is a composite of 137 images captured as Cassini made its first dive toward the gap between Saturn and its rings. It is an update to a previously released image product. In the earlier version, the images were presented as individual movie frames, whereas here, they have been combined into a single, continuous mosaic. The mosaic is presented as a still image as well as a video that pans across its length. Imaging scientists referred to this long, narrow mosaic as a "noodle" in planning the image sequence. The first frame of the mosaic is centered on Saturn's north pole, and the last frame is centered on a region at 18 degrees north latitude. During the dive, the spacecraft's altitude above the clouds changed from 45,000 to 3,200 miles (72,400 to 5,150 kilometers), while the image scale changed from 5.4 miles (8.7 kilometers) per pixel to 0.6 mile (1 kilometer) per pixel. The bottom of the mosaic (near the end of the movie) has a curved shape. This is where the spacecraft rotated to point its high-gain antenna in the direction of motion as a protective measure before crossing Saturn's ring plane. The images in this sequence were captured in visible light using the Cassini spacecraft's wide-angle camera. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The small image size was chosen in order to allow the camera to take images quickly as Cassini sped over Saturn. These images of the planet's curved surface were projected onto a flat plane before being combined into a mosaic. Each image was mapped in stereographic projection centered at 55 degrees north latitude. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21617
Experiments on helical modes in magnetized thin foil-plasmas
NASA Astrophysics Data System (ADS)
Yager-Elorriaga, David
2017-10-01
This paper gives an in-depth experimental study of helical features on magnetized, ultrathin foil-plasmas driven by the 1-MA linear transformer driver at the University of Michigan. Three types of cylindrical liner loads were designed to produce: (a) pure magneto-hydrodynamic (MHD) modes (defined as being void of the acceleration-driven magneto-Rayleigh-Taylor instability, MRT) using a non-imploding geometry, (b) pure kink modes using a non-imploding, kink-seeded geometry, and (c) coupled MRT-MHD modes in an unseeded, imploding geometry. For each configuration, we applied relatively small axial magnetic fields of Bz = 0.2-2.0 T (compared to peak azimuthal fields of 30-40 T). The resulting liner-plasmas and instabilities were imaged using 12-frame laser shadowgraphy and visible self-emission on a fast framing camera. The azimuthal mode number was carefully identified with a tracking algorithm of self-emission minima. Our experiments show that the helical structures are a manifestation of discrete eigenmodes. The pitch angle of the helix is simply m/(kR), from implosion to explosion, where m, k, and R are the azimuthal mode number, axial wavenumber, and radius of the helical instability. Thus, the pitch angle increases (decreases) during implosion (explosion) as R becomes smaller (larger). We found that there are one, or at most two, discrete helical modes that arise for magnetized liners, with no apparent threshold on the applied Bz for the appearance of helical modes; increasing the axial magnetic field from zero to 0.5 T changes the relative weight between the m = 0 and m = 1 modes. Further increasing the applied axial magnetic field yields higher-m modes. Finally, the seeded kink instability overwhelms the intrinsic instability modes of the plasma. These results are corroborated by our analytic theory on the effects of radial acceleration on the classical sausage, kink, and higher-m modes. Work supported by US DOE award DE-SC0012328, Sandia National Laboratories, and the National Science Foundation. D.Y.E. was supported by NSF fellowship Grant Number DGE 1256260. The fast framing camera was supported by a DURIP, AFOSR Grant FA9550-15-1-0419.
Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education
ERIC Educational Resources Information Center
Vollmer, Michael; Möllmann, Klaus-Peter
2018-01-01
Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed-cameras has extended the capabilities of the tool for much smaller time scales to the 1 ms range, using frame rates of typically up to 1000 frames s[superscript -1], allowing us to study transient physics phenomena happening…
NASA Astrophysics Data System (ADS)
Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.
2012-10-01
Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was possible thanks to the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results during daytime for smoke detection, they fall short under bad visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands are tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating the fire through smoke if there is a direct line of sight. Emphasis is also put on physical and electro-optical system modeling for forest fire detection at short and longer ranges. Fusion of the three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and fire detection.
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while low-dimensional-material nanotechnology based on the unusual carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for a fundamental understanding of the processes induced by the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In this research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing layer blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precision linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms in light-field sampling, 3D imaging, and video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3D camera, and temporal information in video streams, is extracted and expressed in a compressive framework. Computational algorithms are then applied to reconstruct images beyond 2D static information. Super-resolution signal processing was then used to enhance and improve the spatial resolution of the images. The whole camera system provides deeply detailed content for infrared spectrum sensing.
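The dissertation's compressive pipeline is not reproduced here, but the recovery step common to such systems can be illustrated with a generic iterative shrinkage-thresholding (ISTA) solver for y = Ax with sparse x; the sensing matrix, sparsity level, and step parameters below are assumptions for a toy example.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=200):
    """Recover a sparse signal x from compressive measurements y = A @ x.

    Iterative shrinkage-thresholding (ISTA): a gradient step on the data
    term followed by soft-thresholding, the proximal map of the l1 penalty.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 80, 5                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small residual
```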
Study of atmospheric discharge characteristics using a standard video camera
NASA Astrophysics Data System (ADS)
Ferraz, E. C.; Saba, M. M. F.
This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuous current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames s-1). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames s-1). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line, and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel's optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
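The framing interval follows directly from the path-length increments: with successive channels longer by a constant ΔL, the inter-frame delay is ΔL/c. A back-of-envelope check, with an assumed (not documented) increment, that six frames fit within 9 × 10⁻⁸ seconds:

```python
# Back-of-envelope timing for a 6-channel optical delay relay.
# Assumes successive paths differ by a constant increment dL (illustrative value).
c = 2.998e8                      # speed of light in vacuum, m/s
dL = 4.0                         # path-length increment per channel, m (assumed)
delays = [i * dL / c for i in range(6)]
print([f"{t * 1e9:.1f} ns" for t in delays])   # 0.0 .. 66.7 ns
print(delays[-1] < 9e-8)                       # all 6 frames within 9e-8 s: True
```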
NASA Astrophysics Data System (ADS)
Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell
2013-06-01
We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.
Frames of Reference in the Classroom
NASA Astrophysics Data System (ADS)
Grossman, Joshua
2012-12-01
The classic film "Frames of Reference"1,2 effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating—all with respect to the Earth frame. The film is a classic for good reason, but today it does have a couple of drawbacks: 1) The film by nature only accommodates passive learning; it does not give students the opportunity to try any of the experiments themselves. 2) The dated style of the 50-year-old film can distract students from the physics content. I present here a simple setup that can recreate many of the movie's demonstrations in the classroom. The demonstrations can be used to supplement the movie or in its place, if desired. All of the materials except perhaps the inexpensive web camera should likely be available already in most teaching laboratories. Unlike previously described activities, these experiments do not require travel to another location3 or an involved setup.4,5
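For instructors who want to process the web-camera footage itself, re-expressing lab-frame coordinates in a rotating frame is a single rotation by -ωt; the sketch below (with illustrative values, not from the article) shows a straight-line trajectory acquiring the familiar curved appearance.

```python
import numpy as np

def to_rotating_frame(xy, t, omega):
    """Map lab-frame positions xy (N x 2) at times t into a frame rotating
    at angular rate omega about the origin (rotation by -omega*t)."""
    ang = -omega * np.asarray(t)
    c, s = np.cos(ang), np.sin(ang)
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([c * x - s * y, s * x + c * y])

# A ball moving in a straight line appears to curve in the rotating frame.
t = np.linspace(0, 2, 50)
xy_lab = np.column_stack([0.5 * t, np.full_like(t, 0.2)])   # uniform motion
print(to_rotating_frame(xy_lab, t, omega=np.pi)[:3])
```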
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical, synchronized, industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. The paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras, and the data are later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
Camera Trajectory fromWide Baseline Images
NASA Astrophysics Data System (ADS)
Havlena, M.; Torii, A.; Pajdla, T.
2008-09-01
Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short-baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of consecutive projections can be made for wide-baseline images. This makes feature detection, description, and matching much more time-consuming than for short-baseline images, and limits the usage to low-frame-rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in the literature, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches, so the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by the ordered sampling of RANSAC. With our technique, we can handle up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC requires at 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, as in the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
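For reference, the quoted two-parameter model maps image radius r to ray angle θ as θ = ar/(1 + br²); a small sketch that lifts image points onto unit 3D rays under made-up calibration values (the paper's actual coefficients are not given here):

```python
import numpy as np

def radius_to_angle(r, a, b):
    """Mičušík two-parameter fish-eye model: theta = a*r / (1 + b*r**2)."""
    return a * r / (1.0 + b * r**2)

def pixel_to_ray(u, v, cx, cy, a, b):
    """Lift an image point to a unit 3D ray in the camera frame."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)
    theta = radius_to_angle(r, a, b)        # angle from the optical axis
    phi = np.arctan2(dv, du)                # azimuth in the image plane
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Illustrative calibration (not from the paper): ~180 deg view mapped onto a
# circular image of diameter 1600 px centred at (cx, cy).
a, b, cx, cy = 0.002, 4.0e-8, 800.0, 800.0
print(pixel_to_ray(1550.0, 800.0, cx, cy, a, b))   # ray near the image rim
```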
JunoCam: Science and Outreach Opportunities with Juno
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Orton, G. S.
2015-12-01
JunoCam is a visible imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is outreach, science objectives will be addressed too. JunoCam is a wide-angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloud-top heights. Resolution exceeds that of Cassini about an hour from closest approach, and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft. The use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant but some images will be acquired. The small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is "science in a fish bowl", with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we'll predict when hot spots will be visible to JunoCam. Amateur image processing enthusiasts are prepared to create image products. Between the planning and products will be the decision-making on what images to take, when, and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.
KA-102 Film/EO Standoff System
NASA Astrophysics Data System (ADS)
Turpin, Richard T.
1984-12-01
The KA-102 is an in-flight selectable film or electro-optic (EO) visible reconnaissance camera with a real-time data link. The lens is a 66-in., f/4 refractor with a 4° field of view. The focal plane is a continuous line array of 10,240 CCD elements that operates in the pushbroom mode. In the film mode, the camera uses standard 5-in.-wide 3414 or 3412 film. The EO imagery is transmitted up to 500 n.mi. to the ground station over a 75-Mbit/sec X-band data link via a relay aircraft (see Figure 1). The camera may be controlled from the ground station via an uplink or from the cockpit control panel. The 8-ft-diameter ground tracking antenna is located on high ground and linked to the ground station via a 1-mile-long, two-way fiber optic system. In the ground station the imagery is calibrated and displayed in real time on three CRTs. Selected imagery may be stored on disk and enhanced, analyzed, and annotated in near real time. The imagery may be enhanced and magnified in real time. Hardcopy frames may be made on 8 x 10-in. Polaroid, 35-mm film, or dry silver paper. All the received image and engineering data are recorded on a high-density tape recorder. The aircraft track is recorded on a map plotter. Ground support equipment (GSE), manuals, spares, and training are included in the system. Falcon 20 aircraft were modified on a subcontract to Dynelectron, Ft. Worth.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering on spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
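The standard spectral matched filter underlying such detectors (the paper layers spatial filters on top, which is not reproduced here) whitens each pixel by the background covariance and correlates it with the known signature; an illustrative version with a synthetic target implant:

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Apply a spectral matched filter to a hyperspectral cube.

    cube: (rows, cols, bands) data; target: (bands,) known signature.
    Background mean/covariance are estimated from the whole scene, and each
    pixel is scored by its whitened correlation with the target signature,
    normalized so background scores have roughly unit variance.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(b)   # regularized
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    scores = (X - mu) @ cov_inv @ s / np.sqrt(s @ cov_inv @ s)
    return scores.reshape(h, w)

# Toy scene: background noise plus one implanted target pixel
rng = np.random.default_rng(2)
bands = 32
sig = np.linspace(0.2, 1.0, bands)                 # assumed target signature
scene = rng.multivariate_normal(np.zeros(bands), 0.05 * np.eye(bands), (64, 64))
scene[40, 17] += sig
scores = spectral_matched_filter(scene, sig)
print(np.unravel_index(scores.argmax(), scores.shape))   # (40, 17)
```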
High speed imaging - An important industrial tool
NASA Technical Reports Server (NTRS)
Moore, Alton; Pinelli, Thomas E.
1986-01-01
High-speed photography, which is a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16-, 35-, and 70-mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.
Dynamic characteristics of far-field radiation of current modulated phase-locked diode laser arrays
NASA Technical Reports Server (NTRS)
Elliott, R. A.; Hartnett, K.
1987-01-01
A versatile and powerful streak camera/frame grabber system for studying the evolution of the near- and far-field radiation patterns of diode lasers was assembled and tested. Software needed to analyze and display the data acquired with the streak camera/frame grabber system was written, and the total package was used to record and perform preliminary analyses on the behavior of two types of laser: a ten-emitter gain-guided array and a flared-waveguide Y-coupled array. Examples of the information which can be gathered with this system are presented.
One-click scanning of large-size documents using mobile phone camera
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Yang, Yuanjie
2016-07-01
Currently, mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of the document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU-based image stitching method is adopted to generate a complete document image with high detail. There is no extra manual intervention in the process, and experimental results show that our app performs well, demonstrating convenience and practicability for daily life.
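One plausible reading of the key-frame step (the abstract gives no pseudocode): track sparse Lucas-Kanade flow from the last key frame and emit a new key frame once the median displacement exceeds a fraction of the frame width, keeping overlap for stitching. The OpenCV-based sketch below is an assumption-laden reconstruction, not the authors' code.

```python
import cv2
import numpy as np

def extract_key_frames(video_path, shift_frac=0.25):
    """Select key frames from a document-scanning video by optical flow.

    A new key frame is emitted once the median Lucas-Kanade track
    displacement since the previous key frame exceeds shift_frac of the
    frame width, leaving enough overlap for later stitching.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keys = [frame]
    pts = cv2.goodFeaturesToTrack(prev, 400, 0.01, 8)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Flow is always measured against the most recent key frame
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good = st.ravel() == 1
        if good.sum() < 20:            # tracking lost: force a key frame
            shift = np.inf
        else:
            shift = np.median(np.linalg.norm((nxt - pts)[good], axis=2))
        if shift > shift_frac * gray.shape[1]:
            keys.append(frame)
            prev = gray
            pts = cv2.goodFeaturesToTrack(prev, 400, 0.01, 8)
    cap.release()
    return keys
```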
Mapping Vesta Equatorial Quadrangle V-8EDL: Various Craters and Giant Grooves
NASA Astrophysics Data System (ADS)
Le Corre, L.; Nathues, A.; Reddy, V.; Buczkowski, D.; Denevi, B. W.; Gaffey, M.; Williams, D. A.; Garry, W. B.; Yingst, R.; Jaumann, R.; Pieters, C. M.; Russell, C. T.; Raymond, C. A.
2011-12-01
NASA's Dawn spacecraft arrived at the asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC; spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR; spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-8EDL, located between -22 and 22 degrees latitude and 144 and 216 degrees east longitude. This quadrangle is dominated by old craters (without any ejecta visible in the clear and color bands), but one small recent crater can be seen with a bright ejecta blanket and rays. The latter has some small, dark units outside and inside the crater rim that could be indicative of impact melt. This quadrangle also contains a set of giant linear grooves running almost parallel to the equator that might have formed subsequent to a large impact. We will use FC mosaics with clear images and false color composites as well as VIR spectroscopy data in order to constrain the geology and identify the nature of each unit present in this quadrangle.
Investigating the Origin of Bright Materials on Vesta: Synthesis, Conclusions, and Implications
NASA Technical Reports Server (NTRS)
Li, Jian-Yang; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Schroder, S. E.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.
2012-01-01
The Dawn spacecraft started orbiting the second largest asteroid (4) Vesta in August 2011, revealing the details of its surface at an unprecedented pixel scale as small as approx. 70 m in Framing Camera (FC) clear and color filter images and approx. 180 m in the Visible and Infrared Spectrometer (VIR) data in its first two science orbits, the Survey Orbit and the High Altitude Mapping Orbit (HAMO) [1]. The surface of Vesta displays the greatest diversity in terms of geology and mineralogy of all asteroids studied in detail [2, 3]. While the albedo of Vesta of approx. 0.38 in the visible wavelengths [4, 5] is one of the highest among all asteroids, the surface of Vesta shows the largest variation of albedos found on a single asteroid, with geometric albedos ranging at least from approx. 0.10 to approx. 0.67 in HAMO images [5]. There are many distinctively bright and dark areas observed on Vesta, associated with various geological features and showing remarkably different forms. Here we report our initial attempt to understand the origin of the areas that are distinctively brighter than their surroundings. The dark materials on Vesta clearly are different in origin from bright materials and are reported in a companion paper [6].
The Visible Imaging System (VIS) for the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.
1995-01-01
The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions. These resolutions are about 10 and 20 km from a spacecraft altitude of 8 R(sub e). The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
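Digital binning as described, i.e. summing adjacent rows in post-processing before locating the tagged line, reduces to a reshape-and-sum, and the quoted precision is just the standard deviation over single-shot estimates. A minimal sketch with synthetic frames and an assumed pixel scale (the 65-microsecond delay matches the paper; everything else is illustrative):

```python
import numpy as np

def bin_rows(img, n=8):
    """Row-wise digital binning: sum groups of n adjacent rows (post hoc,
    unlike on-sensor binning, so it can be applied to archived frames)."""
    h, w = img.shape
    return img[: h - h % n].reshape(h // n, n, w).sum(axis=1)

def shot_velocity(frame1, frame2, dt, px_scale):
    """One single-shot FLEET velocity: displacement of the tagged-line
    centroid between two exposures divided by the inter-frame delay."""
    cols = np.arange(frame1.shape[1])
    c1 = np.average(cols, weights=frame1.sum(axis=0))
    c2 = np.average(cols, weights=frame2.sum(axis=0))
    return (c2 - c1) * px_scale / dt

# Precision = std over several hundred single-shot measurements, here faked
# with synthetic noisy frames; dt = 65 us as in the paper, 50 um/px assumed.
rng = np.random.default_rng(3)
dt, px = 65e-6, 50e-6
x = np.arange(128)
v = []
for _ in range(300):
    f1 = np.exp(-(x - 60.0) ** 2 / 8) + rng.normal(0, 0.05, (64, 128))
    f2 = np.exp(-(x - 73.0) ** 2 / 8) + rng.normal(0, 0.05, (64, 128))
    v.append(shot_velocity(bin_rows(f1), bin_rows(f2), dt, px))
print(np.mean(v), np.std(v))       # mean ~10 m/s; std is the precision
```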
Research on inosculation between master of ceremonies or players and virtual scene in virtual studio
NASA Astrophysics Data System (ADS)
Li, Zili; Zhu, Guangxi; Zhu, Yaoting
2003-04-01
A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and telemeter are used to improve a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship, and image object masking. Experimental results show that the technological scheme for constructing a virtual studio submitted in this paper is feasible, and is more applicable and effective than the existing technique of establishing a virtual studio based on color keying and image synthesis with background using non-linear video editing.
STS-43 TDRS-E during preflight processing at KSC's VPF
NASA Technical Reports Server (NTRS)
1991-01-01
STS-43 Tracking and Data Relay Satellite E (TDRS-E) undergoes preflight processing in the Kennedy Space Center's (KSC's) Vertical Processing Facility (VPF) before being loaded into a payload canister for transfer to the launch pad and eventually into Atlantis', Orbiter Vehicle (OV) 104's, payload bay (PLB). This side of the TDRS-E will rest at the bottom of the PLB; therefore, the airborne support equipment (ASE) forward frame keel pin (at center of spacecraft) and the umbilical boom running between the two ASE frames are visible. The solar array panels are covered with protective TRW shields. Above the shields, the stowed antenna and solar sail are visible. The inertial upper stage (IUS) booster is the white portion of the spacecraft and rests in the ASE forward frame and the ASE aft frame tilt actuator (AFTA) frame (at the bottom of the IUS). The IUS booster nozzle extends beyond the AFTA frame. View provided by KSC with alternate number KSC-91PC-1079.
Curiosity Observes Whirlwinds Carrying Martian Dust
2017-02-27
Dust devils dance in the distance in this frame from a sequence of images taken by the Navigation Camera on NASA's Curiosity Mars rover on Feb. 12, 2017, during the summer afternoon of the rover's 1,607th Martian day, or sol. Within a broader context view, the rectangular area outlined in black was imaged multiple times over a span of several minutes to check for dust devils. Images from the period with most activity are shown in the inset area. The images are in pairs that were taken about 12 seconds apart, with an interval of about 90 seconds between pairs. Timing is accelerated and not fully proportional in this animation. One dust devil appears at the right edge of the inset -- toward the south from the rover -- in the first few frames. Another appears on the left -- toward south-southeast -- later in the sequence. Contrast has been modified to make frame-to-frame changes easier to see. A black frame is added between repeats of the sequence. Portions of Curiosity are visible in the foreground. The cylindrical UHF (ultra-high frequency) antenna on the left is used for sending data to Mars orbiters, which relay the data to Earth. The angled planes to the right of this antenna are fins of the rover's radioisotope thermoelectric generator, which provides the vehicle's power. The post with a knob on top at right is a low-gain, non-directional antenna that can be used for receiving transmissions from Earth, as backup to the main high-gain antenna (not shown here) used for that purpose. On Mars as on Earth, dust devils are whirlwinds that result from sunshine warming the ground, prompting convective rising of air that has gained heat from the ground. Observations of Martian dust devils provide information about wind directions and interaction between the surface and the atmosphere. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21482
Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A
2016-11-01
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and the filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and particle confinement time τ_P.
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within bounded limits. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is too long, it is not possible to intercept the table tennis ball using image processing alone; therefore, a projectile motion model is employed to predict the final destination of the ball.
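The projectile step can be illustrated as follows: given two 3D ball fixes from the vision system (after the coordinate transformation), estimate the launch velocity and solve for where the ball crosses an interception plane. Gravity-only flight is assumed; drag and spin, which matter in real table tennis, are ignored in this sketch, and all numbers are made up.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])    # gravity in the general coordinate frame

def predict_interception(p0, p1, dt, x_plane):
    """Predict where the ball crosses the vertical plane x = x_plane.

    p0, p1: successive 3D ball positions (m) from vision, dt seconds apart.
    Ballistic model only: constant gravity, no drag or Magnus force.
    Returns (crossing position, time measured from the p0 fix).
    """
    v0 = (p1 - p0) / dt - 0.5 * G * dt   # velocity at p0, gravity-corrected
    t = (x_plane - p0[0]) / v0[0]        # x-velocity is unaffected by gravity
    return p0 + v0 * t + 0.5 * G * t**2, t

# Two stereo fixes 1/30 s apart (made-up numbers): place the paddle here
p0 = np.array([0.0, 0.2, 0.30])
p1 = np.array([0.1, 0.2, 0.31])
hit, t = predict_interception(p0, p1, dt=1 / 30, x_plane=0.5)
print(hit, t)
```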
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Predicting the future locations of other cars is a must for advanced safety systems. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems allow the 3D information of a scene to be recovered. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading-angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation for each camera at frame level.
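The geometry behind marker-based pose ground truth is a standard Perspective-n-Point problem: given the known 3D marker layout on the car and the detected pixel positions in a frame, OpenCV's solvePnP recovers the camera-to-car pose, from which a heading angle can be read off. The marker layout, pixel coordinates, and intrinsics below are placeholders, not the paper's calibration.

```python
import cv2
import numpy as np

# Known 3D marker positions in the car's own frame (metres; placeholder layout,
# coplanar so solvePnP's default iterative solver works with 4 points)
obj_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                    [1.2, 0.8, 0.0], [0.0, 0.8, 0.0]], dtype=np.float64)
# Detected pixel coordinates of those markers in one frame (placeholder values)
img_pts = np.array([[410.0, 300.0], [520.0, 305.0],
                    [515.0, 380.0], [405.0, 372.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],       # intrinsics from camera calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)               # rotation: car frame -> camera frame
# One conventional heading readout; the usable definition depends on the
# camera mounting and axis conventions of the actual rig.
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(ok, yaw, tvec.ravel())
```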
STS-44 Defense Support Program (DSP) / IUS during preflight operations
NASA Technical Reports Server (NTRS)
1991-01-01
STS-44 Defense Support Program (DSP) satellite atop the inertial upper stage (IUS) is prepared for transfer in a processing facility at Cape Canaveral Air Force Station. Clean-suited technicians overseeing the operation are dwarfed by the size of the 5,200-pound DSP satellite and the IUS. The underside of the IUS (bottom), mounted in the airborne support equipment (ASE) aft frame tilt actuator (AFTA) table and ASE forward frame, is visible at the base. The umbilical boom between the two ASE frames and the forward frame keel trunnion are visible. DSP, a surveillance satellite that can detect missile and space launches as well as nuclear detonations, will be boosted into geosynchronous Earth orbit by the IUS. View provided by KSC with alternate number KSC-91PC-1749.
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
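The inter-camera combination analyzed here is the relative rotation q_rel = q_A* ⊗ q_B, which would be constant for rigid, error-free heads, so its fluctuation (e.g. its auto-covariance) isolates measurement noise. A small sketch of that construction on synthetic data (not the GRACE Level-1B processing code):

```python
import numpy as np

def q_mul(p, q):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def q_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def inter_camera(qa, qb):
    """Relative attitude between two star-camera heads; ideally constant,
    so its time series exposes the combined measurement noise."""
    return q_mul(q_conj(qa), qb)

def autocov(x, max_lag):
    """Auto-covariance of a series; periodic errors (e.g. a twice-per-rev
    term) would appear as oscillations versus lag."""
    x = x - x.mean()
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) - k)
                     for k in range(max_lag)])

# Toy series: identity mounting plus ~2-arcsec noise about one axis
rng = np.random.default_rng(4)
angles = 1e-5 * rng.normal(size=1000)           # small angles, rad
qb = np.array([[np.cos(a / 2), np.sin(a / 2), 0, 0] for a in angles])
qa = np.tile([1.0, 0.0, 0.0, 0.0], (1000, 1))
rel = np.array([inter_camera(a, b) for a, b in zip(qa, qb)])
print(autocov(rel[:, 1], 5))                    # noise floor, no periodic term
```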
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
The promise of infrared (IR) imaging attaining low cost, akin to the success of CMOS sensors, has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: an uncooled 640x512-pixel InGaAs system with high sensitivity, low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense, ultra-small-pixel-pitch devices.
The Atlases of Vesta derived from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.
2013-12-01
The Dawn Framing Camera acquired during its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012 about 6,000 clear filter images with a resolution of about 60 m/pixel. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing in the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was only available for 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at a scale between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta, and is compliant with the rules of the IAU. 65 names for geological features were already approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024i
NASA Astrophysics Data System (ADS)
Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.
2018-02-01
Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in the primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit. The right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurements of the feet obtained using the low-cost camera against the gold-standard, high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in temperature measurements between the cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
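The per-subject camera comparison is a paired design, so the stated result (mean difference 0.4, p = 0.2) corresponds to a paired t-test over the twelve per-subject differences; the sketch below uses hypothetical temperatures chosen only to produce a similarly non-significant outcome.

```python
import numpy as np
from scipy import stats

# Hypothetical mean foot temperatures (deg C) per subject from each camera;
# twelve subjects measured with both systems. Values are invented merely to
# yield a non-significant paired difference like the one reported.
diff = np.array([1.5, -0.6, 1.2, -0.8, 0.9, 0.6,
                 -0.4, 1.3, -0.2, 0.8, -0.5, 1.0])   # Lepton minus SC305
t_sc305 = np.array([30.1, 29.4, 31.0, 28.8, 30.5, 29.9,
                    31.2, 28.5, 30.0, 29.7, 30.8, 29.2])
t_lepton = t_sc305 + diff

tstat, p = stats.ttest_rel(t_lepton, t_sc305)        # paired t-test
print(f"mean difference = {diff.mean():.2f} C, p = {p:.3f}")
```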
Object recognition through turbulence with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher
2015-03-01
Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device's resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or the use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.
Feng, Yongqiang; Max, Ludo
2014-01-01
Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
Marker-less multi-frame motion tracking and compensation in PET-brain imaging
NASA Astrophysics Data System (ADS)
Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.
2015-03-01
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked and then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
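The core of marker-less rigid tracking from a depth camera is estimating a 6-DOF transform between point sets. The Kabsch closed form below does this for corresponding points; a real pipeline must first establish correspondence (e.g. via ICP), which is omitted here, and the synthetic head motion is illustrative.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t (Kabsch).

    P, Q: (N, 3) corresponding 3D points, e.g. head-surface points from a
    depth camera at two instants. Yields the 6-DOF motion for correction.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Verify on synthetic head motion: 5 deg yaw plus a small translation
rng = np.random.default_rng(5)
P = rng.uniform(-0.1, 0.1, (200, 3))            # head-sized point cloud, m
a = np.radians(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.002, -0.001, 0.004])
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), t_est)        # True, recovered shift
```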
NASA Astrophysics Data System (ADS)
Zhang, Bing; Li, Kunyang
2018-02-01
The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
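The "lens" and "wide-field" behavior follows from relativistic aberration: a source at rest-frame angle θ from the motion axis appears at θ' with cos θ' = (cos θ + β)/(1 + β cos θ), crowding the sky into the forward cone, while Doppler boosting rescales the spectrum. A quick numeric check at the Starshot design speed β = 0.2:

```python
import numpy as np

def aberrated_angle(theta, beta):
    """Apparent angle from the motion axis in the camera frame, for a source
    at rest-frame angle theta and camera speed beta = v/c:
    cos(theta') = (cos(theta) + beta) / (1 + beta*cos(theta))."""
    return np.arccos((np.cos(theta) + beta) / (1.0 + beta * np.cos(theta)))

def doppler_factor(theta_cam, beta):
    """Doppler factor at camera-frame angle theta_cam; the observed
    wavelength is the rest wavelength divided by this factor."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta_cam)))

beta = 0.2                       # Breakthrough Starshot design speed, v/c
for deg in (30.0, 60.0, 90.0):
    th_cam = aberrated_angle(np.radians(deg), beta)
    print(f"{deg:5.1f} deg -> {np.degrees(th_cam):5.1f} deg, "
          f"Doppler factor {doppler_factor(th_cam, beta):.3f}")
```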
Development of a 3-D visible limiter imaging system for the HSX stellarator
NASA Astrophysics Data System (ADS)
Buelo, C.; Stephey, L.; Anderson, F. S. B.; Eisert, D.; Anderson, D. T.
2017-12-01
A visible camera diagnostic has been developed to study the Helically Symmetric eXperiment (HSX) limiter plasma interaction. A straight-line view from the camera location to the limiter was not possible due to the complex 3D stellarator geometry of HSX, so it was necessary to insert a mirror/lens system into the plasma edge. A custom support structure for this optical system, tailored to the HSX geometry, was designed and installed. This structure holds the optics tube assembly at the angle required for the desired view, both minimizing system stress and facilitating robust, repeatable camera positioning. The camera system has been absolutely calibrated, and with Hα and C-III filters it can provide hydrogen and carbon photon fluxes, which can be converted into particle fluxes through an S/XB coefficient. The resulting measurements have been used to obtain the characteristic penetration length of hydrogen and C-III species. The hydrogen λiz value shows reasonable agreement with the value predicted by a 1D penetration length calculation.
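In its simplest ionizations-per-photon form, the photon-flux-to-particle-flux conversion mentioned here is Γ = 4π × (S/XB) × brightness; the sketch below just illustrates that bookkeeping. The numeric S/XB value is a placeholder, not one from the paper.

```python
import math

def particle_flux(brightness, s_xb):
    """Convert an absolutely calibrated line brightness
    (photons m^-2 s^-1 sr^-1) into a particle influx
    (atoms m^-2 s^-1) via the ionizations-per-photon S/XB."""
    return 4.0 * math.pi * s_xb * brightness

# e.g. an H-alpha brightness of 1e18 with an assumed S/XB ~ 15 (placeholder):
flux_H = particle_flux(1.0e18, 15.0)
```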
[Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].
Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei
2016-02-01
We previously reported that an infrared camera enables observation of iris morphology in Peters' anomaly through edematous corneas. The aim here was to observe iris morphology in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women, mean age ± SD: 72.7 ± 13.0 years) were enrolled in this study. The iris morphology was observed using the visible light mode and near-infrared light mode of an infrared camera (MeibomPen). The detectability of pupil shape, iris pattern, and presence of iridectomy was evaluated. Infrared mode observation enabled us to detect the pupil shape in 11 of 11 cases, the iris pattern in 3 of 11 cases, and the presence of iridectomy in 9 of 11 cases, whereas visible light mode observation could not detect any iris morphological changes. Applying infrared optics was valuable for observing iris morphology through stromal edematous corneas.
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.
2017-01-01
Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked potential for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.
Curiosity Low-Angle Self-Portrait at Buckskin Drilling Site on Mount Sharp
2015-08-19
This low-angle self-portrait of NASA's Curiosity Mars rover shows the vehicle above the "Buckskin" rock target, where the mission collected its seventh drilled sample. The site is in the "Marias Pass" area of lower Mount Sharp. The scene combines dozens of images taken by Curiosity's Mars Hand Lens Imager (MAHLI) on Aug. 5, 2015, during the 1,065th Martian day, or sol, of the rover's work on Mars. The 92 component images are among MAHLI Sol 1065 raw images at http://mars.nasa.gov/msl/multimedia/raw/?s=1065&camera=MAHLI. For scale, the rover's wheels are 20 inches (50 centimeters) in diameter and about 16 inches (40 centimeters) wide. Curiosity drilled the hole at Buckskin during Sol 1060 (July 30, 2015). Two patches of pale, powdered rock material pulled from Buckskin are visible in this scene, in front of the rover. The patch closer to the rover is where the sample-handling mechanism on Curiosity's robotic arm dumped collected material that did not pass through a sieve in the mechanism. Sieved sample material was delivered to laboratory instruments inside the rover. The patch farther in front of the rover, roughly triangular in shape, shows where fresh tailings spread downhill from the drilling process. The drilled hole, 0.63 inch (1.6 centimeters) in diameter, is at the upper point of the tailings. The rover is facing northeast, looking out over the plains from the crest of a 20-foot (6-meter) hill that it climbed to reach the Marias Pass area. The upper levels of Mount Sharp are visible behind the rover, while Gale Crater's northern rim dominates the horizon on the left and right of the mosaic. A portion of this selfie cropped tighter around the rover is at PIA19808. Another version of the wide view, presented in a projection that shows the horizon as a circle, is at PIA19806. MAHLI is mounted at the end of the rover's robotic arm. For this self-portrait, the rover team positioned the camera lower in relation to the rover body than for any previous full self-portrait of Curiosity. This yielded a view that includes the rover's "belly," as in a partial self-portrait (PIA16137) taken about five weeks after Curiosity's August 2012 landing inside Mars' Gale Crater. Before sending Curiosity the arm-positioning commands for this Buckskin belly panorama, the team previewed the low-angle sequence of camera pointings on a test rover in California. A mosaic from that test is at PIA19810. This selfie at Buckskin does not include the rover's robotic arm beyond a portion of the upper arm held nearly vertical from the shoulder joint. Shadows from the rest of the arm and the turret of tools at the end of the arm are visible on the ground. With the wrist motions and turret rotations used in pointing the camera for the component images, the arm was positioned out of the shot in the frames or portions of frames used in this mosaic. This process was used previously in acquiring and assembling Curiosity self-portraits taken at sample-collection sites "Rocknest" (PIA16468), "John Klein" (PIA16937), "Windjana" (PIA18390) and "Mojave" (PIA19142). MAHLI was built by Malin Space Science Systems, San Diego. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19807
Fifty Years of Mars Imaging: from Mariner 4 to HiRISE
2017-11-20
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half-century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), which also provided a visible camera producing the image we see here at 17 meters per pixel. In 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
... collected these datasets using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution cinema-grade cameras; the Blackmagic Production Camera, a cinema-grade, high-quality camera, captured 4K video at 30 frames per second for crowd counting.
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
Ultraviolet-visible-infrared mapping digital array scanned interferometers for lunar compositional surveys were developed. The research has defined a no-moving-parts, low-weight, low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides new and important ultraviolet spectral mapping, high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, along with spectral results in support of the instrument design.
A framed, 16-image Kirkpatrick–Baez x-ray microscope
Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...
2017-09-08
A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. These core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.
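The interleaving described here (four frames per strip at ~60-ps spacing, strips staggered to give ~15-ps effective spacing) can be made concrete with a little arithmetic; the sketch below simply generates that assumed schedule and is not taken from the paper.

```python
# Assumed timing model of the 16-frame schedule described above:
# 4 strips, 4 frames per strip, 60 ps between frames on a strip,
# strips offset by 15 ps so the merged sequence steps every 15 ps.
frame_times = sorted(15.0 * strip + 60.0 * frame
                     for strip in range(4)
                     for frame in range(4))
print(frame_times)   # 0, 15, 30, ..., 225 ps -- uniform 15-ps spacing
```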
Center of parcel with picture tube wall along walkway. Leaning ...
Center of parcel with picture tube wall along walkway. Leaning Tower of Bottle Village at frame right; oblique view of Rumpus Room, remnants of Little Hut destroyed by Northridge earthquake at frame left. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
2003-11-06
KENNEDY SPACE CENTER, FLA. - The camera installed on the aft skirt of a solid rocket booster is seen here, framed by the railing. The installation is in preparation for a vibration test of the Mobile Launcher Platform with SRBs and external tank mounted. The MLP will roll from one bay to another in the Vehicle Assembly Building.
Pulsed x-ray sources for characterization of gated framing cameras
NASA Astrophysics Data System (ADS)
Filip, Catalin V.; Koch, Jeffrey A.; Freeman, Richard R.; King, James A.
2017-08-01
Gated X-ray framing cameras are used to measure important characteristics of inertial confinement fusion (ICF) implosions, such as size and symmetry, with 50 ps time resolution in two dimensions. A pulsed source of hard (>8 keV) X-rays would be a valuable calibration device, for example for gain-droop measurements of the variation in sensitivity of the gated strips. We have explored the requirements for such a source and a variety of options that could meet these requirements. We find that a small-size dense plasma focus machine could be a practical single-shot X-ray source for this application if timing uncertainties can be overcome.
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
A Digital Video System for Observing and Recording Occultations
NASA Astrophysics Data System (ADS)
Barry, M. A. Tony; Gault, Dave; Pavlov, Hristo; Hanna, William; McEwan, Alistair; Filipović, Miroslav D.
2015-09-01
Stellar occultations by asteroids and outer solar system bodies can offer ground-based observers with modest telescopes and camera equipment the opportunity to probe the shape, size, atmosphere, and attendant moons or rings of these distant objects. The essential requirements of the camera and recording equipment are: good quantum efficiency and low noise; minimal dead time between images; good horological faithfulness of the image timestamps; robustness of the recording to unexpected failure; and low cost. We describe an occultation observing and recording system which attempts to fulfil these requirements, and compare the system with other reported camera and recorder systems. Five systems have been built, deployed, and tested over the past three years, and we report on three representative occultation observations: one being a 9 ± 1.5 s occultation of the trans-Neptunian object 28978 Ixion (mv = 15.2) at 3 seconds per frame; one being a 1.51 ± 0.017 s occultation of Deimos, the 12-km-diameter satellite of Mars, at 30 frames per second; and one being an 11.04 ± 0.4 s occultation, recorded at 7.5 frames per second, of the main belt asteroid 361 Havnia, representing a low-magnitude-drop (Δmv ≈ 0.4) occultation.
Particle displacement tracking applied to air flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640 × 480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 s. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
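A minimal sketch of the displacement-tracking idea, pairing each particle in one frame with its nearest neighbor in the next frame and converting pixel displacement to velocity, follows. The frame interval, pixel scale, and use of scipy's KD-tree are illustrative assumptions, not details from the NASA report.

```python
import numpy as np
from scipy.spatial import cKDTree

def track_velocities(pts0, pts1, dt, m_per_px, max_disp_px=20.0):
    """Nearest-neighbor particle displacement tracking.

    pts0, pts1: (N, 2) particle centroids (pixels) in two frames
    dt: time between frames (s); m_per_px: image scale (m/pixel)
    Returns (matched positions, velocity vectors in m/s).
    """
    dist, idx = cKDTree(pts1).query(pts0, distance_upper_bound=max_disp_px)
    ok = np.isfinite(dist)                        # drop unmatched particles
    disp = (pts1[idx[ok]] - pts0[ok]) * m_per_px  # displacement in meters
    return pts0[ok], disp / dt
```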
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWRI, the starting and ending frequencies of a linear frequency-modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI samples more points along the modulation waveform than conventional lock-in thermography imaging, because it is not limited by undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWRI images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera frame rates.
Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing
NASA Technical Reports Server (NTRS)
Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio
2013-01-01
The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Each frame transform is updated by its owner. FM updates the site and saved frames of the surface tree; as the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients, including ARM and RSM (Remote Sensing Mast), update the rover frames they own. Through the onboard centralized FM frame tree database, client modules can query the transform between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of the frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
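The central service described, querying the transform between any two frames in a tree, is commonly implemented by composing each frame's parent transform up to the root. Below is a minimal sketch under that assumption (4×4 homogeneous matrices, invented names); it illustrates the idea and is not the MSL flight code.

```python
import numpy as np

class FrameTree:
    """Toy frame-tree database: each frame stores the transform
    that maps its coordinates into its parent's coordinates."""

    def __init__(self):
        self.parent = {}      # frame name -> parent frame name
        self.to_parent = {}   # frame name -> 4x4 homogeneous transform

    def add(self, name, parent, to_parent):
        self.parent[name] = parent
        self.to_parent[name] = to_parent

    def to_root(self, name):
        T = np.eye(4)
        while name in self.parent:            # walk up to the root frame
            T = self.to_parent[name] @ T
            name = self.parent[name]
        return T

    def transform(self, src, dst):
        """Matrix taking points in `src` coordinates to `dst` coordinates."""
        return np.linalg.inv(self.to_root(dst)) @ self.to_root(src)
```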
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Reasoning About Visibility in Mirrors: A Comparison Between a Human Observer and a Camera.
Bertamini, Marco; Soranzo, Alessandro
2018-01-01
Human observers make errors when predicting what is visible in a mirror. This is true for perception with real mirrors as well as for reasoning about mirrors shown in diagrams. We created an illustration of a room, a top-down view, with a mirror on one wall and objects (nails) on the opposite wall. The task was to select which nails were visible in the mirror from a given position (viewpoint). To study the importance of the social nature of the viewpoint, we divided the sample (N = 108) into two groups. One group (n = 54) was tested with a scene in which there was the image of a person. The other group (n = 54) was tested with the same scene but with a camera replacing the person. These participants were instructed to think about what would be captured by a camera on a tripod. This manipulation tests the effect of social perspective-taking in reasoning about mirrors. As predicted, performance on the task shows an overestimation of what can be seen in a mirror and a bias to underestimate the role of the different viewpoints, that is, a tendency to treat the mirror as if it captures information independently of viewpoint. In terms of the comparison between person and camera, there were more errors for the camera, suggesting an advantage for evaluating a human viewpoint as opposed to an artificial viewpoint. We suggest that social mechanisms may be involved in perspective-taking in reasoning rather than in automatic attention allocation.
NASA Astrophysics Data System (ADS)
Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.
2017-11-01
A body-worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. One more important aspect to consider while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are: the size of the video; audio for the video; combining audio and video and saving the result in .mp4 format; a battery sized for 8 hours of continuous recording; and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
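As a rough illustration of the recording step on a Raspberry Pi, the sketch below uses the `picamera` library to capture 1080p/30 fps H.264 video. The abstract's .mp4 output would then require muxing (e.g., with ffmpeg) plus a separate audio capture path, neither of which is shown; file names and the duration are assumptions.

```python
import picamera

# Record 1080p at 30 fps to a raw H.264 stream; wrapping the stream
# in an .mp4 container (and adding audio) is a separate muxing step.
with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
    camera.start_recording('evidence.h264')
    camera.wait_recording(10)        # record for 10 seconds (demo value)
    camera.stop_recording()
```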
Inspecting rapidly moving surfaces for small defects using CNN cameras
NASA Astrophysics Data System (ADS)
Blug, Andreas; Carl, Daniel; Höfler, Heinrich
2013-04-01
A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 × 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz, far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
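The figure of merit mentioned, the ratio of lateral feature size to surface speed, fixes the required sampling rate. A quick back-of-envelope check under the stated numbers (10 m/s surface, 100-μm defects) is sketched below; the factor-of-two Nyquist margin is an assumption, not a number from the article.

```python
surface_speed = 10.0     # m/s, from the wire-drawing example
feature_size = 100e-6    # m, smallest defect to resolve

# At least one sampled line per feature length passing the camera;
# ~2x that for Nyquist-style margin.
line_rate_min = surface_speed / feature_size   # 100 kHz
line_rate_nyq = 2 * line_rate_min              # 200 kHz
print(line_rate_min, line_rate_nyq)
```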
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
High-Speed Videography Instrumentation And Procedures
NASA Astrophysics Data System (ADS)
Miller, C. E.
1982-02-01
High-speed videography has been an electronic analog of low-speed film cameras, but with the advantages of instant replay and simplicity of operation. Recent advances have pushed frame rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis and sports equipment testing.
Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras
NASA Astrophysics Data System (ADS)
Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; (The New Horizons Team
2018-01-01
NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.
The use of near-infrared photography to image fired bullets and cartridge cases.
Stein, Darrell; Yu, Jorn Chi Chung
2013-09-01
An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmark comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. On visual comparison, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more detail and could not effectively eliminate the reflections and glare associated with visible light photography. Near-IR photography showed little advantage over visible light (regular) photography in the manual examination of fired evidence. © 2013 American Academy of Forensic Sciences.
Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach
NASA Astrophysics Data System (ADS)
Kaelldahl, A.; Duranti, P.
The use of onboard cine cameras, as well as that of on-ground cinetheodolites, is very popular in flight testing. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario now that solid-state optical sensors have dramatically reduced the dimensions and weight of TV cameras, allowing them to be located in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time and is normally carried out manually. To improve the situation, in the last few years several flight test centers have devoted their attention to techniques which allow quicker and more effective image treatment.
Electronic camera-management system for 35-mm and 70-mm film cameras
NASA Astrophysics Data System (ADS)
Nielsen, Allan
1993-01-01
Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
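The TSPI reduction described, triangulating an object from two tracking mounts' azimuth/elevation data, amounts to intersecting two sight rays. A minimal least-squares sketch (midpoint of the closest approach between the two rays) follows; the axis convention and the assumption of non-parallel rays are illustrative, not details from the paper.

```python
import numpy as np

def az_el_to_dir(az, el):
    """Unit sight-line from azimuth/elevation in radians
    (x east, y north, z up -- an assumed convention)."""
    return np.array([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and
    p2 + s*d2 (assumes the rays are not parallel)."""
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```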
1996-01-01
... used to locate and characterize a magnetic dipole source, a finding that accelerated the development of superconducting tensor gradiometers. The sensor suite combined a superconducting magnetic field gradiometer, a two-color infrared camera, synthetic aperture radar, and a visible-spectrum camera.
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015
USDA-ARS?s Scientific Manuscript database
This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...
Earth Observations taken by the Expedition Seven crew
2003-10-26
ISS007-E-18087 (26 October 2003) --- The fires in the San Bernardino Mountains, fueled by Santa Ana winds, burned out of control on the morning of Oct. 26, 2003, when this image and several others were taken from the International Space Station. This frame and image numbers 18086 and 18088 were taken at approximately 19:54 GMT, October 26, 2003, with a digital still camera equipped with a 400mm lens. Silverwood Lake is visible at the bottom of the image. Content was provided by JSC's Earth Observation Lab. The International Space Station Program (http://spaceflight.nasa.gov) supports the laboratory to help astronauts take pictures of Earth that will be of the greatest value to scientists and the public, and to make those images freely available on the Internet. Additional images taken by astronauts and cosmonauts can be viewed at the NASA/JSC Gateway to Astronaut Photography of Earth (http://eol.jsc.nasa.gov/).
NASA Technical Reports Server (NTRS)
2000-01-01
This single frame from a color movie of Jupiter from NASA's Cassini spacecraft shows what it would look like to unpeel the entire globe of Jupiter and stretch it out on a wall in the form of a rectangular map. The image is a color cylindrical projection of the complete circumference of Jupiter, from 60 degrees south to 60 degrees north. It was produced from six images taken by Cassini's narrow-angle camera on Oct. 31, 2000, in each of three filters: red, green, and blue. The smallest visible features at the equator are about 600 kilometers (about 370 miles) across. In a map of this type, the most extreme northern and southern latitudes are unnaturally stretched out. Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.
Optical diagnostics on the Magnetized Shock Experiment (MSX)
NASA Astrophysics Data System (ADS)
Boguski, J. C.; Weber, T. E.; Intrator, T. P.; Smith, R. J.; Dunn, J. P.; Hutchinson, T. M.; Gao, K. W.
2013-10-01
The Magnetized Shock Experiment (MSX) at Los Alamos National Laboratory was built to investigate the physics of high Alfvén Mach number, supercritical, magnetized shocks through the acceleration and subsequent stagnation of a Field Reversed Configuration (FRC) plasmoid against a magnetic mirror and/or plasma target. A suite of optical diagnostics has recently been fielded on MSX to characterize plasma conditions during the formation, acceleration, and stagnation phases of the experiment. CCD-backed streak and framing cameras, and a fiber-based visible light array, provide information regarding FRC shape, velocity, and instability growth. Time-resolved narrow and broadband spectroscopy provides information on pre-shock plasma temperature, impurity levels, shock location, and non-thermal ion distributions within the shock region. Details of the diagnostic design, configuration, and characterization will be presented along with initial results. This work is supported by the Center for Magnetic Self Organization, DoE OFES and NNSA under LANS contract DE-AC52-06NA25369. Approved for public release: LA-UR- 13-25190.
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
Blunck, Ch; Becker, F; Urban, M
2011-03-01
In nuclear medicine therapies, people working with beta emitters such as (90)Y may be exposed to non-negligible partial-body doses. For radiation protection, it is important to know the characteristics of the radiation field and the possible dose exposures at relevant positions in the working area. Besides extensive measurements, simulations can provide these data. For this purpose, a movable hand phantom for Monte Carlo simulations was developed. Specific beta-emitter handling scenarios can be modelled interactively with forward kinematics or automatically with an inverse kinematics procedure. As a first investigation, the dose distribution on a medical doctor's hand injecting a (90)Y solution was measured and simulated with the phantom. Modelling was done with the interactive method, based on five consecutive frames from a video recorded during the injection. Owing to the use of only one camera, not every detail of the radiation scenario is visible in the video. In spite of systematic uncertainties, the measured and simulated dose values are in good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oz, E.; Myers, C. E.; Yamada, M.
2011-07-19
The stability properties of partial toroidal flux ropes are studied in detail in the laboratory, motivated by ubiquitous arched magnetic structures found on the solar surface. The flux ropes studied here are magnetized arc discharges formed between two electrodes in the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys. Plasmas 4, 1936 (1997)]. The three-dimensional evolution of these flux ropes is monitored by a fast visible light framing camera, while their magnetic structure is measured by a variety of internal magnetic probes. The flux ropes are consistently observed to undergo large-scale oscillations as a result of an external kink instability. Using detailed scans of the plasma current, the guide field strength, and the length of the flux rope, we show that the threshold for kink stability is governed by the Kruskal-Shafranov limit for a flux rope that is held fixed at both ends (i.e., q_a = 1).
Enhancing Close-Up Image Based 3d Digitisation with Focus Stacking
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Chliverou, R.; Koutsoudis, A.; Pavlidis, G.; Georgopoulos, A.
2017-08-01
The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast, and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. The extension of the DoF is therefore a very important task that should be incorporated into the data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, particularly the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experiment conditions included the use of a full-frame camera equipped with a normal (50 mm) lens, with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
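A common way to realize the focus-stacking step described, assuming the images in the stack are already aligned, is to keep at each pixel the frame with the strongest local sharpness (e.g., Laplacian response). The OpenCV-based sketch below illustrates that idea; it is a generic implementation under those assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def focus_stack(images):
    """Merge an aligned focus stack (list of same-size BGR uint8 images):
    per pixel, take the frame whose Laplacian response is strongest."""
    stack = np.stack(images)                                # (N, H, W, 3)
    sharp = np.stack([
        np.abs(cv2.Laplacian(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY),
                             cv2.CV_64F, ksize=5))
        for im in images
    ])
    # Smooth the sharpness maps so the per-pixel choice is less noisy.
    sharp = np.stack([cv2.GaussianBlur(s, (9, 9), 0) for s in sharp])
    best = sharp.argmax(axis=0)                             # (H, W) frame index
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```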
Explosives Instrumentation Group Trial 6/77-Propellant Fire Trials (Series Two).
1981-10-01
... frames/s. A 19 mm Sony U-Matic video cassette recorder (VCR) and camera were used to view the hearth from a tower 100 m from ground-zero (GZ). ... camera started. This procedure permitted increased recording time of the event. A 19 mm Sony U-Matic VCR and camera were used to view the container ...
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circle above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott
2015-01-01
The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal-plane hip, knee, and ankle angles, with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g., lack of digitization practice and marker placement), participants (e.g., loose-fitting clothing), and camera systems (e.g., frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging; it was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera to use a single ultra-large CCD sensor and thus avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic panchromatic (PAN) sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a large 391-megapixel CMOS sensor was used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
Use of cameras for monitoring visibility impairment
NASA Astrophysics Data System (ADS)
Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie
2018-02-01
Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
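A contrast-type index of the kind discussed can be tied to extinction through Koschmieder-style attenuation, C(d) = C0·exp(−bext·d), so a sight path of known length yields an extinction estimate. The sketch below illustrates that inversion; the intrinsic contrast C0 and all numeric values are assumptions, not figures from the study.

```python
import numpy as np

def contrast(target_radiance, horizon_sky_radiance):
    """Universal contrast of a dark target against the horizon sky."""
    return (horizon_sky_radiance - target_radiance) / horizon_sky_radiance

def extinction(apparent_contrast, distance_m, intrinsic_contrast=1.0):
    """Invert C = C0 * exp(-bext * d) for the extinction coefficient
    (1/m); intrinsic_contrast = 1 assumes an ideal black target."""
    return -np.log(apparent_contrast / intrinsic_contrast) / distance_m

# e.g. a ridge 20 km away whose time-averaged apparent contrast is 0.3:
bext = extinction(0.3, 20e3)   # ~6e-5 m^-1, i.e. ~60 inverse megameters
```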
High-Speed Video Analysis in a Conceptual Physics Class
NASA Astrophysics Data System (ADS)
Desbien, Dwain M.
2011-09-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software [2,3]. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves the use of model rockets to determine the acceleration during the boost period right at launch and to compare it to a simple model of the expected acceleration.
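As an illustration of the analysis step, a short sketch of extracting boost acceleration from positions digitized frame by frame; the frame rate and position values here are illustrative only, not data from the paper:

```python
import numpy as np

def boost_acceleration(y_m, fps):
    """Estimate acceleration from video-tracked rocket heights.
    y_m: heights (meters) digitized frame by frame; fps: capture rate.
    Central differences give velocity, then acceleration."""
    dt = 1.0 / fps
    v = np.gradient(y_m, dt)
    a = np.gradient(v, dt)
    return v, a

# e.g. positions digitized from 240 fps footage of the first instants of boost
y = np.array([0.000, 0.001, 0.004, 0.009, 0.016, 0.025])  # illustrative
v, a = boost_acceleration(y, fps=240)
print(a.mean())  # compare with the simple thrust/mass - g model
```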
Stephey, L.; Wurden, G. A.; Schmitz, O.; ...
2016-08-08
A combined IR and visible camera system [G. A. Wurden et al., “A high resolution IR/visible imaging system for the W7-X limiter,” Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Together, the two systems provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. Finally, the resulting photon flux from both the visible camera and the filterscopes can be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., “Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X,” Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, which will ultimately be used to infer the complete particle balance and the particle confinement time τ_P.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
NASA Technical Reports Server (NTRS)
Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike
2014-01-01
The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with a gain of 2.0 +/- 0.5, less than or equal to 25 e- readout noise, less than or equal to 10 e-/second/pixel dark current, and less than 0.1 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: system gain, dark current, read noise, and residual non-linearity.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
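The abstract's least-mean-squares step relates the 3D stereo points to the 2D flow; as a hedged simplification, the sketch below solves the equivalent 3D-3D least-squares problem with the SVD (Kabsch) solution, once optical flow has been used to pair stereo-triangulated points across frames. This is an illustration of the estimator class, not necessarily the exact formulation used in the system:

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid motion (R, t) mapping 3D points P -> Q.
    P, Q: (N, 3) arrays of stationary-background points triangulated by
    stereo in consecutive frames (optical flow gives the correspondence).
    The camera pair's own motion is the inverse of the returned (R, t)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In the described system, pixels whose observed flow disagrees with the motion predicted by this background solution would be the candidates for independently moving external objects.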
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry.
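A hedged sketch of the frame-transformation step: chaining the calibrated rotations to move the estimated pose from the camera frame into the Earth frame. The function names, axis conventions (boresight along +Z, East-North-Up Earth frame), and angle definitions are all assumptions for illustration, not the paper's:

```python
import numpy as np

def antenna_pose_in_earth_frame(R_cam_antenna, R_imu_cam, R_earth_imu):
    """Chain rotations to express the antenna orientation in the Earth frame.
    R_cam_antenna: antenna->camera rotation from multi-view pose estimation.
    R_imu_cam:     camera->IMU extrinsic from the camera-IMU calibration.
    R_earth_imu:   IMU->Earth rotation from the phone's orientation sensors.
    All 3x3 numpy rotation matrices; names are illustrative."""
    R_earth_antenna = R_earth_imu @ R_imu_cam @ R_cam_antenna
    # Downtilt/azimuth of the assumed +Z boresight in an assumed ENU frame
    bore = R_earth_antenna @ np.array([0.0, 0.0, 1.0])
    downtilt = np.degrees(np.arcsin(-bore[2]))
    azimuth = np.degrees(np.arctan2(bore[0], bore[1])) % 360.0
    return downtilt, azimuth
```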
Whirlwind Drama During Spirit's 496th Sol
NASA Technical Reports Server (NTRS)
2005-01-01
This movie clip shows a dust devil growing in size and blowing across the plain inside Mars' Gusev Crater. The clip consists of frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the morning of the rover's 496th martian day, or sol (May 26, 2005). Contrast has been enhanced for anything in the images that changes from frame to frame, that is, for the dust moved by wind.
Field trials for determining the visible and infrared transmittance of screening smoke
NASA Astrophysics Data System (ADS)
Sánchez Oliveros, Carmen; Santa-María Sánchez, Guillermo; Rosique Pérez, Carlos
2009-09-01
In order to evaluate the concealment capability of smoke, the Countermeasures Laboratory of the Institute of Technology "Marañosa" (ITM) has conducted a set of tests measuring the transmittance of multispectral smoke tins in several bands of the electromagnetic spectrum. The smoke composition, based on red phosphorus, was developed and patented by this laboratory as part of a projectile development. The smoke transmittance was measured by means of thermography as well as spectroradiometry. Black bodies and halogen lamps were used as infrared and visible radiation sources. The measurements were carried out in June 2008 at the Marañosa field (Spain) with two MWIR cameras, two LWIR cameras, one visible CCD camera, one CVF IR spectroradiometer covering the interval 1.5 to 14 microns, and one silicon-array spectroradiometer for the 0.2 to 1.1 μm range. The transmittance and dimensions of the smoke screen were characterized in the visible band and in the MWIR (3-5 μm) and LWIR (8-12 μm) regions. The screen was about 30 meters wide and 5 meters high. The transmittances were about 0.3 in the IR bands and below 0.1 in the visible band. The screens proved effective over their time of persistence in all of the tests. The results obtained from the imaging and non-imaging systems were in good agreement. Meteorological conditions during the tests, such as wind speed, are decisive for the use of this kind of optical countermeasure.
A Summary of the Evaluation of PPG Herculite XP Glass in Punched Window and Storefront Assemblies
2013-01-01
Frames for all IGU windows were extruded from existing dies. The glazing was secured to the frame on all four sides with a 1/2-in bead width of DOW 995. For the laminated-lite and non-laminated IGU debris tests, a wood frame with a 4-in-wide slit was placed behind the window to transform the debris cloud into a narrow stream. The instrumentation comprised a high-speed camera DIC set-up, a laser deflection gauge, the shock tube and window, the wood frame with slit, and a well-lit backdrop with a high-speed camera for the debris-tracking set-up.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35 mm frame), with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.
Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar
2014-01-01
An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.
3D Surface Generation from Aerial Thermal Imagery
NASA Astrophysics Data System (ADS)
Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.
2015-12-01
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated using thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor equipped with a 25 mm lens, mounted on an Unmanned Aerial Vehicle (UAV). The results show that the 3D model generated from thermal images has accuracy comparable to a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) value is smaller than 5 decimetres in both the X and Y directions and 1.6 meters in the Z direction.
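A minimal sketch of the tie-point step using OpenCV's SIFT with Lowe's ratio test; the matcher settings and ratio threshold are illustrative, not the authors' exact configuration:

```python
import cv2

def tie_points(frame_a, frame_b):
    """Generate tie points between two extracted thermal video frames
    (grayscale uint8 images) with SIFT, as in the paper's second step."""
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(frame_a, None)
    kb, db = sift.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(da, db, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
    pts_a = [ka[m.queryIdx].pt for m in good]
    pts_b = [kb[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```

These correspondences would then feed the bundle adjustment and dense matching stages the abstract describes.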
Snapshot hyperspectral fovea vision system (HyperVideo)
NASA Astrophysics Data System (ADS)
Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.
2012-06-01
The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
Underwater image mosaicking and visual odometry
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott
2017-05-01
This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
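A hedged sketch of turning frame-to-frame registration into a velocity estimate, assuming a translation-only ECC registration and known altitude and per-pixel angular resolution; the actual pipeline also includes enhancement and mosaicking, which are omitted here:

```python
import numpy as np
import cv2

def frame_velocity(prev, curr, altitude_m, ifov_rad, fps):
    """Estimate UUV ground velocity from consecutive seabed frames.
    prev, curr: grayscale uint8 frames. Registers curr to prev with a
    translation-only ECC transform, then scales the pixel shift by the
    ground sample distance (altitude * IFOV). altitude_m would come from
    the vehicle's altimeter; ifov_rad is the per-pixel angular resolution.
    Both are assumed inputs."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(prev, curr, warp,
                                   cv2.MOTION_TRANSLATION, criteria)
    dx_px, dy_px = warp[0, 2], warp[1, 2]
    gsd = altitude_m * ifov_rad             # meters per pixel on the seabed
    return dx_px * gsd * fps, dy_px * gsd * fps  # m/s along the image axes
```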
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras on UAVs. Moving targets occupy only a small minority of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving-target detection in UAV video, we propose a heterogeneous CPU-GPU moving-target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough to solve the problem.
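A CPU-only sketch of the two core steps (background registration via a feature-based homography, then frame differencing); the CPU-GPU partitioning and all parameter values are omitted or assumed:

```python
import cv2
import numpy as np

def moving_target_mask(prev, curr, thresh=25):
    """Register the previous grayscale frame to the current one,
    compensating UAV ego-motion with a sparse-feature homography,
    then difference the frames to expose small moving targets."""
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                 qualityLevel=0.01, minDistance=10)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    H, _ = cv2.findHomography(p0[st == 1], p1[st == 1], cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev, H, prev.shape[::-1])
    diff = cv2.absdiff(curr, warped)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```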
NASA Technical Reports Server (NTRS)
Papanyan, Valeri; Oshle, Edward; Adamo, Daniel
2008-01-01
Measurement of a jettisoned object's departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours of separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); the location of the jettisoned object was then calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
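A minimal sketch of the camera-orientation (resection) step, assuming OpenCV's solvePnP as a stand-in for the close-range photogrammetric orientation from known ISS reference points:

```python
import cv2
import numpy as np

def camera_pose_from_iss_points(obj_pts, img_pts, K):
    """Recover an ISS camera's orientation and position from known
    reference points on station hardware.
    obj_pts: (N, 3) ISS-frame coordinates of the reference points;
    img_pts: (N, 2) pixel locations; K: camera intrinsic matrix."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(obj_pts, np.float32),
                                  np.asarray(img_pts, np.float32),
                                  K, None)
    R, _ = cv2.Rodrigues(rvec)
    cam_pos_iss = -R.T @ tvec   # camera position in the ISS frame
    return R, cam_pos_iss
```

With two cameras oriented this way, the object's ISS-frame position in each time-frame follows by triangulating its image coordinates, and differencing successive positions yields the velocity vector.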
LROC WAC Ultraviolet Reflectance of the Moon
NASA Astrophysics Data System (ADS)
Robinson, M. S.; Denevi, B. W.; Sato, H.; Hapke, B. W.; Hawke, B. R.
2011-10-01
Earth-based color filter photography, first acquired in the 1960s, showed color differences related to morphologic boundaries on the Moon [1]. These color units were interpreted to indicate compositional differences, thought to be the result of variations in titanium content [1]. Later it was shown that iron abundance (FeO) also plays a dominant role in controlling color in lunar soils [2]. Equally important to a lunar soil's reflectance properties (albedo and color) is its maturity [3]. Maturity is a measure of the state of alteration of surface materials due to sputtering and high-velocity micrometeorite impacts over time [3]. The Clementine (CL) spacecraft provided the first global and digital visible-through-infrared observations of the Moon [4]. This pioneering dataset allowed significant advances in our understanding of compositional (FeO and TiO2) and maturation differences across the Moon [5,6]. Later, the Lunar Prospector (LP) gamma ray and neutron experiments provided the first global, albeit low resolution, elemental maps [7]. Newly acquired Moon Mineralogy Mapper hyperspectral measurements are now providing the means to better characterize mineralogic variations on a global scale [8]. Our knowledge of ultraviolet color differences between geologic units is limited to low resolution (km scale) nearside telescopic observations, high resolution Hubble Space Telescope images of three small areas [9], and laboratory analyses of lunar materials [10,11]. These previous studies detailed color differences in the UV (100 to 400 nm) related to composition and physical state. HST UV (250 nm) and visible (502 nm) color differences were found to correlate with TiO2, and were relatively insensitive to the maturity effects seen in visible ratios (CL) [9]. These two results led to the conclusion that improvements in TiO2 estimation accuracy over existing methods may be possible through a simple UV/visible ratio [9]. The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) provides the first global lunar ultraviolet-through-visible (321 nm to 689 nm) multispectral observations [12]. The WAC is a seven-color push-frame imager with nominal resolutions of 400 m (321, 360 nm) and 100 m (415, 566, 604, 643, 689 nm). Due to its wide field-of-view (60° in color mode), the phase angle within a single line varies ±30°, thus requiring the derivation of a precise photometric characterization [13] before any interpretations of lunar reflectance properties can be made. The current WAC photometric correction relies on multiple WAC observations of the same area over a broad range of phase angles and typically results in relative corrections good to a few percent [13].
The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results
NASA Astrophysics Data System (ADS)
Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.
2017-11-01
At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained from three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II), and a medium-format camera (Hasselblad H4D). In order to check how the results obtained from these sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and the reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology, and the National Museum in Warsaw.
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.
The Far Ultra-Violet Imager on the Icon Mission
NASA Astrophysics Data System (ADS)
Mende, S. B.; Frey, H. U.; Rider, K.; Chou, C.; Harris, S. E.; Siegmund, O. H. W.; England, S. L.; Wilkins, C.; Craig, W.; Immel, T. J.; Turin, P.; Darling, N.; Loicq, J.; Blain, P.; Syrstad, E.; Thompson, B.; Burt, R.; Champagne, J.; Sevilla, P.; Ellis, S.
2017-10-01
The ICON Far UltraViolet (FUV) imager contributes to the ICON science objectives by providing remote sensing measurements of the daytime and nighttime atmosphere/ionosphere. During sunlit atmospheric conditions, ICON FUV images the limb altitude profile in the shortwave (SW) band at 135.6 nm and the longwave (LW) band at 157 nm perpendicular to the satellite motion to retrieve the atmospheric O/N2 ratio. In conditions of atmospheric darkness, ICON FUV measures the 135.6 nm recombination emission of O+ ions, used to compute the nighttime ionospheric altitude distribution. The instrument is a Czerny-Turner spectrographic imager with two exit slits and corresponding back-imager cameras that produce two independent images in separate wavelength bands on two detectors. All observations will be processed as limb altitude profiles. In addition, the ionospheric 135.6 nm data will be processed as longitude-latitude spatial maps to obtain images of ion distributions around regions of equatorial spread F. The ICON FUV optic axis is pointed 20 degrees below local horizontal, and a steering mirror allows the field of view to be steered up to 30 degrees forward and aft to keep the local magnetic meridian in the field of view. The detectors are microchannel plate (MCP)-intensified FUV tubes with the phosphor fiber-optically coupled to Charge Coupled Devices (CCDs). The dual-stack MCPs amplify the photoelectron signals to overcome the CCD noise, and the rapidly scanned frames are co-added to digitally create 12-second integrated images. Digital on-board signal processing is used to compensate for geometric distortion and satellite motion and to achieve data compression. The instrument was originally aligned in visible light by using a special grating and visible cameras. Final alignment, functional and environmental testing, and calibration were performed in a large vacuum chamber with a UV source. The test and calibration program showed that ICON FUV meets its design requirements and is ready to be launched on the ICON spacecraft.
'Lyell' Panorama inside Victoria Crater
NASA Technical Reports Server (NTRS)
2008-01-01
During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay. Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.' This view combines many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). Images taken through Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers were mixed to produce an approximately true-color panorama. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera. Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).
'Lyell' Panorama inside Victoria Crater (Stereo)
NASA Technical Reports Server (NTRS)
2008-01-01
During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay. Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.' This view incorporates many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). It combines a stereo pair so that it appears three-dimensional when seen through blue-red glasses. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera. Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).
Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, H. B., E-mail: rayhb@ornl.gov; Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831; Biewer, T. M.
2016-11-15
Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line-ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine T_e and n_e. Increasing the number of measurement chords through the plasma improves the inversion calculation and the subsequent T_e and n_e localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of T_e and n_e in the plasma. This paper assesses the candidate He I emission lines for such an imaging technique.
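A hedged sketch of the line-ratio inversion against a CRM: measured ratios are matched to tabulated model ratios on a (T_e, n_e) grid. The CRM tables below are random placeholders, not real model output, and the brute-force grid search is for clarity only:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical CRM lookup tables: ratio values tabulated on a (Te, ne) grid
Te_grid = np.linspace(2, 20, 10)       # eV
ne_grid = np.logspace(17, 19, 10)      # m^-3
crm_728_706 = np.random.rand(10, 10)   # placeholder for CRM output
crm_667_728 = np.random.rand(10, 10)   # placeholder for CRM output

def fit_te_ne(i667, i728, i706):
    """Find the (Te, ne) whose CRM ratios best match the measured
    helium line intensities (after inversion to local emissivities)."""
    r1, r2 = i728 / i706, i667 / i728
    f1 = RegularGridInterpolator((Te_grid, ne_grid), crm_728_706)
    f2 = RegularGridInterpolator((Te_grid, ne_grid), crm_667_728)
    TT, NN = np.meshgrid(Te_grid, ne_grid, indexing="ij")
    pts = np.stack([TT.ravel(), NN.ravel()], axis=1)
    cost = (f1(pts) - r1) ** 2 + (f2(pts) - r2) ** 2
    k = int(np.argmin(cost))
    return pts[k, 0], pts[k, 1]
```

Applied per pixel of the filtered cameras, the same matching yields the 2-dimensional T_e and n_e "images" the paper describes.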
Fusion of thermal- and visible-band video for abandoned object detection
NASA Astrophysics Data System (ADS)
Beyan, Cigdem; Yigit, Ahmet; Temizel, Alptekin
2011-07-01
Timely detection of packages that are left unattended in public spaces is a security concern, and rapid detection is important for prevention of potential threats. Because constant surveillance of such places is challenging and labor intensive, automated abandoned-object-detection systems aiding operators have come into wide use. In many studies, stationary objects, such as people sitting on a bench, are also detected as suspicious because abandoned items are defined as items newly added to the scene that remain stationary for a predefined time. Any stationary object therefore triggers an alarm, causing a high number of false alarms. These false alarms could be prevented by classifying suspicious items as living or nonliving objects. In this study, a system for abandoned object detection that aids operators surveilling indoor environments, such as airports and railway or metro stations, is proposed. By analyzing information from a thermal-band and a visible-band camera, people and the objects left behind can be detected and discriminated as living or nonliving, reducing the false-alarm rate. Experiments demonstrate that using data obtained from a thermal camera in addition to a visible-band camera also increases the true detection rate of abandoned objects.
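A minimal sketch of the living/nonliving discrimination, assuming a radiometrically calibrated thermal camera co-registered with the visible one; the temperature window and vote threshold are illustrative, not values from the paper:

```python
import numpy as np

def classify_stationary_blob(visible_mask, thermal_img,
                             body_temp_range=(30.0, 38.0)):
    """Classify a stationary blob found in the visible channel.
    visible_mask: binary mask of the blob; thermal_img: co-registered
    thermal frame in degrees C (calibration is an assumption). The blob
    is called 'living' if its thermal pixels look body-warm."""
    temps = thermal_img[visible_mask > 0]
    warm_fraction = np.mean((temps > body_temp_range[0]) &
                            (temps < body_temp_range[1]))
    return "living" if warm_fraction > 0.5 else "abandoned-object candidate"
```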
An approach enabling adaptive FEC for OFDM in fiber-VLLC system
NASA Astrophysics Data System (ADS)
Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin
2017-12-01
In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for a fiber-visible laser light communication (VLLC) system and experimentally demonstrate it with Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation messages beyond the training sequence (TS), which is simultaneously used for synchronization and channel estimation. RS coding can therefore be adapted frame by frame via feedback of the last received codeword error rate (CER), estimated from the TSs of the previous few OFDM frames. The experimental results show that, over 20 km of standard single-mode fiber (SSMF) and 8 m of visible light transmission, the RS codeword overhead is up to 14.12% lower than that of conventional adaptive subcarrier RS-coded 16-QAM OFDM at a bit error rate (BER) of 10^-5.
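A minimal sketch of frame-level adaptation driven by CER feedback, using the third-party reedsolo package for RS coding over GF(256); the CER thresholds and parity levels are illustrative, not the paper's operating points:

```python
from reedsolo import RSCodec

def pick_parity(cer):
    """Choose more RS parity bytes when the fed-back CER is worse."""
    if cer < 1e-4:
        return 8
    if cer < 1e-2:
        return 16
    return 32

def encode_frame(payload, last_cer):
    """Encode one frame's payload with a code rate adapted to the
    CER reported for the previously received frames."""
    nsym = pick_parity(last_cer)
    return RSCodec(nsym).encode(payload)

# e.g. protect a frame more heavily after a bad channel report
codeword = encode_frame(b"frame payload bytes", last_cer=5e-3)
```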
Augmented reality in laser laboratories
NASA Astrophysics Data System (ADS)
Quercioli, Franco
2018-05-01
Laser safety glasses block visibility of the laser light. This is a big nuisance when a clear view of the beam path is required. A headset made up of a smartphone and a viewer can overcome this problem: the user looks at the image of the real world on the cellphone display, captured by its rear camera, achieving an unimpeded and safe view of the laser beam. If the infrared-blocking filter of the smartphone camera is removed, the spectral sensitivity of the CMOS image sensor extends into the near-infrared region, up to 1100 nm. This substantial improvement widens the usability of the device to the many industrial and medical laser systems that operate in this spectral region. The paper describes this modification of a phone camera to extend its sensitivity beyond the visible and make a true augmented-reality laser viewer.
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame), have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985, with the STS-51L crew, learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame), have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985, with the STS-51L crew, learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
Deployment of the RCA Satcom K-2 communications satellite
1985-11-28
61B-38-36W (28 Nov 1985) --- The 4,144-pound RCA Satcom K-2 communications satellite is photographed as it spins from the cargo bay of the Earth-orbiting Atlantis. A TV camera at right records the deployment for a later playback to Earth. This frame was photographed with a handheld Hasselblad camera inside the spacecraft.
Validation of Viewing Reports: Exploration of a Photographic Method.
ERIC Educational Resources Information Center
Fletcher, James E.; Chen, Charles Chao-Ping
A time lapse camera loaded with Super 8 film was employed to photographically record the area in front of a conventional television receiver in selected homes. The camera took one picture each minute for three days, including in the same frame the face of the television receiver. Family members kept a conventional viewing diary of their viewing…
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
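A sketch of the geometry behind auto-convergence: the toe-in angle that makes the two optical axes intersect at the estimated object distance. How the FPGA derives that distance from scene content (e.g., from the dominant stereo disparity) is not shown, and the example values are illustrative:

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Toe-in angle per camera so both optical axes intersect at the
    object distance. distance_m would come from scene content; here it
    is simply an input."""
    return math.degrees(math.atan2(baseline_m / 2.0, distance_m))

# e.g. 6 cm baseline, object 2 m away -> about 0.86 deg toe-in per camera
print(convergence_angle_deg(0.06, 2.0))
```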
1996-01-29
In this image taken by NASA's Voyager wide-angle camera on Aug. 23, 1989, the two main rings of Neptune can be clearly seen. In the lower part of the frame the originally announced ring arc, consisting of three distinct features, is visible. This feature covers about 35 degrees of longitude and has yet to be radially resolved in Voyager images. From higher resolution images it is known that this region contains much more material than the diffuse belts seen elsewhere in its orbit, which seem to encircle the planet. This is consistent with the fact that ground-based observations of stellar occultations by the rings show them to be very broken and clumpy. The more sensitive wide-angle camera is revealing more widely distributed but fainter material. Each of these rings of material lies just outside the orbit of a newly discovered moon. One of these moons, 1989N2, may be seen in the upper right corner. The moon is streaked by its orbital motion, whereas the stars in the frame are less smeared. The dark areas around the bright moon and star are artifacts of the processing required to bring out the faint rings. This wide-angle image was taken from a range of 2 million kilometers (1.2 million miles), through the clear filter. http://photojournal.jpl.nasa.gov/catalog/PIA00053
Optimal design of an earth observation optical system with dual spectral and high resolution
NASA Astrophysics Data System (ADS)
Yan, Pei-pei; Jiang, Kai; Liu, Kai; Duan, Jing; Shan, Qiusha
2017-02-01
With the increasing demand for high-resolution remote sensing images from both military and civilian users, countries around the world are optimistic about the prospects of higher-resolution remote sensing imagery, and designing a visible/infrared integrated optical system has important value for Earth observation. Because a visible-band system cannot identify camouflage or perform reconnaissance at night, the visible camera should be paired with an infrared camera. An Earth-observation optical system with dual spectral bands and high resolution is designed. This paper mainly addresses the integrated design of the visible and infrared optical system, which makes the system lighter and smaller and achieves one satellite with two uses. The working waveband of the system covers the visible and the middle infrared (3-5 um). Clear dual-waveband imaging is achieved with a dispersive RC system. The focal length of the visible system is 3056 mm with an F/# of 10.91, and the focal length of the middle-infrared system is 1120 mm with an F/# of 4. In order to suppress middle-infrared thermal radiation and stray light, a second imaging system is employed and the narcissus phenomenon is analyzed. A notable characteristic of the system is its simple structure, and the special requirements on the Modulation Transfer Function (MTF), spot size, energy concentration, distortion, etc. are all satisfied.
A device for synchronizing biomechanical data with cine film.
Rome, L C
1995-03-01
Biomechanists are faced with two problems in synchronizing continuous physiological data to discrete, frame-based kinematic data from films. First, the accuracy of most synchronization techniques is good only to one frame and hence depends on framing rate. Second, even if perfectly correlated at the beginning of a 'take', the film and physiological data may become progressively desynchronized as the 'take' proceeds. A system is described, which provides synchronization between cine film and continuous physiological data with an accuracy of +/- 0.2 ms, independent of framing rate and the duration of the film 'take'. Shutter pulses from the camera were output to a computer recording system where they were recorded and counted, and to a digital device which counted the pulses and illuminated the count on the bank of LEDs which was filmed with the subject. Synchronization was performed by using the rising edge of the shutter pulse and by comparing the frame number imprinted on the film to the frame number recorded by the computer system. In addition to providing highly accurate synchronization over long film 'takes', this system provides several other advantages. First, having frame numbers imprinted both on the film and computer record greatly facilitates analysis. Second, the LEDs were designed to show the 'take number' while the camera is coming up to speed, thereby avoiding the use of cue cards which disturb the animal. Finally, use of this device results in considerable savings in film.
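A minimal sketch of the resulting bookkeeping: each recorded shutter pulse timestamps one frame, and the LED-imprinted frame number ties the film back to the physiological record. The function and data names are illustrative only:

```python
def frame_times(shutter_pulse_times_s):
    """Map film frame numbers to physiological-record time: the i-th
    rising edge logged by the recording computer is the exposure time
    of frame i, whose number is also imprinted on the film by the LED
    counter. Returns {frame_number: time_s}."""
    return {i: t for i, t in enumerate(shutter_pulse_times_s)}

# e.g. with pulses logged on a 1 kHz physiology record, kinematic frame 42
# aligns to the muscle-force sample nearest frame_times(pulses)[42]
pulses = [0.000, 0.005, 0.010, 0.015]   # illustrative 200 Hz framing
print(frame_times(pulses)[2])
```

Because the mapping is anchored to each pulse edge rather than to an initial event, the film and the continuous record cannot drift apart over a long 'take', which is the device's main point.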
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR and D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR and D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for the narrow and wide fields of view (FOV), are normally used in order to meet high-reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional stray-light performance and minimal lens flare, given the intense sunlight and lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR and D visible camera system. The lenses were designed using the ZEMAX program; the stray-light performance and the lens baffles were simulated using the TracePro program. This paper discusses general requirements for space AR and D camera lenses and the specific measures that allow lenses to meet space environmental requirements.
Clouds Sailing Above Martian Horizon, Enhanced
2017-08-09
Clouds drift across the sky above a Martian horizon in this accelerated sequence of enhanced images from NASA's Curiosity Mars rover. The rover's Navigation Camera (Navcam) took these eight images over a span of four minutes early in the morning of the mission's 1,758th Martian day, or sol (July 17, 2017), aiming toward the south horizon. They have been processed by first making a "flat field" adjustment for known differences in sensitivity among pixels and correcting for camera artifacts due to light reflecting within the camera, and then generating an "average" of all the frames and subtracting that average from each frame. This subtraction emphasizes changes, whether due to movement -- such as the clouds' motion -- or due to lighting -- such as changing shadows on the ground as the morning sunlight angle changed. On the same Martian morning, Curiosity also observed clouds nearly straight overhead. The clouds resemble Earth's cirrus clouds, which are ice crystals at high altitudes. These Martian clouds are likely composed of crystals of water ice that condense onto dust grains in the cold Martian atmosphere. Cirrus wisps appear as ice crystals fall and evaporate in patterns known as "fall streaks" or "mare's tails." Such patterns have been seen before at high latitudes on Mars, for instance by the Phoenix Mars Lander in 2008, and seasonally nearer the equator, for instance by the Opportunity rover. However, Curiosity has not previously observed such clouds so clearly visible from the rover's study area about five degrees south of the equator. The Hubble Space Telescope and spacecraft orbiting Mars have observed a band of clouds to appear near the Martian equator around the time of the Martian year when the planet is farthest from the Sun. With a more elliptical orbit than Earth's, Mars experiences more annual variation than Earth in its distance from the Sun. The most distant point in an orbit around the Sun is called the aphelion. The near-equatorial Martian cloud pattern observed at that time of year is called the "aphelion cloud belt." These new images from Curiosity were taken about two months before aphelion, but the morning clouds observed may be an early stage of the aphelion cloud belt. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21840
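A minimal sketch of the described average-subtraction enhancement; the flat-field and internal-reflection corrections are assumed already applied to the input stack:

```python
import numpy as np

def emphasize_changes(frames):
    """Subtract the per-pixel average of a frame stack from each frame,
    leaving only what moves or changes illumination between frames.
    frames: sequence of 2D arrays (n_frames x rows x cols)."""
    stack = np.asarray(frames, dtype=np.float64)
    return stack - stack.mean(axis=0, keepdims=True)
```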
Context View from 11' on ladder from southeast corner of ...
Context View from 11' on ladder from southeast corner of Bottle Village parcel, just inside fence. Doll Head Shrine at far left frame, Living Trailer (c.1960 "Spartanette") in center frame. Little Wishing Well at far right frame. Some shrines and small buildings were destroyed in the January 1994 Northridge earthquake, and only their perimeter walls and foundations exist. Camera facing north northwest. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are treated as reference images. The algorithm acquires an input image frame, then selects a region of interest and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold α, the floor has changed; otherwise, there is an obstacle. The proposed algorithm works in real time, and 96% accuracy has been achieved.
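A compact sketch of the decision rule described above; the threshold value and the frame/reference names are hypothetical, since the abstract does not specify them:

```python
import numpy as np

ALPHA = 100.0  # hypothetical threshold; the abstract gives no value

def mse(a, b):
    """Mean square error between two equally sized grayscale frames."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def classify(current_frame, next_frame, floor_references):
    """Apply the rule: no obstacle / floor changed / obstacle."""
    if mse(current_frame, next_frame) < ALPHA:
        return "no obstacle"
    # Large change: either an obstacle appeared or the floor type changed.
    if min(mse(next_frame, ref) for ref in floor_references) < ALPHA:
        return "floor changed"
    return "obstacle"
```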
Io's Sodium Cloud (Clear Filter and Green-Yellow Filter with Intensity Contours)
NASA Technical Reports Server (NTRS)
1997-01-01
This picture contains two images of Jupiter's moon Io and its surrounding sky. The original frame was exposed twice, once through a clear filter and once through a green-yellow filter. The camera pointed in slightly different directions for the two exposures, placing a clear filter image of Io in the top half of the frame, and a green-yellow filter image of Io in the bottom half of the frame. This picture shows the entire original frame with the addition of intensity contours and false color. East is to the right.
Most of Io's visible surface is in shadow, though part of a white crescent can be seen on its western side. This crescent is being illuminated mostly by 'Jupitershine' (i.e., sunlight reflected off Jupiter). Near Io's eastern equatorial edge is a burst of white light, which shows up best in the lower image. This is sunlight being scattered by the plume of the volcano Prometheus. Prometheus lies just beyond the visible edge of the moon on Io's far side. Its plume extends about 100 kilometers above the surface, and is being hit by sunlight just a little east of Io's eastern edge. The sky is full of diffuse light, some of which is scattered light from Prometheus' plume and Io's lit crescent (particularly in the half of the frame dominated by the clear filter). However, much of the diffuse emission comes from Io's Sodium Cloud: sodium atoms within Io's extensive material halo are scattering sunlight into both the clear and green-yellow filters at a wavelength of about 589 nanometers. The intensity contours help to illustrate that: (i) significant diffuse emission is present all the way to the eastern edge of the frame (indeed, the Sodium Cloud is known to extend far beyond that edge); (ii) the diffuse emission exhibits a directional feature at about four o'clock relative to Io's center (similar features have been seen in the Sodium Cloud at greater distances from Io). The upper image of Io exhibits a roundish white spot in the bottom half of Io's shadowed side. This corresponds to thermal emission from the volcano Pele. The lower image bears a much smaller trace of this emission because the clear filter is far more sensitive than the green-yellow filter to those relatively long wavelengths where thermal emission is strongest. This image was taken at 5 hours 30 minutes Universal Time on Nov. 9, 1996 by the solid state imaging (CCD) system aboard NASA's Galileo spacecraft. Galileo was then in Jupiter's shadow, and located about 2.3 million kilometers (about 32 Jovian radii) from both Jupiter and Io. The Jet Propulsion Laboratory, Pasadena, CA, manages the mission for NASA's Office of Space Science, Washington D.C. This image and other images and data received from Galileo are posted on the World Wide Web Galileo mission home page at: http://galileo.jpl.nasa.gov.
SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance
NASA Technical Reports Server (NTRS)
White, Joseph; Dutta, Soumyo; Striepe, Scott
2015-01-01
The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analyses were performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and a comparison with flight video from SFDT-1.
CIFAR10-DVS: An Event-Stream Dataset for Object Classification
Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping
2017-01-01
Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since the data must be recorded with neuromorphic cameras. Currently, only limited event-stream datasets are available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named "CIFAR10-DVS." The conversion was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions that move the camera over static images, moving the images themselves is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time, which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification. PMID:28611582
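For intuition, a toy model of how a DVS pixel quantizes local intensity changes into address events; the contrast threshold and the frame-sequence input are illustrative assumptions, not the recording setup used for CIFAR10-DVS:

```python
import numpy as np

def dvs_events(frames, theta=0.15):
    """Toy DVS model: emit (x, y, t, polarity) whenever the per-pixel
    log-intensity change since that pixel's last event crosses theta.
    'frames' stands in for the sequence produced by the RCLS image motion.
    """
    ref = np.log1p(frames[0].astype(float))
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log1p(frame.astype(float))
        diff = log_i - ref
        ys, xs = np.where(np.abs(diff) >= theta)
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]   # reset reference at firing pixels
    return events
```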
Coates, Colin G; Denvir, Donal J; McHale, Noel G; Thornbury, Keith D; Hollywood, Mark A
2004-01-01
The back-illuminated electron-multiplying charge-coupled device (EMCCD) camera is having a profound influence on the field of low-light dynamic cellular microscopy, combining the highest possible photon collection efficiency with the ability to virtually eliminate the readout-noise detection limit. We report here the use of this camera, in 512 x 512 frame-transfer chip format at 10-MHz pixel readout speed, in optimizing a demanding ultra-low-light intracellular calcium flux microscopy setup. The arrangement employs a spinning confocal Nipkow disk, which, while meeting the need both to generate images at very rapid frame rates and to minimize background photons, yields very weak signals. The challenge for the camera lies not just in detecting as many of these scarce photons as possible, but also in operating at a frame rate that meets the temporal resolution requirements of many low-light microscopy approaches, a particular demand of smooth muscle calcium flux microscopy. Results presented illustrate both the significant sensitivity improvement offered by this technology over the previous standard in ultra-low-light CCD detection, the GenIII+ intensified charge-coupled device (ICCD), and the advanced temporal and spatial resolution capabilities of the EMCCD. Copyright 2004 Society of Photo-Optical Instrumentation Engineers.
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye-tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye-tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between the 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye-tracker camera. While this method is not suitable for eye-tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
A New Instrument for the IRTF: the MIT Optical Rapid Imaging System (MORIS)
NASA Astrophysics Data System (ADS)
Gulbis, Amanda A. S.; Elliot, J. L.; Rojas, F. E.; Bus, S. J.; Rayner, J. T.; Stahlberger, W. E.; Tokunaga, A. T.; Adams, E. R.; Person, M. J.
2010-10-01
NASA's 3-m Infrared Telescope Facility (IRTF) on Mauna Kea, HI plays a leading role in obtaining planetary science observations. However, there has been no capability for high-speed, visible imaging from this telescope. Here we present a new IRTF instrument, MORIS, the MIT Optical Rapid Imaging System. MORIS is based on POETS (Portable Occultation Eclipse and Transit Systems; Souza et al., 2006, PASP, 118, 1550). Its primary component is an Andor iXon camera, a 512x512 array of 16-micron pixels with high quantum efficiency, low read noise, low dark current, and full-frame readout rates of between 3.5 Hz (6 e-/pixel read noise) and 35 Hz (49 e-/pixel read noise at electron-multiplying gain=1). User-selectable binning and subframing can increase the cadence to a few hundred Hz. An electron-multiplying mode can be employed for photon counting, effectively reducing the read noise to sub-electron levels at the expense of dynamic range. Data cubes, or individual frames, can be triggered to nanosecond accuracy using a GPS. MORIS is mounted on the side-facing window of SpeX (Rayner et al. 2003, PASP, 115, 362), allowing simultaneous near-infrared and visible observations. The mounting box contains 3:1 reducing optics to produce a 60 arcsec x 60 arcsec field of view at f/12.7. It hosts a ten-slot filter wheel, with Sloan g', r', i', and z', VR, Johnson V, and long-pass red filters. We describe the instrument design, components, and measured characteristics. We report results from the first science observations, a 24 June 2008 stellar occultation by Pluto. We also discuss a recent overhaul of the optical path, performed in order to eliminate scattered light. This work is supported in part by NASA Planetary Major Equipment grant NNX07AK95G. We are indebted to the University of Hawai'i Institute for Astronomy machine shop, in particular Randy Chung, for fabricating instrument components.
Mission Specialist (MS) Bluford exercises on middeck treadmill
1983-09-05
STS008-13-0361 (30 Aug.-5 Sept. 1983) --- Astronaut Guion S. Bluford, STS-8 mission specialist, assists Dr. William E. Thornton (out of frame) with a medical test that requires use of the treadmill exercising device designed for spaceflight by the STS-8 medical doctor. This frame was shot with a 35mm camera. Photo credit: NASA
Mission Specialist Hawley works with the SWUIS experiment
2013-11-18
STS093-350-022 (22-27 July 1999) --- Astronaut Steven A. Hawley, mission specialist, works with the Southwest Ultraviolet Imaging System (SWUIS) experiment onboard the Earth-orbiting Space Shuttle Columbia. The SWUIS is based around a Maksutov-design Ultraviolet (UV) telescope and a UV-sensitive, image-intensified Charge-Coupled Device (CCD) camera that frames at video frame rates.
High frame rate imaging systems developed in Northwest Institute of Nuclear Technology
NASA Astrophysics Data System (ADS)
Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli
2007-01-01
This paper presents high-frame-rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-light event-capture working mode; a specific timing sequence was designed to satisfy this requirement. The camera image data can be transmitted to a remote area over coaxial or fiber-optic cable and then stored. The second type utilizes the PHOTOBIT complementary metal oxide semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) x 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps with partial readout. The prototype scheme of the system is presented. The third type adopts charge-coupled devices (CCDs) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1, and CA-D6 camera heads are used in these systems. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.
Invisible marker based augmented reality system
NASA Astrophysics Data System (ADS)
Park, Hanhoon; Park, Jong-Il
2005-07-01
Augmented reality (AR) has recently gained significant attention. Previous AR techniques usually need a fiducial marker with known geometry, or objects whose structure can be easily estimated, such as a cube. Placing a marker in the workspace of the user can be intrusive. To overcome this limitation, we present an AR system using invisible markers which are created/drawn with an infrared (IR) fluorescent pen. Two cameras are used: an IR camera and a visible camera, positioned on either side of a cold mirror so that their optical centers coincide. We track the invisible markers using the IR camera and visualize the AR overlay in the view of the visible camera. Additional algorithms are employed to give the system reliable performance against cluttered backgrounds. Experimental results are given to demonstrate the viability of the proposed system. As an application, the invisible marker can act as a Vision-Based Identity and Geometry (VBIG) tag, which can significantly extend the functionality of RFID. The invisible tag is like RFID in that it is not perceivable, but more powerful in that the tag information can be presented to the user by direct projection with a mobile projector or by visualizing AR on the screen of a mobile PDA.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. It uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows close stacking (0.04", for example) of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.
Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors
Everding, Lukas; Conradt, Jörg
2018-01-01
In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer; hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
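As a rough illustration of the plane-detection idea, a least-squares fit of a plane to address events in x-y-t space; the event array and its interpretation are simplifying assumptions, not the authors' full detection-and-tracing pipeline:

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares fit of a plane t = a*x + b*y + c to DVS events.

    events : (N, 3) array of (x, y, t) address-event coordinates.
    A line moving at constant velocity sweeps out such a plane in
    x-y-t space; (a, b) relate to its apparent motion and c to its
    time offset (illustrative sketch only).
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
    return a, b, c
```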
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert
2011-01-01
A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff
2010-01-01
These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…
NASA Technical Reports Server (NTRS)
1976-01-01
The design, fabrication, and tests of a solid-state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array of 163,840 sensor elements, which employs a vertical frame-transfer system for image readout. Included are test results for the complete camera system, a circuit description with the changes made to those circuits as a result of integration and test, a maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.
Passive stand-off terahertz imaging with 1 hertz frame rate
NASA Astrophysics Data System (ADS)
May, T.; Zieger, G.; Anders, S.; Zakosarenko, V.; Starkloff, M.; Meyer, H.-G.; Thorwirth, G.; Kreysa, E.
2008-04-01
Terahertz (THz) cameras are expected to be a powerful tool for future security applications. If such a technology is to be useful for typical security scenarios (e.g., airport check-in), it has to meet some minimum standards. A THz camera should record images at video rate from a safe (stand-off) distance. Although active cameras are conceivable, a passive system has the benefit of concealed operation. Additionally, from an ethical perspective, the lack of exposure to a radiation source is a considerable advantage for public acceptance. Taking all these requirements into account, only cooled detectors can achieve the needed sensitivity. A big leap forward in detector performance and scalability was driven by the astrophysics community: superconducting bolometers and mid-sized arrays of them have been developed and are in routine use. Although devices with many pixels are foreseeable, at present a device with an additional scanning optic is the most direct path to an imaging system with useful resolution. We demonstrate the capabilities of a concept for a passive terahertz video camera based on superconducting technology. The current prototype utilizes a small Cassegrain telescope with a gyrating secondary mirror to record 2-kilopixel THz images at a 1 Hz frame rate.
Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.
Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Darwish, Walid; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong
2018-05-01
Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways, from depth image and by using the initial poses to triangulate. The model improves the camera poses and decreases drift error during long-distance RGB-D tracking iteratively. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features.
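For context, the standard closed-form (Kabsch) estimate of a rigid transform from matched 3D correspondences, the kind of coarse frame-by-frame registration step described above; this generic sketch is not the authors' full hybrid-correspondence model:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch estimate of R, t such that R @ P_i + t ≈ Q_i.

    P, Q : (N, 3) matched 3D correspondences from consecutive RGB-D frames
           (hypothetical inputs; in practice they come from depth-valid
           feature matches).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```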
ERIC Educational Resources Information Center
Fisher, Diane K.; Novati, Alexander
2009-01-01
On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
Single-camera visual odometry to track a surgical X-ray C-arm base.
Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn
2017-12-01
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
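A minimal sketch of homography-chaining dead reckoning with OpenCV, in the spirit of the system described; the feature counts and RANSAC threshold are arbitrary choices, not the paper's parameters:

```python
import cv2
import numpy as np

def track_base(frames):
    """Dead-reckon a downward-looking camera by chaining frame-to-frame
    homographies estimated from sparse optical flow."""
    pose = np.eye(3)                          # cumulative homography
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=400,
                                     qualityLevel=0.01, minDistance=7)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, p0, None)
        good0, good1 = p0[st == 1], p1[st == 1]
        H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
        pose = H @ pose                       # dead-reckoning update
        prev = gray
    return pose
```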
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranson, W.F.; Schaeffel, J.A.; Murphree, E.A.
The response of prestressed and preheated plates subject to an exponentially decaying blast load was experimentally determined. A grid was reflected from the front surface of the plate and the response was recorded with a high-speed camera. The camera used in this analysis was a rotating-drum camera operating at 20,000 frames per second with a maximum of 224 frames at 39-microsecond separation. In-plane tension loads were applied to the plate by means of air cylinders. The maximum biaxial load applied to the plate was 500 pounds. Plate preheating was obtained with resistance heaters located in the specimen plate holder, with a maximum capability of 500°F. Data analysis was restricted to the maximum conditions at the center of the plate. Strains were determined from the photographic data and the stresses were calculated from the strain data. Results were obtained from zero-preload conditions up to a maximum of 480 pounds in-plane tension load and a plate temperature of 490°F. The blast load ranged from 6 to 23 psi.
NASA Technical Reports Server (NTRS)
Viton, M.; Courtes, G.; Sivan, J. P.; Decher, R.; Gary, A.
1985-01-01
Technical difficulties encountered using the Very Wide Field Camera (VWFC) during the Spacelab 1 Shuttle mission are reported. The VWFC is a wide-field, low-resolution (5 arcmin half-width) photographic camera, capable of operating in both spectrometric and photometric modes. The bandpasses of the photometric mode are defined by three Al + MgF2 interference filters. A piggy-back spectrograph attached to the VWFC was used for observations in the spectrometric mode. A total of 48 astronomical frames were obtained with the VWFC, of which only 20 were considered of adequate quality for astronomical data processing. Preliminary analysis of the 28 poor-quality images revealed the following possible defects in the VWFC: darkness in the spacing frames, twilight/dawn UV straylight, and internal UV straylight. Improvements in the VWFC astronomical data processing scheme are expected to help identify and eliminate UV straylight sources in the future.
JunoCam: Outreach and Science Opportunities
NASA Astrophysics Data System (ADS)
Hansen, Candice; Ingersoll, Andy; Caplinger, Mike; Ravine, Mike; Orton, Glenn
2014-11-01
JunoCam is a visible imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is for outreach, science objectives will be addressed too. JunoCam is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno’s elliptical polar orbit will offer unique views of Jupiter’s polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloudtop heights. Resolution exceeds that of Cassini about an hour from closest approach and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft. The use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from in-between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant but some images will be acquired. Outer irregular satellites and small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is “science in a fish bowl”, with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we’ll predict when hot spots will be visible to JunoCam. Amateur image processing enthusiasts are onboard to create image products. Many of the earth flyby image products from Juno’s earth gravity assist were processed by amateurs. Between the planning and products will be the decision-making on what images to take when and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacement fields can be measured using cameras; in fact, with high-speed cameras, full-field spectral information up to a couple of kilohertz can be measured. The trouble is that high-speed cameras capable of recording high-resolution fields of view at high frame rates are very expensive (tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLRs and mirrorless models. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking modulates the intensity of the filmed scene, and the camera's image acquisition integrates over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
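A toy numerical demonstration of the lock-in principle behind this set-up: harmonically modulated illumination plus a long exposure yields, per pixel, the Fourier coefficient of the motion at the blink frequency. All values below are illustrative, not the authors' experimental parameters:

```python
import numpy as np

fs, T = 100_000, 1.0                          # simulation rate, exposure time
t = np.arange(0.0, T, 1.0 / fs)
f_vib = f_blink = 730.0                       # structure vibrates at 730 Hz
displacement = 2e-3 * np.sin(2 * np.pi * f_vib * t + 0.4)
illumination = 0.5 * (1.0 + np.cos(2 * np.pi * f_blink * t))

# The sensor integrates the modulated scene over the whole exposure:
pixel_value = np.sum(displacement * illumination) / fs
# Up to scale, this equals the cosine Fourier coefficient at f_blink:
reference = 0.5 * np.sum(displacement * np.cos(2 * np.pi * f_blink * t)) / fs
print(pixel_value, reference)                 # the two agree
```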
Location memory biases reveal the challenges of coordinating visual and kinesthetic reference frames
Simmering, Vanessa R.; Peterson, Clayton; Darling, Warren; Spencer, John P.
2008-01-01
Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems. PMID:17703284
Marshall, F J; Radha, P B
2014-11-01
A method to simultaneously image both the absorption and the self-emission of an imploding inertial confinement fusion plasma has been demonstrated on the OMEGA Laser System. The technique involves the use of a high-Z backlighter, half of which is covered with a low-Z material, and a high-speed x-ray framing camera aligned to capture images backlit by this masked backlighter. Two strips of the four-strip framing camera record images backlit by the high-Z portion of the backlighter, while the other two strips record images aligned with the low-Z portion. The emission from the low-Z material is effectively eliminated by a high-Z filter positioned in front of the framing camera, limiting the detected backlighter emission to that of the principal emission line of the high-Z material. As a result, half of the images are of self-emission from the plasma and the other half are of self-emission plus the backlighter. The advantage of this technique is that the self-emission simultaneous with the backlighter absorption is independently measured from a nearby direction. The absorption appears only in the high-Z backlit frames; the self-emission is either spatially separated from it, suppressed by filtering, overwhelmed by a backlighter much brighter than the self-emission, or removed by subtraction. The masked-backlighter technique has been used on the OMEGA Laser System to simultaneously measure the emission profiles and the absorption profiles of polar-driven implosions.
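The subtraction step reduces to simple image arithmetic; a hedged sketch under the assumption that registered frame pairs and an unattenuated backlighter profile are available (real data would also need gain matching and timing corrections):

```python
import numpy as np

def absorption_map(backlit, emission, backlighter_profile):
    """Illustrative masked-backlighter arithmetic: remove self-emission by
    subtraction, then normalize by the unattenuated backlighter profile
    to obtain a transmission (absorption) map. Inputs are hypothetical
    co-registered 2D arrays.
    """
    transmitted = backlit.astype(float) - emission.astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        transmission = transmitted / backlighter_profile
    return np.clip(np.nan_to_num(transmission), 0.0, 1.0)
```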
1999-08-25
Mosaic of Triton constructed from 16 individual images. After globally minimizing the camera pointing errors, the frames were reprocessed by map projection, photometric-function removal, and placement in the mosaic.
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
NASA Astrophysics Data System (ADS)
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we considered only the enhancement of a texture image. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced, in contrast to bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with images processed using bicubic interpolation, and that the average PSNRs were higher than those obtained with the well-known Freeman patch-based super-resolution method. Compared with that method, the computational time of our method was reduced to almost 1/10.
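For reference, the standard PSNR computation used as the quality metric above; the peak value of 255 assumes 8-bit images:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```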
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four-wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum-correlated modes from each of the twin beams to a high-quantum-efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single-pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot-noise limit.
An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera
NASA Astrophysics Data System (ADS)
Kumar, K. S. Chidanand; Bhowmick, Brojeshwar
A driver drowsiness identification system has been proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located within the face region. In subsequent frames of video, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
Compact Kirkpatrick–Baez microscope mirrors for imaging laser-plasma x-ray emission
Marshall, F. J.
2012-07-18
Compact Kirkpatrick–Baez microscope mirror components for use in imaging laser-plasma x-ray emission have been manufactured, coated, and tested. A single mirror pair has dimensions of 14 × 7 × 9 mm and a best resolution of ~5 μm. The mirrors are coated with Ir, providing a useful energy range of 2-8 keV when operated at a grazing angle of 0.7°. The mirrors can be circularly arranged to provide 16 images of the target emission, a configuration best suited for use in combination with a custom framing camera. Alternatively, an arrangement of the mirrors would allow alignment of the images with a four-strip framing camera.
NASA Astrophysics Data System (ADS)
Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben
2015-03-01
A system for dynamic mapping of broadband ultrasound fields has been designed, with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed-sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression as low as 10% was also demonstrated.
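A sketch of the fully sampled scrambled-Hadamard reconstruction path mentioned above; the permutation handling and image size are assumptions, and the actual sensor interrogation details are not reproduced:

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(measurements, perm, n=32):
    """Recover an n x n image from scrambled-Hadamard single-pixel readings.

    measurements : length n*n vector, one detector value per pattern
    perm         : the row permutation ("scrambling") used when sampling
    Since H @ H.T = N * I for a Hadamard matrix H, recovery from a full
    set of measurements is a single transform (a fast Hadamard transform
    in practice; a dense matrix is used here for clarity).
    """
    N = n * n
    H = hadamard(N)[perm]             # scrambled +/-1 measurement matrix
    image = H.T @ measurements / N    # inverse of y = H @ x
    return image.reshape(n, n)
```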
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version controls lateral motion of camera to generate stereoscopic views for measuring distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser," which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system
NASA Astrophysics Data System (ADS)
Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen
2018-02-01
In this study, a field-programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), for performing real-time wavefront measurement. Further, a Camera Link frame grabber with an embedded FPGA is adopted to enhance the sensor's speed of response to variations, taking advantage of Camera Link's high data-transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image-data transmission synchronize with the wavefront reconstruction. In addition, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic rate, limited by the camera frame rate, while leaving 40% of the logic slices for additional flexible designs.
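The core of any Shack-Hartmann algorithm is a per-subaperture centroid computation; a minimal NumPy version for reference (the lenslet grid size is illustrative, and this sequential sketch deliberately ignores the paper's FPGA parallelization):

```python
import numpy as np

def shws_slopes(spot_image, n_sub=16):
    """Centroid offsets per lenslet subaperture of a Shack-Hartmann frame.

    Splits the frame into an n_sub x n_sub grid and returns each spot's
    centroid offset from its subaperture center; the offsets are
    proportional to the local wavefront slopes.
    """
    h, w = spot_image.shape
    sh, sw = h // n_sub, w // n_sub
    ys, xs = np.mgrid[0:sh, 0:sw]
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            cell = spot_image[i*sh:(i+1)*sh, j*sw:(j+1)*sw].astype(float)
            total = cell.sum() or 1.0         # avoid divide-by-zero
            cy = (ys * cell).sum() / total - (sh - 1) / 2
            cx = (xs * cell).sum() / total - (sw - 1) / 2
            slopes[i, j] = (cy, cx)
    return slopes
```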
Online tracking of outdoor lighting variations for augmented reality with moving cameras.
Liu, Yanli; Granier, Xavier
2012-04-01
In augmented reality, one of the key tasks in achieving a convincing visual-appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination depends largely on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated by an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
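To illustrate the estimation idea in its simplest form, a least-squares fit of relative sun and sky intensities under a toy two-term lighting model; the per-feature inputs are hypothetical stand-ins for the paper's planar feature-point measurements, and the full method adds reliability constraints and temporal coherence:

```python
import numpy as np

def estimate_sun_sky(intensities, sun_shading):
    """Solve I_i ≈ k_sun * s_i + k_sky for (k_sun, k_sky).

    intensities : per-feature observed intensities in the current frame
    sun_shading : per-feature geometric sun term (cosine shading,
                  zero for features in shadow) -- assumed known here
    """
    A = np.column_stack([sun_shading, np.ones_like(sun_shading)])
    (k_sun, k_sky), *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return k_sun, k_sky
```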
NASA Astrophysics Data System (ADS)
Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul
2017-09-01
In this paper, we focus on pupil center detection in video sequences that include varied head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascaded local regression based on a regression forest. Starting from the rough pupil location, a fast radial symmetry transform is applied around the previously found location to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated to maintain accuracy when a false localization occurs in a particular frame. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil-center detection capability is better than that of other methods, with a shorter processing time.
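A small sketch of frame-to-frame displacement estimation using phase correlation as a simple stand-in for the displacement step described above; the inputs are assumed to be same-sized grayscale eye crops from consecutive IR frames, not the paper's exact procedure:

```python
import cv2
import numpy as np

def pupil_displacement(prev_eye, cur_eye):
    """Estimate the (dx, dy) shift of the pupil region between two frames.

    prev_eye, cur_eye : same-sized grayscale eye crops (hypothetical data).
    """
    (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_eye),
                                             np.float32(cur_eye))
    return dx, dy
```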