Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small type V ELM regime [2] and the residual ELMs observed during type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was installed recently on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports the measurements that have been performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche SOlari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of quantum efficiency in the visible spectrum, image blooming, and reset inefficiency in saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultrafast, 100% fill factor visible light camera in solar physics experiments.
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high speed video of plasmas in Proto-MPEX. The color camera is equipped with a long pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1 pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
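The ratio-rendering step described in this record is simple enough to sketch. The following Python fragment (a minimal illustration with assumed array shapes and placeholder calibration factors, not the authors' scripts) combines a registered color frame and a narrowband monochrome frame into Balmer line-ratio images:

```python
import numpy as np

def line_ratio_images(rgb_frame, mono_frame, red_cal=1.0, blue_cal=1.0, gamma_cal=1.0):
    """rgb_frame: (H, W, 3) color-camera frame (R ~ D-alpha, B ~ D-beta);
    mono_frame: (H, W) narrowband D-gamma frame, already pixel-registered.
    The *_cal factors stand in for the absolute intensity calibration."""
    d_alpha = rgb_frame[..., 0] * red_cal
    d_beta = rgb_frame[..., 2] * blue_cal
    d_gamma = mono_frame * gamma_cal
    eps = 1e-12  # guard against division by zero in dark pixels
    return d_beta / (d_alpha + eps), d_gamma / (d_beta + eps)

# Synthetic example: two registered 480x640 frames
ratio_ba, ratio_gb = line_ratio_images(np.random.rand(480, 640, 3),
                                       np.random.rand(480, 640))
```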
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of a photon counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing PAPA (Precision Analog Photon Address) camera that was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called DIAMICON (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
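Per axis, the position readout described above amounts to decoding a Gray-code word, read off the mask by the photodiode array, into a binary address. A minimal sketch of that decoding step (generic bit lists, not the DIAMICON electronics):

```python
def gray_to_binary(bits):
    """Decode a Gray-code word (list of 0/1, most significant bit first)
    into its integer address: b0 = g0, b_i = b_(i-1) XOR g_i."""
    value = bits[0]
    out = value
    for g in bits[1:]:
        value ^= g
        out = (out << 1) | value
    return out

assert gray_to_binary([1, 1, 0]) == 4  # Gray 110 -> binary 100
```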
Fast visible imaging of turbulent plasma in TORPEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Diallo, A.; Fasoli, A.
2008-10-15
Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement Langmuir probe measurements, which allows comparing statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of perpendicular view and in the plasma region where interchange modes dominate.
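As a rough illustration of the statistical comparison mentioned above, the sketch below (synthetic data and an assumed sampling rate, not the TORPEX analysis code) computes a probability density function and a power spectral density from a single fluctuation time series, the same quantities compared between camera pixels and probe tips:

```python
import numpy as np

def pdf_and_psd(signal, fs, bins=64):
    """Normalize a fluctuation signal, then return its PDF and a one-sided
    periodogram PSD estimate (frequencies in Hz for sampling rate fs)."""
    sig = (signal - signal.mean()) / signal.std()
    pdf, edges = np.histogram(sig, bins=bins, density=True)
    psd = np.abs(np.fft.rfft(sig))**2 / (fs * len(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return pdf, edges, freqs, psd

pdf, edges, freqs, psd = pdf_and_psd(np.random.randn(4096), fs=250e3)
```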
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenbuerger, S.; Brandt, C.; Brochard, F.
2010-06-15
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
NASA Astrophysics Data System (ADS)
Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.
2010-06-01
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, H. B., E-mail: rayhb@ornl.gov; Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831; Biewer, T. M.
2016-11-15
Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory’s Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine Te and ne. Increasing the number of measurement chords through the plasma improves the inversion calculation and subsequent Te and ne localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple, fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of Te and ne in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.
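A minimal sketch of the final inference step named above: comparing measured He I line ratios against a collisional radiative model to localize Te and ne. The CRM tables and grids here are placeholders standing in for real model output:

```python
import numpy as np

def fit_te_ne(r1_meas, r2_meas, te_grid, ne_grid, r1_crm, r2_crm):
    """r1_crm, r2_crm: 2D arrays of CRM-predicted ratios on the (Te, ne)
    grid; returns the grid point minimizing the squared ratio mismatch."""
    chi2 = (r1_crm - r1_meas)**2 + (r2_crm - r2_meas)**2
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return te_grid[i], ne_grid[j]

# Placeholder grids and tables, for illustration only
te = np.linspace(1, 50, 50); ne = np.logspace(17, 20, 40)
r1 = np.random.rand(50, 40); r2 = np.random.rand(50, 40)
print(fit_te_ne(0.5, 0.3, te, ne, r1, r2))
```

Applied per pixel to the inverted camera images, the same lookup would yield the 2-dimensional Te and ne "images" the record describes.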
Fast-camera imaging on the W7-X stellarator
NASA Astrophysics Data System (ADS)
Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.
2017-10-01
Fast cameras recording in the visible range have been used to study filamentary ("blob") edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the SOL or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X Stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.
Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy system
NASA Astrophysics Data System (ADS)
Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.
2012-10-01
A beam emission spectroscopy (BES) system based on a direct imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: an APD camera for the BES measurement and a fast visible camera for position calibration. Two pneumatically actuated mirrors were positioned at the front and rear of the lens optics. The front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and the visible camera. All systems worked properly, and the measured photon flux was reasonable, as expected from the simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of the KSTAR plasma, suggesting future research with the fully installed BES system. The analysis results and the development plan will be presented in this paper.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an input image to predict a marker's location from the visible light camera sensor on the drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the "region of interest" (ROI) that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
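The stated frame-rate dependence on ROI size follows from an approximately fixed readout pixel rate. A toy calculation, assuming the pixel rate implied by 18,000 fps at the full ~1000x1000 frame (an inferred figure, not a published camera specification):

```python
def max_frame_rate(roi_width, roi_height, pixel_rate=1.8e10):
    """Approximate achievable fps for a given ROI, assuming readout
    bandwidth (pixels/s) is the limiting factor."""
    return pixel_rate / (roi_width * roi_height)

print(max_frame_rate(1000, 1000))  # ~18,000 fps at the full frame
print(max_frame_rate(250, 250))    # a 16x smaller ROI -> ~16x faster
```

Real cameras add per-row and per-frame overheads, so the scaling is only approximate.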
Innovative diagnostics for ITER physics addressed in JET
NASA Astrophysics Data System (ADS)
Murari, A.; Edlington, T.; Alfier, A.; Alonso, A.; Andrew, Y.; Arnoux, G.; Beurskens, M.; Coad, P.; Crombe, C.; Gauthier, E.; Giroud, C.; Hidalgo, C.; Hong, S.; Kempenaars, M.; Kiptily, V.; Loarer, T.; Meigs, A.; Pasqualotto, R.; Tala, T.; Contributors, JET-EFDA
2008-12-01
In recent years, JET diagnostic capability has been significantly improved to widen the range of physical phenomena that can be studied and thus contribute to the understanding of some ITER relevant issues. The most significant results reported in this paper concern plasma-wall interactions, the interplay between core and edge physics, and fast particles. A synergy between new infrared cameras, visible cameras and spectroscopy diagnostics has allowed the investigation of a series of new aspects of the plasma-wall interactions. The power loads on the plasma facing components of the JET main chamber have been assessed at steady state and during transient events like ELMs and disruptions. Evidence of filaments in the edge region of the plasma has been collected with a new fast visible camera and high resolution Thomson scattering. The physics of detached plasmas and some new aspects of dust formation have also received particular attention. The influence of the edge plasma on the core has been investigated with upgraded active spectroscopy, providing new information on momentum transport and the effects of impurity injection on ELMs and ITBs and their interdependence. Given that JET is the only machine with a plasma volume big enough to confine the alphas, a coherent programme of diagnostic developments for energetic particles has been undertaken. With upgraded γ-ray spectroscopy and a new scintillator probe, it is now possible to study both the redistribution and the losses of the fast particles in various plasma conditions.
Fast camera imaging of dust in the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Yu, J. H.; Rudakov, D. L.; Pigarov, A. Yu.; Smirnov, R. D.; Brooks, N. H.; Muller, S. H.; West, W. P.
2009-06-01
Naturally occurring and injected dust particles are observed in the DIII-D tokamak in the outer midplane scrape-off-layer (SOL) using a visible fast-framing camera, and the size of dust particles is estimated using the observed particle lifetime and theoretical ablation rate of a carbon sphere. Using this method, the lower limit of detected dust radius is ˜3 μm and particles with inferred radius as large as ˜1 mm are observed. Dust particle 2D velocities range from approximately 10 to 300 m/s with velocities inversely correlated with dust size. Pre-characterized 2-4 μm diameter diamond dust particles are introduced at the lower divertor in an ELMing H-mode discharge using the divertor materials evaluation system (DiMES), and these particles are found to be at the lower size limit of detection using the camera with resolution of ˜0.2 cm² per pixel and exposure time of 330 μs.
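The size-inference idea in this record reduces to a one-line estimate if the radial ablation rate is treated as constant over the particle lifetime (the paper uses a full theoretical ablation model; the numbers below are placeholders):

```python
def dust_radius_from_lifetime(lifetime_s, ablation_rate_m_per_s):
    """Initial radius of a fully ablated sphere, assuming a constant
    radial ablation rate -- an illustrative simplification."""
    return lifetime_s * ablation_rate_m_per_s

# e.g., a 1 ms observed lifetime at an assumed 3 mm/s radial ablation
# rate implies an initial radius of ~3 um
print(dust_radius_from_lifetime(1e-3, 3e-3))
```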
Sub-picosecond streak camera measurements at LLNL: From IR to x-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Shepherd, R; Booth, R
An ultrafast, sub-picosecond resolution streak camera has recently been developed at LLNL. The camera is a versatile instrument with a wide operating wavelength range. A temporal resolution of up to 300 fs can be achieved, with routine operation at 500 fs. The streak camera has been operated over a wide wavelength range from the IR to x-rays up to 2 keV. In this paper we briefly review the main design features that result in the unique properties of the streak camera and present several of its scientific applications: (1) streak camera characterization using a Michelson interferometer in the visible range, (2) a temporally resolved study of a transient x-ray laser at 14.7 nm, which enabled us to vary the x-ray laser pulse duration from ~2-6 ps by changing the pump laser parameters, and (3) an example of a time-resolved spectroscopy experiment with the streak camera.
C-RED One and C-RED 2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera is called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.
A compact multichannel spectrometer for Thomson scattering
NASA Astrophysics Data System (ADS)
Schoenbeck, N. L.; Schlossberg, D. J.; Dowd, A. S.; Fonck, R. J.; Winz, G. R.
2012-10-01
The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved with a 2971 l/mm VPH grating and measurements of Te > 100 eV with a 2072 l/mm VPH grating. The spectrometer uses a fast-gated (˜2 ns) ICCD camera for detection. A Gen III image intensifier provides ˜45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
A compact multichannel spectrometer for Thomson scattering.
Schoenbeck, N L; Schlossberg, D J; Dowd, A S; Fonck, R J; Winz, G R
2012-10-01
The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved with a 2971 l∕mm VPH grating and measurements of Te > 100 eV with a 2072 l∕mm VPH grating. The spectrometer uses a fast-gated (~2 ns) ICCD camera for detection. A Gen III image intensifier provides ~45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
PRIMAS: a real-time 3D motion-analysis system
NASA Astrophysics Data System (ADS)
Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans
1994-03-01
The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
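The 3D reconstruction step described above is commonly implemented as linear (DLT) triangulation from two calibrated views. A minimal sketch under that assumption (3x4 projection matrices presumed known from the calibration stage, marker centroids already matched across cameras):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: P1, P2 are 3x4 projection matrices,
    x1, x2 the (u, v) marker centroids in each camera."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize to a 3D point
```

Whether PRIMAS uses exactly this formulation is not stated in the abstract; the sketch simply shows the standard two-view geometry.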
Kidd, David G; Brethwaite, Andrew
2014-05-01
This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 passenger vehicles from the 2010-2013 model years equipped with a backup camera or a backup camera plus a parking sensor system. The average blind zone for a 12-15-month-old was twice as large as that for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomanowski, B. A., E-mail: b.a.lomanowski@durham.ac.uk; Sharples, R. M.; Meigs, A. G.
2014-11-15
The mirror-linked divertor spectroscopy diagnostic on JET has been upgraded with a new visible and near-infrared grating and filtered spectroscopy system. New capabilities include extended near-infrared coverage up to 1875 nm, capturing the hydrogen Paschen series, as well as a 2 kHz frame rate filtered imaging camera system for fast measurements of impurity (Be II) and deuterium Dα, Dβ, Dγ line emission in the outer divertor. The expanded system provides unique capabilities for studying spatially resolved divertor plasma dynamics at near-ELM resolved timescales as well as a test bed for feasibility assessment of near-infrared spectroscopy.
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing comprises three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. The presented algorithm and measurement method have the following advantages: (1) they allow for non-contact and non-invasive measurement; (2) the measurement can be carried out using almost any camera, including webcams; (3) the object can be tracked in the scene, which allows for heart rate measurement while the patient is moving; (4) for a minimum of 40,000 pixels, the measurement error is less than ±2 beats per minute for p < 0.01 and sunlight, or slightly larger (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
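Stage (3) above, the FFT-based pulse calculation, can be sketched in a few lines of Python (synthetic brightness trace and an assumed webcam frame rate; the original implementation is in Matlab):

```python
import numpy as np

def pulse_bpm(brightness, fs, lo_hz=0.7, hi_hz=4.0):
    """Estimate pulse rate as the dominant spectral peak of a mean
    skin-region brightness trace, within a plausible human pulse band."""
    sig = brightness - brightness.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return 60.0 * freqs[band][np.argmax(spec[band])]

fs = 25.0                             # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
print(pulse_bpm(trace, fs))           # ~72 beats per minute
```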
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and readout.
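The two figures of merit defined above map directly onto a linear fit; a minimal sketch with generic arrays (not the paper's acquisition code):

```python
import numpy as np

def sensitivity_and_linearity(intensity, pixel_value):
    """Sensitivity = slope of the PV(I) line; linearity estimated by the
    correlation coefficient of the same data, as in the text."""
    slope, _intercept = np.polyfit(intensity, pixel_value, 1)
    r = np.corrcoef(intensity, pixel_value)[0, 1]
    return slope, r

I = np.linspace(0, 1, 20)
PV = 180 * I + 5 + np.random.normal(0, 2, I.size)   # synthetic camera response
print(sensitivity_and_linearity(I, PV))
```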
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that conforms mechanically to the visible objective design but with different lenses, in order to maintain commonality and as a proof-of-concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore optics design.
1998-10-30
This picture of Neptune was produced from the last whole planet images taken through the green and orange filters on NASA's Voyager 2 narrow angle camera. The images were taken at a range of 4.4 million miles from the planet, 4 days and 20 hours before closest approach. The picture shows the Great Dark Spot and its companion bright smudge; on the west limb the fast moving bright feature called Scooter and the little dark spot are visible. These clouds were seen to persist for as long as Voyager's cameras could resolve them. North of these, a bright cloud band similar to the south polar streak may be seen. http://photojournal.jpl.nasa.gov/catalog/PIA01492
AO WFS detector developments at ESO to prepare for the E-ELT
NASA Astrophysics Data System (ADS)
Downing, Mark; Casali, Mark; Finger, Gert; Lewis, Steffan; Marchetti, Enrico; Mehrgan, Leander; Ramsay, Suzanne; Reyes, Javier
2016-07-01
ESO has a very active ongoing AO WFS detector development program, not only to meet the needs of the current crop of instruments for the VLT, but also to gather requirements, plan, and develop detectors and controllers/cameras for the instruments in design and being proposed for the E-ELT. This paper provides an overall summary of the AO WFS detector requirements of the E-ELT instruments currently in design and telescope focal units. This is followed by a description of the many interesting detector, controller, and camera developments underway at ESO to meet these needs: (a) the rationale behind and plan to upgrade the AONGC camera based on the 240x240 pixel, 2000 fps, "zero noise" L3Vision CCD220 sensor; (b) the status of the LGSD/NGSD high-QE, 3 e- RoN, fast 700 fps, 1760x1680 pixel visible CMOS imager and camera development; (c) the status of and development plans for the Selex SAPHIRA NIR eAPD and controller. Most of the instruments and detector/camera developments are described in more detail in other papers at this conference.
Fast soft x-ray images of magnetohydrodynamic phenomena in NSTX.
Bush, C E; Stratton, B C; Robinson, J; Zakharov, L E; Fredrickson, E D; Stutman, D; Tritz, K
2008-10-01
A variety of magnetohydrodynamic (MHD) phenomena have been observed on NSTX. Many of these affect fast particle losses, which are of major concern for future burning plasma experiments. Usual diagnostics for studying these phenomena are arrays of Mirnov coils for magnetic oscillations and p-i-n diode arrays for soft x-ray emission from the plasma core. Data reported here are from a unique fast soft x-ray imaging camera (FSXIC) with a wide-angle (pinhole) tangential view of the entire plasma minor cross section. The camera provides a 64x64 pixel image, on a charge coupled device chip, of light resulting from conversion of soft x rays incident on a phosphor to the visible. We have acquired plasma images at frame rates of 1-500 kHz (300 frames/shot) and have observed a variety of MHD phenomena: disruptions, sawteeth, fishbones, tearing modes, and edge localized modes (ELMs). New data including modes with frequency >90 kHz are also presented. Data analysis and modeling techniques used to interpret the FSXIC data are described and compared, and FSXIC results are compared to Mirnov and p-i-n diode array results.
NASA Astrophysics Data System (ADS)
Fuentes-Fernández, J.; Cuevas, S.; Watson, A. M.
2018-04-01
We present the optical design of COATLI, a two-channel visible imager for a commercial 50 cm robotic telescope. COATLI will deliver diffraction-limited images (approximately 0.3 arcsec FWHM) in the riz bands, inside a 4.2 arcmin field, and seeing-limited images (approximately 0.6 arcsec FWHM) in the B and g bands, inside a 5 arcmin field, by means of a tip-tilt mirror for fast guiding and a deformable mirror for active optics, both located at two optically transferred pupil planes. The optical design is based on two collimator-camera systems plus a pupil transfer relay, using achromatic doublets of CaF2 and S-FTM16 and one triplet of N-BK7 and CaF2. We discuss the efficiency, tolerancing, thermal behavior and ghosts. COATLI will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, Baja California, Mexico, in 2018.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection that use far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN. This, however, takes longer to process and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects the more appropriate of two pedestrian candidate images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina
Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng
2010-01-01
A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743
Krychowiak, M; Adnan, A; Alonso, A; Andreeva, T; Baldzuhn, J; Barbui, T; Beurskens, M; Biel, W; Biedermann, C; Blackwell, B D; Bosch, H S; Bozhenkov, S; Brakel, R; Bräuer, T; Brotas de Carvalho, B; Burhenn, R; Buttenschön, B; Cappa, A; Cseh, G; Czarnecka, A; Dinklage, A; Drews, P; Dzikowicka, A; Effenberg, F; Endler, M; Erckmann, V; Estrada, T; Ford, O; Fornal, T; Frerichs, H; Fuchert, G; Geiger, J; Grulke, O; Harris, J H; Hartfuß, H J; Hartmann, D; Hathiramani, D; Hirsch, M; Höfel, U; Jabłoński, S; Jakubowski, M W; Kaczmarczyk, J; Klinger, T; Klose, S; Knauer, J; Kocsis, G; König, R; Kornejew, P; Krämer-Flecken, A; Krawczyk, N; Kremeyer, T; Książek, I; Kubkowska, M; Langenberg, A; Laqua, H P; Laux, M; Lazerson, S; Liang, Y; Liu, S C; Lorenz, A; Marchuk, A O; Marsen, S; Moncada, V; Naujoks, D; Neilson, H; Neubauer, O; Neuner, U; Niemann, H; Oosterbeek, J W; Otte, M; Pablant, N; Pasch, E; Sunn Pedersen, T; Pisano, F; Rahbarnia, K; Ryć, L; Schmitz, O; Schmuck, S; Schneider, W; Schröder, T; Schuhmacher, H; Schweer, B; Standley, B; Stange, T; Stephey, L; Svensson, J; Szabolics, T; Szepesi, T; Thomsen, H; Travere, J-M; Trimino Mora, H; Tsuchiya, H; Weir, G M; Wenzel, U; Werner, A; Wiegel, B; Windisch, T; Wolf, R; Wurden, G A; Zhang, D; Zimbal, A; Zoletnik, S
2016-11-01
Wendelstein 7-X, a superconducting optimized stellarator built in Greifswald/Germany, started its first plasmas with the last closed flux surface (LCFS) defined by 5 uncooled graphite limiters in December 2015. At the end of the 10-week-long experimental campaign (OP1.1) more than 20 independent diagnostic systems were in operation, allowing detailed studies of many interesting plasma phenomena. For example, fast neutral gas manometers supported by video cameras (including one fast-frame camera with frame rates of tens of kHz) as well as visible cameras with different interference filters, with fields of view covering all ten half-modules of the stellarator, discovered a MARFE-like radiation zone on the inboard side of machine module 4. This structure is presumably triggered by an inadvertent plasma-wall interaction in module 4 resulting in a high impurity influx that terminates some discharges by radiation cooling. The main plasma parameters achieved in OP1.1 exceeded predicted values in discharges of a length reaching 6 s. Although OP1.1 is characterized by short pulses, many of the diagnostics are already designed for quasi-steady state operation of 30 min discharges heated at 10 MW of ECRH. An overview of diagnostic performance for OP1.1 is given, including some highlights from the physics campaigns.
Digital holographic interferometry applied to the investigation of ignition process.
Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B
2017-06-12
We use the digital holographic interferometry (DHI) technique to display the early ignition process of a butane-air mixture flame. Because such an event occurs in a short time (a few milliseconds), a fast CCD camera is used to study the event. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient and underexposed image. Therefore, the CCD's direct observation of the combustion process is limited (down to 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high power laser in order to supply enough light to increase the capture speed, thus improving the visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a long sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion that emits little visible light, and a second stage induced by variations in temperature as the flame emerges. While the last stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly evidences this process. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
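The per-camera anomaly detection described above is often bootstrapped with simple frame differencing; an illustrative sketch (grayscale frames and an assumed threshold, standing in for whatever detector the authors use):

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=25):
    """Boolean mask of pixels whose absolute change between consecutive
    grayscale (uint8) frames exceeds a threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
b = a.copy(); b[200:240, 300:360] = 255   # synthetic bright moving patch
print(motion_mask(a, b).sum())
```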
Dust measurements in tokamaks (invited).
Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C
2008-10-01
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate the diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by the cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudakov, D. L.; Yu, J. H.; Boedo, J. A.
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate the diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by the cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.
Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying
2018-01-15
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
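Of the view-transform options compared above, the rigid (rotation plus translation) fit has a closed-form least-squares solution for matched 3D points, the Kabsch/Procrustes algorithm. A minimal sketch under that assumption (matched correspondences, e.g. from the spherical calibration object, are taken as given):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R (3x3), t (3,) with dst ~ R @ src + t, for (N, 3)
    matched point sets (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs
```

The paper's point is precisely that this pinhole-optimal model may be outperformed by polynomial or manifold-regression mappings for real RGB-D cameras; the sketch shows only the rigid baseline.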
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †
Shen, Ju; Xu, Wanxin; Luo, Ying
2018-01-01
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968
Krychowiak, M.
2016-10-27
Wendelstein 7-X, a superconducting optimized stellarator built in Greifswald/Germany, started its first plasmas with the last closed flux surface (LCFS) defined by 5 uncooled graphite limiters in December 2015. At the end of the 10-week-long experimental campaign (OP1.1) more than 20 independent diagnostic systems were in operation, allowing detailed studies of many interesting plasma phenomena. For example, fast neutral gas manometers supported by video cameras (including one fast-frame camera with frame rates of tens of kHz) as well as visible cameras with different interference filters, with fields of view covering all ten half-modules of the stellarator, discovered a MARFE-like radiation zone on the inboard side of machine module 4. This structure is presumably triggered by an inadvertent plasma-wall interaction in module 4 resulting in a high impurity influx that terminates some discharges by radiation cooling. The main plasma parameters achieved in OP1.1 exceeded predicted values in discharges of a length reaching 6 s. Although OP1.1 is characterized by short pulses, many of the diagnostics are already designed for quasi-steady state operation of 30 min discharges heated at 10 MW of ECRH. Finally, an overview of diagnostic performance for OP1.1 is given, including some highlights from the physics campaigns.
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position with changing environmental temperature, and the image quality of the large-field-of-view collimator and the test accuracy are thereby improved. Its performance matches that of foreign counterparts at much lower cost. It will have a good market.
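The multiple-frame averaging step mentioned above is worth a two-line illustration: averaging N frames suppresses zero-mean random noise by roughly sqrt(N). A sketch with synthetic noise-only frames:

```python
import numpy as np

def average_frames(frames):
    """frames: (N, H, W) stack from the frame grabber."""
    return frames.mean(axis=0)

stack = np.random.randn(16, 480, 640)               # 16 synthetic noise frames
print(stack[0].std(), average_frames(stack).std())  # ~1.0 vs ~0.25 = 1/sqrt(16)
```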
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents an automatic visibility retrieval for a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with strong thermal contrast to their surroundings, are used to derive the maximum visibility distance detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24,000 thermal images from the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12 working in the NIR spectrum), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrievals from systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and defined target size.
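The core of such an edge-based retrieval can be sketched compactly: detect edges, then report the distance of the farthest predefined target region that still shows sufficient edge response. The ROIs, distances, and thresholds below are hypothetical placeholders, assuming OpenCV for the edge detector.

```python
import numpy as np
import cv2

# Hypothetical target regions (y0, y1, x0, x1) at known distances (km),
# e.g., buildings and mountain silhouettes seen from the tower.
TARGETS = [
    {"roi": (100, 140, 200, 320), "dist_km": 1.2},
    {"roi": (60, 100, 400, 560), "dist_km": 7.5},
    {"roi": (30, 60, 100, 300), "dist_km": 23.0},
]

def visibility_from_edges(gray_img, edge_density_min=0.02):
    """Distance of the farthest target whose ROI still shows enough
    edge response; Canny thresholds and density cut are illustrative."""
    edges = cv2.Canny(gray_img, 50, 150)
    visible_km = 0.0
    for t in sorted(TARGETS, key=lambda d: d["dist_km"]):
        y0, y1, x0, x1 = t["roi"]
        density = edges[y0:y1, x0:x1].mean() / 255.0
        if density >= edge_density_min:
            visible_km = t["dist_km"]
    return visible_km

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # placeholder image
print(f"estimated visibility: {visibility_from_edges(frame):.1f} km")
```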
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.
Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A
2014-11-01
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux deposition profiles in diverted and inner-wall-limited plasmas, such as the heat flux reduction in pumped radiative divertor shots. Demonstrations of the system to date include observation of fast-ion losses to the outer wall during neutral beam injection, and show reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printed products which must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or of security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer only relevant information, i.e., features of the image instead of the raw image data from the sensor. These features are then processed further. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal-oxide semiconductor) area image sensor and a field-programmable gate array (FPGA). It implements the extraction of image features which are well suited to detecting print flaws like blotches of ink, color smears, splashes, spots, and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat-field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
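Of the listed on-camera methods, flat-field correction is the easiest to make concrete. A minimal NumPy sketch of the standard two-point correction follows; in the camera this would run per pixel on the FPGA with a precomputed gain table, and the synthetic line-scan data here is purely illustrative.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Two-point flat-field correction.

    dark: frame with the sensor unlit (fixed-pattern offset)
    flat: frame of a uniformly lit white target (per-pixel gain)
    On the FPGA the per-pixel gain table would be precomputed.
    """
    gain = (flat - dark).mean() / np.clip(flat - dark, 1e-6, None)
    return np.clip((raw - dark) * gain, 0, None)

# Illustrative run with synthetic line-scan data (1 x 2048 pixels).
rng = np.random.default_rng(1)
dark = rng.normal(8, 1, (1, 2048))            # offset pattern
flat = dark + rng.normal(200, 20, (1, 2048))  # non-uniform pixel response
raw = dark + 0.5 * (flat - dark)              # a uniform mid-gray scene
corrected = flat_field_correct(raw, dark, flat)
print(f"residual non-uniformity: {corrected.std():.2e}")  # ~0 after correction
```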
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 µs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering the wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 µm, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative approach using visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
Automatic fog detection for public safety by using camera images
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Roth, Martin; Wauben, Wiel
2017-04-01
Fog and reduced visibility have a considerable impact on the performance of road, maritime, and aeronautical transportation networks. The impact ranges from minor delays to more serious congestion or unavailability of the infrastructure, and can even lead to damage or loss of lives. Visibility is traditionally measured manually by meteorological observers using landmarks at known distances in the vicinity of the observation site. Nowadays, distributed cameras facilitate inspection of more locations from one remote monitoring center; the main idea is, however, still to derive the visibility or presence of fog from an operator judging the scenery and the presence of landmarks. Visibility sensors are also used, but they are rather costly and require regular maintenance. Moreover, observers, and in particular sensors, give only visibility information that is representative of a limited area. Hence the current density of visibility observations is insufficient to give detailed information on the presence of fog. Cameras are increasingly deployed for surveillance and security reasons in cities and for monitoring traffic along main transportation ways. In addition to this primary use, we consider cameras as potential sensors to automatically identify low-visibility conditions. The approach that we follow is to use machine learning techniques to determine the presence of fog and/or to estimate the visibility. For that purpose a set of features is extracted from the camera images, such as the number of edges, brightness, transmission of the image dark channel, and fractal dimension. In addition to these image features, we also consider meteorological variables such as wind speed, temperature, relative humidity, and dew point as additional inputs to the machine learning model. Results obtained with a training and evaluation set consisting of 10-minute sampled images for two KNMI locations over a period of 1.5 years, using decision tree methods to classify dense fog conditions (i.e., visibility below 250 meters), are promising in terms of accuracy and type I and II errors. We are currently extending the approach to images obtained with traffic-monitoring cameras along highways. This is a first step toward a solution that is closer to an operational artificial intelligence application for automatic fog alarm signaling for public safety.
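As a rough illustration of the described pipeline (image features plus meteorological variables feeding a decision tree), here is a hedged Python sketch. The feature definitions, the stand-in label rule, and all numbers are assumptions for the demo, not those of the KNMI study.

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def image_features(bgr):
    """Features of the kind named in the abstract (definitions assumed)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    dark_channel = bgr.min(axis=2)       # per-pixel min over B, G, R
    return [
        edges.mean() / 255.0,            # edge density
        gray.mean() / 255.0,             # brightness
        dark_channel.mean() / 255.0,     # dark-channel transmission proxy
    ]

# Hypothetical training set: image features + meteorological variables,
# labeled 1 for dense fog (visibility < 250 m), 0 otherwise.
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(200):
    img = (rng.random((120, 160, 3)) * 255).astype(np.uint8)
    met = [rng.uniform(0, 10),    # wind speed
           rng.uniform(-5, 25),   # temperature
           rng.uniform(40, 100),  # relative humidity
           rng.uniform(-10, 20)]  # dew point
    X.append(image_features(img) + met)
    y.append(int(met[2] > 97))    # stand-in label rule for the demo only
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([X[0]]))
```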
Overview of diagnostic implementation on Proto-MPEX at ORNL
NASA Astrophysics Data System (ADS)
Biewer, T. M.; Bigelow, T.; Caughman, J. B. O.; Fehling, D.; Goulding, R. H.; Gray, T. K.; Isler, R. C.; Martin, E. H.; Meitner, S.; Rapp, J.; Unterberg, E. A.; Dhaliwal, R. S.; Donovan, D.; Kafle, N.; Ray, H.; Shaw, G. C.; Showers, M.; Mosby, R.; Skeen, C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) recently began operating with an expanded diagnostic set. Approximately 100 sightlines have been established, delivering the plasma light emission to a "patch panel" in the diagnostic room for distribution to a variety of instruments: narrow-band filter spectroscopy, Doppler spectroscopy, laser-induced breakdown spectroscopy, optical emission spectroscopy, and Thomson scattering. Additional diagnostic systems include: IR camera imaging, in-vessel thermocouples, ex-vessel fluoroptic probes, fast pressure gauges, visible camera imaging, microwave interferometry, a retarding-field energy analyzer, rf-compensated and "double" Langmuir probes, and B-dot probes. A data collection and archival system has been initiated using the MDSplus format. This effort capitalizes on a combination of new and legacy diagnostic hardware at ORNL and was accomplished largely through student labor. This work was supported by the U.S. D.O.E. under contract DE-AC05-00OR22725.
Cellular Neural Network for Real Time Image Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vagliasindi, G.; Arena, P.; Fortuna, L.
2008-03-12
Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure they are capable of processing individual pixels in a parallel way, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
A visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile at various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see the first visible light when the beam circulated the ring for the first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single-bunch purity.
Leveraging CubeSat Technology to Address Nighttime Imagery Requirements over the Arctic
NASA Astrophysics Data System (ADS)
Pereira, J. J.; Mamula, D.; Caulfield, M.; Gallagher, F. W., III; Spencer, D.; Petrescu, E. M.; Ostroy, J.; Pack, D. W.; LaRosa, A.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA) has begun planning for the future operational environmental satellite system by conducting the NOAA Satellite Observing System Architecture (NSOSA) study. In support of the NSOSA study, NOAA is exploring how CubeSat technology funded by NASA can be used to demonstrate the ability to measure three-dimensional profiles of global temperature and water vapor. These measurements are critical for the National Weather Service's (NWS) weather prediction mission. NOAA is conducting design studies on Earth Observing Nanosatellites (EON) for microwave (EON-MW) and infrared (EON-IR) soundings, with MIT Lincoln Laboratory and NASA JPL, respectively. The next step is to explore the technology required for a CubeSat mission to address NWS nighttime imagery requirements over the Arctic. The concept is called EON-Day/Night Band (DNB). The DNB is a 0.5-0.9 micron channel currently on the operational Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, which is part of the Suomi-National Polar-orbiting Partnership and Joint Polar Satellite System satellites. NWS has found DNB very useful during the long periods of darkness that occur during the Alaskan cold season. The DNB enables nighttime imagery products of fog, clouds, and sea ice. EON-DNB will leverage experiments carried out by The Aerospace Corporation's CUbesat MULtispectral Observation System (CUMULOS) sensor and other related work. CUMULOS is a DoD-funded demonstration of COTS camera technology integrated as a secondary mission on the JPL Integrated Solar Array and Reflectarray Antenna mission. CUMULOS is demonstrating a staring visible Si CMOS camera. The EON-DNB project will leverage proven, advanced compact visible lens and focal plane camera technologies to meet NWS user needs for nighttime visible imagery. Expanding this technology to an operational demonstration carries several areas of risk that need to be addressed prior to an operational mission. These include, but are not limited to: calibration, swath coverage, resolution, scene gain control, compact fast optical systems, downlink choices, and mission life. NOAA plans to conduct risk reduction efforts similar to those on EON-MW and EON-IR. This paper will explore EON-DNB risks and mitigation options.
NASA Astrophysics Data System (ADS)
Gilmore, Mark; Hsu, Scott
2015-11-01
The goal of the Plasma Liner eXperiment (PLX-α) at Los Alamos National Laboratory is to establish the viability of creating a spherically imploding plasma liner for MIF and HED applications, using a spherical array of supersonic plasma jets launched by innovative contoured-gap coaxial plasma guns. PLX-α experiments will focus in particular on establishing the ram pressure and uniformity scalings of partial and fully spherical plasma liners. In order to characterize these parameters experimentally, a suite of diagnostics is planned, including multi-camera fast imaging, a 16-channel visible interferometer (upgraded from 8 channels) with a reconfigurable, fiber-coupled front end, and visible and VUV high-resolution and survey spectroscopy. Tomographic reconstruction and data fusion techniques will be used in conjunction with interferometry, imaging, and synthetic diagnostics from modeling to characterize liner uniformity in 3D. Diagnostic and data analysis design, implementation, and status will be presented. Supported by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy.
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high-time-resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second-generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 Megaframes/s with record lengths of up to 256,000 frames (25.6 milliseconds) was achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 µm to 1000 µm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are also a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 µm wavelength range. To obtain all-sky information, the camera is located on top of a frame looking downward onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems have different software for calculating fractional cloud coverage from images. Our study analyzes mainly the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results of the measurement accuracy of the IRCCAM compared to the visible cameras indicate that 78% of the data are within ±1 octa and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. Therefore, the IRCCAM detects cloud coverage about as well as the visible cameras and human observers, with the advantage that continuous measurements with high temporal resolution are possible.
Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details
NASA Astrophysics Data System (ADS)
Kuula, Jaana; Pölönen, Ilkka; Puupponen, Hannu-Heikki; Selander, Tuomas; Reinikainen, Tapani; Kalenius, Tapani; Saari, Heikki
2012-06-01
Detecting invisible details and separating mixed evidence is critical for forensic inspection. If this can be done reliably and quickly at the crime scene, irrelevant objects do not require further examination at the laboratory. This speeds up the inspection process and releases resources for other critical tasks. This article reports on tests carried out at the University of Jyväskylä in Finland, together with the Central Finland Police Department and the National Bureau of Investigation, on detecting and separating forensic details with hyperspectral technology. In the tests, evidence was sought at a simulated violent burglary scene with the use of VTT's 500-900 nm wavelength VNIR camera, Specim's 400-1000 nm VNIR camera, and Specim's 1000-2500 nm SWIR camera. The tested details were dried blood on a ceramic plate, a stain of four types of mixed and absorbed blood, and blood which had been washed off a table. Other examined details included untreated latent fingerprints, gunshot residue, primer residue, and layered paint on small pieces of wood. All cameras could detect visible details and separate mixed paint. The SWIR camera could also separate four types of human and animal blood which were mixed in the same stain and absorbed into a fabric. None of the cameras could, however, detect primer residue, untreated latent fingerprints, or blood that had been washed off. The results are encouraging and indicate the need for further studies. They also emphasize the importance of creating optimal imaging conditions at the crime scene for each kind of subject and background.
NASA Astrophysics Data System (ADS)
Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari
2006-09-01
Surveillance camera automation and camera network development are growing areas of interest. This paper proposes an efficient approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. The exploited auxiliary information comprises a digital elevation model (DEM), a terrain class model, and a flight obstacle register. The approach takes into account the spherical shape of the Earth and realistic terrain slopes, and, also considering forests, it determines visible and shadowed regions. Its efficiency arises from the reduced dimensionality of the visibility computation. Image processing is aided by predicting in advance certain features of the visible terrain. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background fits well, and the potential of this knowledge-based aid for various purposes becomes apparent.
[INVITED] Evaluation of process observation features for laser metal welding
NASA Astrophysics Data System (ADS)
Tenner, Felix; Klämpfl, Florian; Nagulin, Konstantin Yu.; Schmidt, Michael
2016-06-01
In the present study we show how fast the fluid dynamics change when the laser power is changed at different feed rates during laser metal welding. Using two high-speed cameras and a data acquisition system, we determine how fast the process must be imaged to measure the fluid dynamics with very high certainty. Our experiments show that not all process features which can be measured during laser welding represent the process behavior equally well. Despite the good visibility of the vapor plume, monitoring its movement is less suitable as an input signal for a closed-loop control. The features measured inside the keyhole show a good correlation with changes in process parameters. Due to its low noise, the area of the keyhole opening is well suited as an input signal for closed-loop control of the process.
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog, or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications which require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G, and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean Superconducting Tokamak Advanced Research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with images of KSTAR Ohmic discharges from the first plasma campaign.
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with an increasing degree of development and a growing range of applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usually performed only for the thermal data, using black bodies for these purposes. However, the geometric information is important for many fields, such as architecture, civil engineering, and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability, and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was chosen as the distinguishing material property because both thermographic and visible cameras are able to detect it. Two thermographic cameras, from the manufacturers FLIR and NEC, and one visible camera from JAI are calibrated, verified, and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera. As might be expected, accuracy is also higher for the visible camera, and the geometric comparison between the thermographic cameras shows slightly better results for the NEC camera.
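The lamp-grid calibration follows the standard pinhole-camera procedure, which can be sketched with OpenCV. The grid geometry (7 × 5 lamps on a 100 mm pitch), the synthetic camera, and the use of projected points in place of real lamp centroids are all assumptions for the demonstration, not the paper's setup.

```python
import numpy as np
import cv2

# Assumed lamp-grid geometry: 7 x 5 lamps on a 100 mm pitch.
grid_w, grid_h, pitch_mm = 7, 5, 100.0
objp = np.zeros((grid_h * grid_w, 3), np.float32)
objp[:, :2] = np.mgrid[0:grid_w, 0:grid_h].T.reshape(-1, 2) * pitch_mm

# Synthetic stand-in for lamp centroids detected in three thermal frames:
# project the grid through a known camera to fake the detections.
K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
obj_pts, img_pts = [], []
for i, ry in enumerate([-0.2, 0.0, 0.2]):
    rvec = np.array([0.1, ry, 0.0])
    tvec = np.array([-300.0, -200.0, 1500.0 + 100.0 * i])
    pts, _ = cv2.projectPoints(objp, rvec, tvec, K_true, None)
    img_pts.append(pts.astype(np.float32))
    obj_pts.append(objp)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (640, 480), None, None)
print(f"reprojection RMS: {rms:.4f} px")  # ~0 for noise-free synthetic points
print(K.round(1))                         # recovers K_true
```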
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
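The real-time centroiding step can be illustrated in a few lines. The sketch below, assuming SciPy's connected-component labeling, finds spots on one frame and returns per-spot centroids and integrated intensities of the kind that would be matched against PMT peak heights; the threshold and synthetic spots are illustrative.

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    """Return (x, y, integrated intensity) for each ion spot on a frame.

    The intensities are what would later be matched against PMT peak
    heights to pair positions with arrival times (multi-hit correlation).
    """
    labels, n = ndimage.label(frame > threshold)
    spots = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        w = frame[ys, xs].astype(float)
        spots.append((np.average(xs, weights=w),
                      np.average(ys, weights=w),
                      w.sum()))
    return spots

# Demo frame with two Gaussian spots of different brightness.
y, x = np.mgrid[0:100, 0:100]
frame = np.zeros((100, 100))
for cx, cy, amp in [(30, 40, 500.0), (70, 60, 250.0)]:
    frame += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)
print(centroid_spots(frame, threshold=50))
```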
Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi
1997-04-01
(Pb0.91La0.09)(Zr0.65,Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation with low applied voltage. This shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied to the diaphragm of a video camera, it can protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolution. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 µs. The MTF reduction when putting the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, the sensor resolution of the video camera. Moreover, we captured visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally blocked in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
A high resolution IR/visible imaging system for the W7-X limiter
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.
2016-11-01
A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large-diameter (184 mm clear aperture) uncoated sapphire window, couples a direct-viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft-iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure the equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance, and there are difficulties because the illuminator power must be adjusted adaptively depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at short distance in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
The appearance and propagation of filaments in the private flux region in Mega Amp Spherical Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, J. R.; Fishpool, G. M.; Thornton, A. J.
2015-09-15
The transport of particles via intermittent filamentary structures in the private flux region (PFR) of plasmas in the MAST tokamak has been investigated using a fast framing camera recording visible light emission from the volume of the lower divertor, as well as Langmuir probes and IR thermography monitoring particle and power fluxes to plasma-facing surfaces in the divertor. The visible camera data suggest that, in the divertor volume, fluctuations in light emission above the X-point are strongest in the scrape-off layer (SOL). Conversely, in the region below the X-point, it is found that these fluctuations are strongest in the PFR of the inner divertor leg. Detailed analysis of the appearance of these filaments in the camera data suggests that they are approximately circular, around 1-2 cm in diameter, but appear more elongated near the divertor target. The most probable toroidal quasi-mode number is between 2 and 3. These filaments eject plasma deeper into the private flux region, sometimes by the production of secondary filaments, moving at a speed of 0.5-1.0 km/s. Probe measurements at the inner divertor target suggest that the fluctuations in the particle flux to the inner target are strongest in the private flux region, and that the amplitude and distribution of these fluctuations are insensitive to the electron density of the core plasma, auxiliary heating, and whether the plasma is single-null or double-null. It is found that the e-folding width of the time-averaged particle flux in the PFR decreases with increasing plasma current, but the fluctuations appear to be unaffected. At the outer divertor target, the fluctuations in particle and power fluxes are strongest in the SOL.
Fast imaging measurements and modeling of neutral and impurity density on C-2U
NASA Astrophysics Data System (ADS)
Granstedt, Erik; Deng, B.; Dettrick, S.; Gupta, D. K.; Osin, D.; Roche, T.; Zhai, K.; TAE Team
2016-10-01
The C-2U device employed neutral beam injection and end-biasing to sustain an advanced beam-driven Field-Reversed Configuration plasma for 5+ ms, beyond characteristic transport time scales. Three high-speed, filtered cameras observed visible light emission from neutral hydrogen and impurities, as well as from deuterium pellet ablation and compact-toroid injection, which were used for auxiliary particle fueling. Careful vacuum practices and titanium gettering successfully reduced neutral recycling from the confinement vessel wall. As a result, a large fraction of the remaining neutrals originate from charge exchange between the neutral beams and plasma ions. Measured H/D-α emission is used with DEGAS2 neutral particle modeling to reconstruct the strongly non-axisymmetric neutral distribution. This is then used in fast-ion modeling to more accurately estimate the fast-ion charge-exchange loss rate. Oxygen emission due to electron-impact excitation and charge-exchange recombination has also been measured using fast imaging. The reconstructed emissivity of O4+ is localized on the outboard side of the core plasma near the estimated location of the separatrix inferred from external magnetic measurements. Tri Alpha Energy.
Tan, Tai Ho; Williams, Arthur H.
1985-01-01
An optical fiber-coupled detector visible streak camera plasma diagnostic apparatus. Arrays of optical fiber-coupled detectors are placed on the film plane of several types of particle, x-ray and visible spectrometers or directly in the path of the emissions to be measured and the output is imaged by a visible streak camera. Time and spatial dependence of the emission from plasmas generated from a single pulse of electromagnetic radiation or from a single particle beam burst can be recorded.
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPAs), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume and many technological barriers in detector materials and optics, as well as fundamental limits imposed by the scaling laws of optics. There is of course much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology in both the visible and the infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.
Measuring the density of a molecular cluster injector via visible emission from an electron beam.
Lundberg, D P; Kaita, R; Majeski, R; Stotler, D P
2010-10-01
A method to measure the density distribution of a dense hydrogen gas jet is presented. A Mach 5.5 nozzle is cooled to 80 K to form a flow capable of molecular cluster formation. A 250 V, 10 mA electron beam collides with the jet and produces Hα emission that is viewed by a fast camera. The high density of the jet, several 10¹⁶ cm⁻³, results in substantial electron depletion, which attenuates the Hα emission. The attenuated emission measurement, combined with a simplified electron-molecule collision model, allows us to determine the molecular density profile via a simple iterative calculation.
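The paper's collision model is not reproduced here, but the flavor of such an iterative inversion can be sketched with a toy Beer-Lambert-style model in which the emission is proportional to the local density times the surviving beam current. All symbols and numbers below (sigma, the grid, the profile) are assumptions for illustration only.

```python
import numpy as np

# Toy model: Halpha signal S(x) ~ k * n(x) * I(x), with the beam current
# depleted as dI/dx = -sigma * n(x) * I(x). sigma, grid, and profile are
# illustrative, not the paper's values.
sigma, dx, k = 3e-17, 0.01, 1.0                      # cm^2, cm, const.
x = np.arange(0.0, 2.0, dx)
n_true = 5e16 * np.exp(-((x - 1.0) / 0.3) ** 2)      # jet density, cm^-3

I_true = np.ones_like(x)
I_true[1:] = np.exp(-sigma * np.cumsum(n_true[:-1]) * dx)
S = k * n_true * I_true                              # "measured" emission

n = S / k                                            # guess: no attenuation
for _ in range(20):                                  # fixed-point iteration
    I_est = np.ones_like(x)
    I_est[1:] = np.exp(-sigma * np.cumsum(n[:-1]) * dx)
    n = S / (k * I_est)

rel_err = abs(n.max() - n_true.max()) / n_true.max()
print(f"peak density recovered to within {rel_err:.2%}")
```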
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed method gives satisfactory results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations.
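A single step of the CCA-based mapping can be sketched with scikit-learn: fit CCA on paired (flattened) thermal and visible training images, then reconstruct the visible image for unseen thermal inputs. The synthetic data, image size, and component count below are illustrative assumptions; the paper applies this first to whole images and then to patches.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_pairs, n_pix, n_latent = 300, 12 * 12, 20          # toy 12x12 "faces"

Z = rng.normal(size=(n_pairs, n_latent))             # shared identity factors
A = rng.normal(size=(n_latent, n_pix))               # thermal rendering
B = rng.normal(size=(n_latent, n_pix))               # visible rendering
X_thermal = Z @ A + 0.05 * rng.normal(size=(n_pairs, n_pix))
Y_visible = Z @ B + 0.05 * rng.normal(size=(n_pairs, n_pix))

cca = CCA(n_components=n_latent, max_iter=1000)
cca.fit(X_thermal[:250], Y_visible[:250])            # train on 250 pairs
Y_hat = cca.predict(X_thermal[250:])                 # reconstruct held-out set

err = np.linalg.norm(Y_hat - Y_visible[250:]) / np.linalg.norm(Y_visible[250:])
print(f"relative reconstruction error: {err:.2f}")
```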
Calibration Target for Curiosity Arm Camera
2012-09-10
This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.
2005-10-01
As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of airborne surveillance vehicles used by security and defence forces with both visible-band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in the pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings against high-contrast (light) backgrounds on the roofs of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious: at the wavelengths where thermal imagers operate, either 3-5 µm or 8-12 µm, dark- and light-coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application, and operation of some vehicle and personnel marking materials and show airborne IR and visible imagery of the materials in use.
NASA Astrophysics Data System (ADS)
Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung
2018-06-01
The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has confirmed image qualities as designed and comparable to those from its predecessor, but with a far lower probability of neutron damage to the camera.
Edge Turbulence Imaging in Alcator C-Mod
NASA Astrophysics Data System (ADS)
Zweben, Stewart J.
2001-10-01
This talk will describe measurements and modeling of the 2-D structure of edge turbulence in Alcator C-Mod. The radial vs. poloidal structure was measured using Gas Puff Imaging (GPI) (R. Maqueda et al, RSI 72, 931 (2001), J. Terry et al, J. Nucl. Materials 290-293, 757 (2001)), in which the visible light emitted by an edge neutral gas puff (generally D or He) is viewed along the local magnetic field by a fast-gated video camera. Strong fluctuations are observed in the gas cloud light emission when the camera is gated at ~2 microsec exposure time per frame. The structure of these fluctuations is highly turbulent with a typical radial and poloidal scale of ≈1 cm, and often with local maxima in the scrape-off layer (i.e. "blobs"). Video clips and analyses of these images will be presented along with their variation in different plasma regimes. The local time dependence of edge turbulence is measured using high-speed photodiodes viewing the gas puff emission, a scanning Langmuir probe, and also with a Princeton Scientific Instruments ultra-fast framing camera, which can make 2-D images of the gas puff at up to 200,000 frames/sec. Probe measurements show that the strong turbulence region moves to the separatrix as the density limit is approached, which may be connected to the density limit (B. LaBombard et al., Phys. Plasmas 8 2107 (2001)). Comparisons of this C-Mod turbulence data will be made with results of simulations from the Drift-Ballooning Mode (DBM) (B.N. Rogers et al, Phys. Rev. Lett. 20 4396 (1998)) and Non-local Edge Turbulence (NLET) codes.
Into the blue: AO science with MagAO in the visible
NASA Astrophysics Data System (ADS)
Close, Laird M.; Males, Jared R.; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Wu, Ya-Lin; Kopon, Derek; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando
2014-08-01
We review astronomical results in the visible (λ<1μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little astronomical science done in the visible with AO until recently. The most productive visible AO system to date is our 6.5m Magellan telescope AO system (MagAO). MagAO is an advanced Adaptive Secondary system at the Magellan 6.5m in Chile. This secondary has 585 actuators with <1 msec response times (0.7 ms typically). We use a pyramid wavefront sensor. The relatively small actuator pitch (~23 cm/subap) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). We use a CCD AO science camera called "VisAO". On-sky long exposures (60s) achieve <30 mas resolutions, 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing with bright R < 8 mag stars. These relatively high visible wavelength Strehls are made possible by our powerful combination of a next generation ASM and a Pyramid WFS with 378 controlled modes and 1000 Hz loop frequency. We'll review the key steps to having good performance in the visible and review the exciting new AO visible science opportunities and refereed publications in both broad-band (r,i,z,Y) and at Hα for exoplanets, protoplanetary disks, young stars, and emission line jets. These examples highlight the power of visible AO to probe circumstellar regions/spatial resolutions that would otherwise require much larger diameter telescopes with classical infrared AO cameras.
Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications
NASA Astrophysics Data System (ADS)
Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.
2013-05-01
The Infrared Images and Other Data Acquisition Station enables a user, who is located inside a laboratory, to acquire visible and infrared images and distances in an outdoor environment with the help of an Internet connection. This station can acquire data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use the images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
Design and Calibration of a Dispersive Imaging Spectrometer Adaptor for a Fast IR Camera on NSTX-U
NASA Astrophysics Data System (ADS)
Reksoatmodjo, Richard; Gray, Travis; Princeton Plasma Physics Laboratory Team
2017-10-01
A dispersive spectrometer adaptor was designed, constructed and calibrated for use on a fast infrared camera employed to measure temperatures on the lower divertor tiles of the NSTX-U tokamak. This adaptor efficiently and evenly filters and distributes long-wavelength infrared photons between 8.0 and 12.0 microns across the 128x128 pixel detector of the fast IR camera. By determining the width of these separated wavelength bands across the camera detector, and then determining the corresponding average photon count for each wavelength, the temperature, and thus heat flux, of the divertor tiles can be calculated very accurately using Planck's law. This approach of designing an exterior dispersive adaptor for the fast IR camera allows accurate temperature measurements to be made of materials with unknown emissivity. Further, the relative simplicity and affordability of this adaptor design provides an attractive option over more expensive, slower, dispersive IR camera systems. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466.
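A sketch of how a tile temperature could be fit from band-resolved radiance with Planck's law, assuming the adaptor and camera have been radiometrically calibrated; the band grid, starting values, and L_meas are hypothetical, with emissivity folded into a single grey-body factor:

    import numpy as np
    from scipy.optimize import curve_fit

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

    def grey_planck(lam, T, eps):
        # Grey-body spectral radiance (W m^-3 sr^-1) at wavelength lam (m).
        return eps * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

    lam = np.linspace(8e-6, 12e-6, 16)  # assumed band centers across the detector
    # L_meas: calibrated spectral radiance per band, from the averaged photon counts
    (T_fit, eps_fit), _ = curve_fit(grey_planck, lam, L_meas, p0=(600.0, 0.5))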
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
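A simplified sketch of the centroiding and multi-hit pairing steps, with hypothetical threshold and array names; the real system runs this per frame at the 1 kHz repetition rate:

    import numpy as np
    from scipy import ndimage

    def centroid_spots(frame, thresh=50):
        # Label connected bright regions, then return their centroids and
        # integrated intensities.
        labels, n = ndimage.label(frame > thresh)
        idx = range(1, n + 1)
        coms = np.asarray(ndimage.center_of_mass(frame, labels, idx))
        sums = np.asarray(ndimage.sum(frame, labels, idx))
        return coms, sums

    # Multi-hit assignment by rank: brighter camera spots pair with taller
    # MCP timing peaks on the same shot (tof_times, tof_heights hypothetical).
    coms, sums = centroid_spots(frame)
    pairs = list(zip(coms[np.argsort(-sums)], tof_times[np.argsort(-tof_heights)]))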
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.
Spectral measurements of muzzle flash with multispectral and hyperspectral sensor
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Trzaskawka, P.; Piątkowski, T.; Polakowski, H.
2011-08-01
The paper presents some practical aspects of the measurement of muzzle flash signatures. Selected signatures of sniper shots in typical scenarios are presented. Signatures registered during all phases of the muzzle flash were analyzed. High-precision laboratory measurements were made in a special ballistic laboratory, and as a result several flash patterns were registered. Field measurements of a muzzle flash were also performed. During the tests several infrared cameras were used, including measurement-class devices with high accuracy and frame rates. The registrations were made in NWIR, SWIR and LWIR spectral bands simultaneously. An ultra-fast visual camera was also used for visible-spectrum registration. Some typical infrared shot signatures are presented. Besides the cameras, the LWIR imaging spectroradiometer HyperCam was also used during the laboratory experiments and the field tests. The signatures collected by the HyperCam device were useful for determining the spectral characteristics of the muzzle flash, whereas the analysis of thermal images registered during the tests provided data on the temperature distribution in the flash area. As a result of the measurement sessions, the signatures of several types of handguns, machine guns and sniper rifles were obtained, which will be used in the development of passive infrared systems for sniper detection.
NASA Astrophysics Data System (ADS)
Close, Laird M.; Males, Jared R.; Kopon, Derek A.; Gasho, Victor; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Uomoto, Alan; Hare, Tyson; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Busoni, Lorenzo; Arcidiacono, Carmelo; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando; Argomedo, Javier
2012-07-01
The heart of the 6.5 m Magellan AO system (MagAO) is a 585 actuator adaptive secondary mirror (ASM) with <1 msec response times (0.7 ms typically). This adaptive secondary will allow low emissivity and high-contrast AO science. We fabricated a high order (561 mode) pyramid wavefront sensor (similar to that now successfully used at the Large Binocular Telescope). The relatively high actuator count (and small projected ~23 cm pitch) allows moderate Strehls to be obtained by MagAO in the “visible” (0.63-1.05 μm). To take advantage of this we have fabricated an AO CCD science camera called "VisAO". Complete “end-to-end” closed-loop lab tests of MagAO achieve a solid, broad-band, 37% Strehl (122 nm rms) at 0.76 μm (i’) with the VisAO camera in 0.8” simulated seeing (13 cm r0 at V) with fast 33 mph winds and a 40 m L0, locked on an R=8 mag artificial star. These relatively high visible wavelength Strehls are enabled by our powerful combination of a next generation ASM and a Pyramid WFS with 400 controlled modes and 1000 Hz sample speeds (similar to that used successfully on-sky at the LBT). Currently only the VisAO science camera is used for lab testing of MagAO, but this high level of measured performance (122 nm rms) promises even higher Strehls with our IR science cameras. On bright (R=8 mag) stars we should achieve very high Strehls (>70% at H) in the IR with the existing MagAO Clio2 (λ=1-5.3 μm) science camera/coronagraph, or even higher (~98% Strehl) in the mid-IR (8-26 microns) with the existing BLINC/MIRAC4 science camera in the future. To eliminate non-common path vibrations, dispersions, and optical errors, the VisAO science camera is fed by a common path advanced triplet ADC and is piggy-backed on the Pyramid WFS optical board itself. Also, a high-speed shutter can be used to block periods of poor correction. The entire system passed CDR in June 2009, and we finished the closed-loop system level testing phase in December 2011. Final system acceptance (“pre-ship” review) was passed in February 2012. In May 2012 the entire AO system was successfully shipped to Chile and fully tested/aligned. It is now in storage in the Magellan telescope clean room in anticipation of “First Light” scheduled for December 2012. An overview of the design, attributes, performance, and schedule for the Magellan AO system and its two science cameras is briefly presented here.
NASA Astrophysics Data System (ADS)
Ehrhart, Matthias; Lienhart, Werner
2017-09-01
The importance of automated prism tracking is driven by the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera, which is available on many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach for the coarse target search is 4 to 10 times faster than conventional approaches.
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We perform an experiment on achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully confirmed. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.
Robust Behavior Recognition in Intelligent Surveillance Environments.
Batchuluun, Ganbayar; Kim, Yeong Gon; Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2016-06-30
Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by a visible light camera during the night. Therefore, near infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered for use during the nighttime as an alternative. Because the system must operate during both daytime and nighttime, and because NIR cameras require an additional NIR illuminator (which should illuminate a wide area over a great distance) during the nighttime, a dual system of visible light and thermal cameras is used in our research, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, thereby confirming the ability of our method to outperform previous methods.
NASA Astrophysics Data System (ADS)
Iglesias, F. A.; Feller, A.; Nagaraju, K.; Solanki, S. K.
2016-05-01
Context. Remote sensing of weak and small-scale solar magnetic fields is of utmost relevance when attempting to respond to a number of important open questions in solar physics. This requires the acquisition of spectropolarimetric data with high spatial resolution (~10^-1 arcsec) and low noise (10^-3 to 10^-5 of the continuum intensity). The main limitations to obtaining these measurements from the ground are the degradation of the image resolution produced by atmospheric seeing and the seeing-induced crosstalk (SIC). Aims: We introduce the prototype of the Fast Solar Polarimeter (FSP), a new ground-based, high-cadence polarimeter that tackles the above-mentioned limitations by producing data that are optimally suited for the application of post-facto image restoration, and by operating at a modulation frequency of 100 Hz to reduce SIC. Methods: We describe the instrument in depth, including the fast pnCCD camera employed, the achromatic modulator package, the main calibration steps, the effects of the modulation frequency on the levels of seeing-induced spurious signals, and the effect of the camera properties on the image restoration quality. Results: The pnCCD camera reaches 400 fps while keeping a high duty cycle (98.6%) and very low noise (4.94 e- rms). The modulator is optimized to have high (>80%) total polarimetric efficiency in the visible spectral range. This allows FSP to acquire 100 photon-noise-limited, full-Stokes measurements per second. We found that the seeing-induced signals present in narrow-band, non-modulated, quiet-sun measurements are (a) lower than the noise (7 × 10^-5) after integrating 7.66 min, (b) lower than the noise (2.3 × 10^-4) after integrating 1.16 min, and (c) slightly above the noise (4 × 10^-3) after restoring case (b) by means of a multi-object multi-frame blind deconvolution. In addition, we demonstrate that by using only narrow-band images (with a low S/N of 13.9) of an active region, we can obtain one complete set of high-quality restored measurements about every 2 s.
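The demodulation underlying such a fast modulation scheme can be sketched as a per-pixel pseudoinverse; the modulation matrix M below stands in for the calibrated FSP matrix and is hypothetical:

    import numpy as np

    def demodulate(frames, M):
        # frames: (n_states, ny, nx) modulated intensity images
        # M: (n_states, 4) modulation matrix from polarimetric calibration
        D = np.linalg.pinv(M)                   # optimal linear demodulation matrix
        return np.tensordot(D, frames, axes=1)  # (4, ny, nx) Stokes images I, Q, U, V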
Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing
NASA Technical Reports Server (NTRS)
Crooke, Julie A.
2003-01-01
The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible wavelength laser is not acceptable to even consider attempting for a Class IV laser, and not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera are power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary-metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally reveal that the rolling shutter effect (RSE)-based VLC system cannot work at long transmission distances, and that the under-sampled modulation (USM)-based VLC system is a good choice.
Perez-Mendez, V.
1997-01-21
A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.
Perez-Mendez, Victor
1997-01-01
A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4 x 5.4 mr ... each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method suffers errors when using the fast Fourier transform (FFT) in camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife-edge result in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement was established to verify the accuracy of the proposed method. The experimental results show that under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
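The central idea, transforming a nonuniformly sampled ESF without regridding, can be sketched with a direct (naive) nonuniform DFT standing in for a fast NUFFT library; x, esf, and the frequency grid are hypothetical, and freqs[0] is assumed to be 0 for normalization:

    import numpy as np

    def mtf_nonuniform(x, esf, freqs):
        # x: sorted, nonuniform ESF sample positions; esf: values at x
        lsf = np.gradient(esf, x)  # line spread function = derivative of the ESF
        spec = np.array([np.trapz(lsf * np.exp(-2j * np.pi * f * x), x) for f in freqs])
        return np.abs(spec) / np.abs(spec[0])  # MTF normalized to 1 at zero frequency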
MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110
2002-04-08
STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.
Stargazing at 'Husband Hill Observatory' on Mars
NASA Technical Reports Server (NTRS)
2005-01-01
NASA's Mars Exploration Rover Spirit continues to take advantage of extra solar energy by occasionally turning its cameras upward for night sky observations. Most recently, Spirit made a series of observations of bright star fields from the summit of 'Husband Hill' in Gusev Crater on Mars. Scientists use the images to assess the cameras' sensitivity and to search for evidence of nighttime clouds or haze. The image on the left is a computer simulation of the stars in the constellation Orion. The next three images are actual views of Orion captured with Spirit's panoramic camera during exposures of 10, 30, and 60 seconds. Because Spirit is in the southern hemisphere of Mars, Orion appears upside down compared to how it would appear to viewers in the Northern Hemisphere of Earth. 'Star trails' in the longer exposures are a result of the planet's rotation. The faintest stars visible in the 60-second exposure are about as bright as the faintest stars visible with the naked eye from Earth (about magnitude 6 in astronomical terms). The Orion Nebula, famous as a nursery of newly forming stars, is also visible in these images. Bright streaks in some parts of the images aren't stars or meteors or unidentified flying objects, but are caused by solar and galactic cosmic rays striking the camera's detector. Spirit acquired these images with the panoramic camera on Martian day, or sol, 632 (Oct. 13, 2005) at around 45 minutes past midnight local time, using the camera's broadband filter (wavelengths of 739 nanometers plus or minus 338 nanometers).
Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall
NASA Astrophysics Data System (ADS)
Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith
2013-05-01
The capability to track individuals in CCTV cameras is important for, e.g., surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Fisher, Benjamin W.
2015-01-01
Many U.S. schools use visible security measures (security cameras, metal detectors, security personnel) in an effort to keep schools safe and promote adolescents' academic success. This study examined how different patterns of visible security utilization were associated with U.S. middle and high school students' academic performance, attendance,…
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
NASA Astrophysics Data System (ADS)
Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.
2012-10-01
Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was possible thanks to the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results during daytime for smoke detection, they fall short under bad visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating the fire through smoke if there is a direct line of sight. Emphasis is also put on the physical and electro-optical system modeling for forest fire detection at short and longer ranges. Fusion of the three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and for fire detection.
NASA Astrophysics Data System (ADS)
Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell
2013-06-01
We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, the use of matched filtering with spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
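A minimal per-pixel spectral matched filter of the standard form w ∝ C^-1(s - mu), the building block that a multi-layer scheme like the one above would stack with spatial filters; the regularization constant and array names are hypothetical:

    import numpy as np

    def spectral_matched_filter(cube, target):
        # cube: (ny, nx, bands) hyperspectral data; target: (bands,) known signature
        X = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = X.mean(axis=0)
        C = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
        Ci = np.linalg.inv(C)
        s = target - mu
        w = Ci @ s / np.sqrt(s @ Ci @ s)  # normalized so the output is in SNR units
        return ((X - mu) @ w).reshape(cube.shape[:2])  # per-pixel detection score map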
The Visible Imaging System (VIS) for the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.
1995-01-01
The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 R_e. The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. A normal AEC and AGC algorithm is not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe, complex environments.
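A toy version of such a control loop, adjusting the shutter first and falling back to gain at the shutter limit, then gamma-correcting for display; the target level, loop gain, and limits are hypothetical, not the camera's actual parameters:

    import numpy as np

    def aec_agc_step(frame, shutter_us, gain_db, target=0.45, k=0.5):
        # One proportional control step toward a target mean brightness (8-bit frames).
        err = target - frame.mean() / 255.0
        shutter_us = float(np.clip(shutter_us * (1.0 + k * err), 10.0, 2000.0))
        if shutter_us >= 2000.0:  # shutter saturated: raise gain instead
            gain_db = float(np.clip(gain_db + 6.0 * err, 0.0, 24.0))
        return shutter_us, gain_db

    def gamma_correct(frame, gamma=0.45):
        return (255.0 * (frame / 255.0) ** gamma).astype(np.uint8)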
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also devices and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality, and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
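The core PIV step such a phone-based system needs, cross-correlating two interrogation windows to estimate the mean particle displacement, can be sketched as follows; window contents, pixel scale, and frame interval are hypothetical:

    import numpy as np
    from scipy.signal import fftconvolve

    def window_displacement(win_a, win_b):
        # FFT-based cross-correlation of two mean-subtracted interrogation windows.
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode='same')
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return dy - win_a.shape[0] // 2, dx - win_a.shape[1] // 2

    # velocity = displacement * pixel_size / dt, with dt = 1/240 s for a 240 Hz clip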
Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity
NASA Astrophysics Data System (ADS)
Iraji, D.; Furno, I.; Fasoli, A.; Theiler, C.
2010-12-01
In the TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], a simple magnetized plasma device, low frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and to study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast framing Photron APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparing the statistical, spectral, and spatial properties of visible light radiation with electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary with Te^α ne^β, in which α = 0.25-0.7 and β = 0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to fast camera movies acquired at a 50 kframes/s rate with 2 μs exposure time to obtain the temporal evolution of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure the sizes of structures associated with the interchange mode. The ω-time and two-dimensional k-space Fourier analyses of the reconstructed emissivity fluctuations show the same interchange mode that is detected in the ω and k spectra of the ion saturation current fluctuations measured by probes. Small-scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity movies with spatial resolution down to 2 cm, well beyond the spatial resolution of the probe array.
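The pixel-method inversion reduces to a regularized least-squares solve of an overdetermined linear system, which the abstract does by singular value decomposition; the geometry matrix G, signal vector b, and truncation threshold below are hypothetical placeholders:

    import numpy as np

    def invert_emissivity(G, b, rcond=1e-3):
        # G: (n_chords, n_pixels) chord path lengths through emissivity pixels
        # b: (n_chords,) line-integrated camera signals
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)  # truncated SVD regularization
        return Vt.T @ (s_inv * (U.T @ b))  # least-squares emissivity per pixel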
Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A
2016-11-01
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements of several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and wall particle flux, both of which will ultimately be used to infer the complete particle balance and particle confinement time τ_P.
Luminescence dynamics of bound exciton of hydrogen doped ZnO nanowires
Yoo, Jinkyoung; Yi, Gyu -Chul; Chon, Bonghwan; ...
2016-04-11
An all-optical camera, converting X-rays into visible photons, is a promising strategy for a high-performance X-ray imaging detector requiring high detection efficiency and ultrafast detector response time. Zinc oxide is a suitable material for an all-optical camera due to its fast radiative recombination lifetime in the sub-nanosecond regime and its radiation hardness. ZnO nanostructures have been considered proper building blocks for ultrafast detectors with spatial resolution on the sub-micrometer scale. To achieve a remarkable enhancement of luminescence efficiency, n-type doping in ZnO has been employed. However, the luminescence dynamics of doped ZnO nanostructures have not been thoroughly investigated, whereas undoped ZnO nanostructures have been employed to study their luminescence dynamics. Here we report a study of the luminescence dynamics of hydrogen doped ZnO nanowires obtained by hydrogen plasma treatment. Hydrogen doping in ZnO nanowires gives rise to a significant increase in the near-band-edge emission of ZnO and a decrease in the averaged photoluminescence lifetime from 300 to 140 ps at 10 K. The effects of hydrogen doping on the luminescent characteristics of ZnO nanowires varied with the hydrogen doping process variables.
The application of high-speed photography in z-pinch high-temperature plasma diagnostics
NASA Astrophysics Data System (ADS)
Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei
2007-01-01
This invited paper discusses the application of high-speed photography to z-pinch high-temperature plasma diagnostics in recent years at the Northwest Institute of Nuclear Technology. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
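The kind of auto-covariance diagnostic used to expose such a twice-per-rev signature can be sketched on an extracted, uniformly sampled inter-camera angle series; theta and the lag range are hypothetical:

    import numpy as np

    def autocov(theta, max_lag):
        # Biased sample auto-covariance of a 1-D series for lags 0..max_lag.
        x = theta - theta.mean()
        n = x.size
        return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

    # A twice-per-rev error appears as a peak near lag = T_orbit / (2 * dt) samples.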
Development of a 3-D visible limiter imaging system for the HSX stellarator
NASA Astrophysics Data System (ADS)
Buelo, C.; Stephey, L.; Anderson, F. S. B.; Eisert, D.; Anderson, D. T.
2017-12-01
A visible camera diagnostic has been developed to study the Helically Symmetric eXperiment (HSX) limiter-plasma interaction. A straight-line view from the camera location to the limiter was not possible due to the complex 3D stellarator geometry of HSX, so it was necessary to insert a mirror/lens system into the plasma edge. A custom support structure for this optical system, tailored to the HSX geometry, was designed and installed. This system holds the optics tube assembly at the required angle for the desired view, to both minimize system stress and facilitate robust and repeatable camera positioning. The camera system has been absolutely calibrated, and using Hα and C-III filters it can provide hydrogen and carbon photon fluxes, which through an S/XB coefficient can be converted into particle fluxes. The resulting measurements have been used to obtain the characteristic penetration length of hydrogen and C-III species. The hydrogen λ_iz value shows reasonable agreement with the value predicted by a 1D penetration-length calculation.
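The photon-to-particle conversion mentioned above is a one-line application of the S/XB (ionizations-per-photon) coefficient; the 4π factor assumes an absolutely calibrated brightness in photons m^-2 s^-1 sr^-1, and the example numbers are hypothetical:

    import numpy as np

    def particle_flux(brightness, s_xb):
        # Gamma [m^-2 s^-1] = 4*pi * (S/XB) * brightness [ph m^-2 s^-1 sr^-1]
        return 4.0 * np.pi * s_xb * brightness

    # e.g. a hydrogen influx estimate from an H-alpha brightness, S/XB ~ 15:
    # gamma_H = particle_flux(3e18, 15.0)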
[Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].
Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei
2016-02-01
We previously reported that an infrared camera enables observation of iris morphology in Peters' anomaly through edematous corneas. The aim here was to observe iris morphology in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women, mean age ± SD 72.7 ± 13.0 years) were enrolled in this study. Iris morphology was observed using the visible light mode and the near-infrared light mode of an infrared camera (MeibomPen). The detectability of pupil shapes, iris patterns and the presence of iridectomy was evaluated. Infrared mode observation enabled us to detect pupil shapes in 11 out of 11 cases, iris patterns in 3 out of 11 cases, and the presence of iridectomy in 9 out of 11 cases, although visible light mode observation could not detect any iris morphological changes. Infrared optics proved valuable for observing iris morphology through stromal edematous corneas.
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
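As an illustration of the kind of fusion the abstract describes, the sketch below performs a simple weighted score-level fusion of visible and thermal classifier outputs; the weight and the scores are hypothetical stand-ins, not the authors' actual feature extraction or fusion method.

```python
import numpy as np

# Hedged sketch of score-level fusion of visible and thermal classifiers,
# one common strategy in multi-modal recognition. The weight and the
# classifier scores are hypothetical, not the paper's actual method.

def fuse_scores(score_visible, score_thermal, w=0.6):
    """Weighted-sum fusion of two per-sample class probabilities."""
    return w * score_visible + (1.0 - w) * score_thermal

vis = np.array([0.81, 0.35, 0.55])   # visible-light classifier outputs
thr = np.array([0.74, 0.28, 0.70])   # thermal classifier outputs
print(fuse_scores(vis, thr) > 0.5)   # fused gender decisions
```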
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research defined a no-moving-parts, low-weight, low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides a new, important ultraviolet spectral mapping capability, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, together with spectral results in support of the instrument design.
An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes
Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel
2010-01-01
In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices provide an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to resolve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). We have therefore carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523
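The FFT-based recovery the abstract refers to can be illustrated with a generic Fourier-domain least-squares reconstructor that recovers a phase map from measured slope maps; this is a sketch of the technique family, assuming periodic boundaries, not the CAFADIS pipeline itself.

```python
import numpy as np

# Hedged sketch of FFT-based least-squares wavefront reconstruction from
# slope maps (gx, gy). It illustrates the generic 2D-FFT reconstructor
# family accelerated on FPGA; it is not the CAFADIS algorithm itself.

def fft_reconstruct(gx, gy, d=1.0):
    """Recover a phase map (up to a constant) from x/y slopes on a
    periodic N x N grid using the Fourier-domain inverse gradient."""
    n = gx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=d)
    kx, ky = np.meshgrid(k, k, indexing="xy")
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    phi_hat = -1j * (kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy)) / denom
    phi_hat[0, 0] = 0.0                    # piston is unobservable
    return np.real(np.fft.ifft2(phi_hat))

# Round trip on a synthetic smooth phase screen
n = 64
y, x = np.mgrid[0:n, 0:n]
phi = np.sin(2 * np.pi * x / n) * np.cos(2 * np.pi * y / n)
gx = np.gradient(phi, axis=1)
gy = np.gradient(phi, axis=0)
rec = fft_reconstruct(gx, gy)
print(np.allclose(rec - rec.mean(), phi - phi.mean(), atol=0.05))
```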
Easily Accessible Camera Mount
NASA Technical Reports Server (NTRS)
Chalson, H. E.
1986-01-01
Modified mount enables fast alignment of movie cameras in explosionproof housings. Screw on side and readily reached through side door of housing. Mount includes right-angle drive mechanism containing two miter gears that turn threaded shaft. Shaft drives movable dovetail clamping jaw that engages fixed dovetail plate on camera. Mechanism aligns camera in housing and secures it. Reduces installation time by 80 percent.
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Reasoning About Visibility in Mirrors: A Comparison Between a Human Observer and a Camera.
Bertamini, Marco; Soranzo, Alessandro
2018-01-01
Human observers make errors when predicting what is visible in a mirror. This is true for perception with real mirrors as well as for reasoning about mirrors shown in diagrams. We created an illustration of a room, a top-down view, with a mirror on a wall and objects (nails) on the opposite wall. The task was to select which nails were visible in the mirror from a given position (viewpoint). To study the importance of the social nature of the viewpoint, we divided the sample (N = 108) in two groups. One group (n = 54) were tested with a scene in which there was the image of a person. The other group (n = 54) were tested with the same scene but with a camera replacing the person. Participants were instructed to think about what would be captured by a camera on a tripod. This manipulation tests the effect of social perspective-taking in reasoning about mirrors. As predicted, performance on the task shows an overestimation of what can be seen in a mirror and a bias to underestimate the role of the different viewpoints, that is, a tendency to treat the mirror as if it captures information independently of viewpoint. In terms of the comparison between person and camera, there were more errors for the camera, suggesting an advantage for evaluating a human viewpoint as opposed to an artificial viewpoint. We suggest that social mechanisms may be involved in perspective-taking in reasoning rather than in automatic attention allocation.
Fast, deep record length, time-resolved visible spectroscopy of plasmas using fiber grids
NASA Astrophysics Data System (ADS)
Brockington, Samuel; Case, Andrew; Cruz, Edward; Witherspoon, F. Douglas; Horton, Robert; Klauser, Ruth; Hwang, D. Q.
2016-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high time resolution spectroscopy on visible emission from plasma events. New solid-state Silicon Photo-Multiplier (SiPM) chips are capable of single photon event detection and high speed data acquisition. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified SiPMs, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. Target pixel performance is 10 Megaframes/sec with record lengths of up to 256,000 frames, yielding 25.6 milliseconds of record at 10 Megasamples/sec resolution. Pixel resolutions of 8 to 12 bits are possible. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. A prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX) as a full demonstration of the concept. Experimental results will be discussed, along with future plans for the Phase 2 project and potential applications to plasma experiments. Work supported by USDOE SBIR Grant DE-SC0013801.
Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images
Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.
2015-01-01
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a diseased plant is also influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV (azimuth). A seventh camera, also with a 92° FOV, is installed normal to the plane of the other 6 cameras, giving the system a > 90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras
NASA Astrophysics Data System (ADS)
Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; (The New Horizons Team
2018-01-01
NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.
The use of near-infrared photography to image fired bullets and cartridge cases.
Stein, Darrell; Yu, Jorn Chi Chung
2013-09-01
An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmarks comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. With visual comparisons, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more details and could not effectively eliminate reflections and glare associated with visible light photography. Near-IR photography showed little advantages in manual examination of fired evidence when it was compared with visible light (regular) photography. © 2013 American Academy of Forensic Sciences.
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015
USDA-ARS?s Scientific Manuscript database
This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...
NASA Astrophysics Data System (ADS)
Anton, Rainer
2011-04-01
Using a 50cm Cassegrain in Namibia, recordings of double and multiple stars were made with a fast CCD camera and a notebook computer. From superpositions of "lucky images", measurements of 149 systems were obtained and compared with literature data. B/W and color images of some remarkable systems are also presented.
NASA Astrophysics Data System (ADS)
Anton, Rainer
2010-07-01
Using a 10" Newtonian and a fast CCD camera, recordings of double and multiple stars were made at high frame rates with a notebook computer. From superpositions of "lucky images", measurements of 139 systems were obtained and compared with literature data. B/W and color images of some noteworthy systems are also presented.
Use of cameras for monitoring visibility impairment
NASA Astrophysics Data System (ADS)
Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie
2018-02-01
Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
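A hedged sketch of the contrast-type index with temporal averaging described above; the statistics, synthetic patches, and averaging window are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

# Hedged sketch of a resolution-insensitive contrast index of the kind the
# study relates to atmospheric extinction. The synthetic patches and the
# averaging window are hypothetical.

def rms_contrast(gray):
    """RMS contrast of an image patch; unlike gradient operators,
    this statistic is largely insensitive to pixel density."""
    g = gray.astype(float)
    return g.std() / g.mean()

def averaged_index(patches):
    """Temporal average over same-time-of-day patches, which suppresses
    cloud, weather, and lighting variability as described above."""
    return np.mean([rms_contrast(p) for p in patches])

# Example with synthetic 8-bit patches of a distant ridge line
rng = np.random.default_rng(0)
patches = [rng.integers(80, 170, size=(50, 50)) for _ in range(30)]
print(f"monthly mean contrast index: {averaged_index(patches):.3f}")
```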
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Stephey, L.; Wurden, G. A.; Schmitz, O.; ...
2016-08-08
A combined IR and visible camera system [G. A. Wurden et al., “A high resolution IR/visible imaging system for the W7-X limiter,” Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements of several points on the W7-X vacuum vessel to yield wall recycling fluxes. Finally, the resulting photon flux from both the visible camera and filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., “Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X,” Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and the particle confinement time τP.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
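The digital reformatting step can be sketched as below; the stripe layout and the assumption that alternate ports read out mirrored are illustrative, not the paper's actual port geometry.

```python
import numpy as np

# Hedged sketch of the digital reformatting step: four port streams, each
# covering a contiguous 64-column stripe of the 256 x 256 quadrant, are
# stitched back into one frame. The stripe layout and the mirrored odd
# ports are assumptions for illustration.

def reformat(ports):
    """ports: list of four (256, 64) arrays in port order."""
    stripes = []
    for i, p in enumerate(ports):
        stripes.append(p[:, ::-1] if i % 2 else p)  # un-mirror odd ports
    return np.hstack(stripes)                       # (256, 256) frame

frame = reformat([np.zeros((256, 64), dtype=np.uint16) for _ in range(4)])
print(frame.shape)  # (256, 256)
```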
Field trials for determining the visible and infrared transmittance of screening smoke
NASA Astrophysics Data System (ADS)
Sánchez Oliveros, Carmen; Santa-María Sánchez, Guillermo; Rosique Pérez, Carlos
2009-09-01
In order to evaluate the concealment capability of smoke, the Countermeasures Laboratory of the Institute of Technology "Marañosa" (ITM) has carried out a set of tests measuring the transmittance of multispectral smoke tins in several bands of the electromagnetic spectrum. The smoke composition, based on red phosphorus, has been developed and patented by this laboratory as part of a projectile development. The smoke transmittance was measured by means of thermography as well as spectroradiometry. Black bodies and halogen lamps were used as infrared and visible sources of radiation. The measurements were carried out in June 2008 at the Marañosa field (Spain) with two MWIR cameras, two LWIR cameras, one CCD visible camera, one CVF IR spectroradiometer covering the interval 1.5 to 14 μm, and one silicon-based array spectroradiometer for the 0.2 to 1.1 μm range. The transmittance and dimensions of the smoke screen were characterized in the visible band and in the MWIR (3-5 μm) and LWIR (8-12 μm) regions. The size of the screen was about 30 meters wide and 5 meters high. The transmittances were about 0.3 in the IR bands and better than 0.1 in the visible one. The screens proved effective over the time of persistence in all of the tests. The results obtained from the imaging and non-imaging systems were in good accordance. Meteorological conditions during the tests, such as wind speed, are determinant for the use of this kind of optical countermeasure.
On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras
NASA Astrophysics Data System (ADS)
Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.
2017-11-01
PLAnetary Transits and Oscillations (PLATO) is the third European Space Agency (ESA) medium class mission in ESA's cosmic vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. The 26 cameras are made up of a telescope optical unit, four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are 2 fast cameras with high read-out cadence (2.5 s) for magnitude ~ 4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in-orbit. This paper discusses the initial development of the experimental arrangement, test procedures and current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed, which will enable the performance of the CCDs and their associated N-FEE to be monitored during the mission; this will include measurements of CTI giving an indication of the impact of radiation damage in the CCDs.
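As one example of how a parameter such as gain is commonly measured, the sketch below uses the standard mean-variance (photon transfer) method on a synthetic flat-field pair; it is an assumption that the test bench implements it this way.

```python
import numpy as np

# Hedged sketch of the mean-variance (photon transfer) method often used
# for CCD gain measurement. The frame data below are synthetic; real
# measurements use flat-field pairs at matched illumination.

def gain_e_per_adu(flat_a, flat_b, bias=0.0):
    """Estimate gain (e-/ADU) from two flats at the same illumination.
    Differencing the pair removes fixed-pattern (PRNU) structure."""
    signal = 0.5 * (flat_a.mean() + flat_b.mean()) - bias
    var = np.var(flat_a.astype(float) - flat_b.astype(float)) / 2.0
    return signal / var

rng = np.random.default_rng(1)
true_gain = 2.0                                   # e-/ADU, assumed
a = rng.poisson(20000.0, size=(512, 512)) / true_gain
b = rng.poisson(20000.0, size=(512, 512)) / true_gain
print(f"estimated gain: {gain_e_per_adu(a, b):.2f} e-/ADU")
```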
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Double Star Measurements at the Southern Sky with 50 cm Reflectors and Fast CCD Cameras in 2012
NASA Astrophysics Data System (ADS)
Anton, Rainer
2014-07-01
A Cassegrain and a Ritchey-Chrétien reflector, both with 50 cm aperture, were used in Namibia for recordings of double stars with fast CCD cameras and a notebook computer. From superposition of "lucky images", measurements of 39 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Images of some remarkable systems are also presented.
Fusion of thermal- and visible-band video for abandoned object detection
NASA Astrophysics Data System (ADS)
Beyan, Cigdem; Yigit, Ahmet; Temizel, Alptekin
2011-07-01
Timely detection of packages that are left unattended in public spaces is a security concern, and rapid detection is important for prevention of potential threats. Because constant surveillance of such places is challenging and labor intensive, automated abandoned-object-detection systems aiding operators have started to be widely used. In many studies, stationary objects, such as people sitting on a bench, are also detected as suspicious objects due to abandoned items being defined as items newly added to the scene and remained stationary for a predefined time. Therefore, any stationary object results in an alarm causing a high number of false alarms. These false alarms could be prevented by classifying suspicious items as living and nonliving objects. In this study, a system for abandoned object detection that aids operators surveilling indoor environments such as airports, railway or metro stations, is proposed. By analysis of information from a thermal- and visible-band camera, people and the objects left behind can be detected and discriminated as living and nonliving, reducing the false-alarm rate. Experiments demonstrate that using data obtained from a thermal camera in addition to a visible-band camera also increases the true detection rate of abandoned objects.
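The living/nonliving discrimination idea can be sketched as a fusion of a visible-band foreground mask with a thermal "warm" mask; the thresholds, background model, and registration below are illustrative assumptions, not the authors' algorithm.

```python
import cv2
import numpy as np

# Hedged sketch: a stationary foreground blob from the visible channel is
# flagged as abandoned only if the co-registered thermal image shows no
# body-temperature signature. Thresholds are assumptions.

bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def classify(frame_visible, frame_thermal, warm_thresh=30.0):
    """Return (nonliving, living) masks; frame_thermal is assumed
    calibrated in deg C and registered to the visible frame."""
    fg = bg.apply(frame_visible)
    warm = (frame_thermal > warm_thresh).astype(np.uint8) * 255
    living = cv2.bitwise_and(fg, warm)
    nonliving = cv2.bitwise_and(fg, cv2.bitwise_not(warm))
    return nonliving, living
```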
Augmented reality in laser laboratories
NASA Astrophysics Data System (ADS)
Quercioli, Franco
2018-05-01
Laser safety glasses block visibility of the laser light. This is a big nuisance when a clear view of the beam path is required. A headset made up of a smartphone and a viewer can overcome this problem. The user looks at the image of the real world on the cellphone display, captured by its rear camera. An unimpeded and safe sight of the laser beam is then achieved. If the infrared blocking filter of the smartphone camera is removed, the spectral sensitivity of the CMOS image sensor extends in the near infrared region up to 1100 nm. This substantial improvement widens the usability of the device to many laser systems for industrial and medical applications, which are located in this spectral region. The paper describes this modification of a phone camera to extend its sensitivity beyond the visible and make a true augmented reality laser viewer.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
Optimal design of an earth observation optical system with dual spectral and high resolution
NASA Astrophysics Data System (ADS)
Yan, Pei-pei; Jiang, Kai; Liu, Kai; Duan, Jing; Shan, Qiusha
2017-02-01
With increasing military and civilian demand for high-resolution remote sensing images, countries around the world see strong prospects for higher-resolution imaging systems. Moreover, an integrated visible/infrared optical system has important value for earth observation: because a visible-band system cannot identify camouflage or operate at night, the visible camera should be paired with an infrared camera. A dual-band, high-resolution earth observation optical system is designed. The paper focuses on the integrated design of the visible and infrared optical systems, which makes the payload lighter and smaller and lets one satellite serve two purposes. The system covers the visible and mid-infrared (3-5 μm) wavebands. Clear dual-waveband imaging is achieved with an RC system. The visible channel has a focal length of 3056 mm at F/10.91; the mid-infrared channel has a focal length of 1120 mm at F/4. To suppress mid-infrared thermal radiation and stray light, a second imaging stage is adopted and the narcissus phenomenon is analyzed. The structure is simple, and the requirements on Modulation Transfer Function (MTF), spot size, energy concentration, and distortion are all satisfied.
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet the high reliability requirement. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional straylight performance and minimal lens flare, given intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow FOV (NFOV) lens and a wide FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the straylight performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet space environmental requirements.
In-line interferometer for broadband near-field scanning optical spectroscopy.
Brauer, Jens; Zhan, Jinxin; Chimeh, Abbas; Korte, Anke; Lienau, Christoph; Gross, Petra
2017-06-26
We present and investigate a novel approach towards broad-bandwidth near-field scanning optical spectroscopy based on an in-line interferometer for homodyne mixing of the near field and a reference field. In scattering-type scanning near-field optical spectroscopy, the near-field signal is usually obscured by a large amount of unwanted background scattering from the probe shaft and the sample. Here we increase the light reflected from the sample by a semi-transparent gold layer and use it as a broad-bandwidth, phase-stable reference field to amplify the near-field signal in the visible and near-infrared spectral range. We experimentally demonstrate that this efficiently suppresses the unwanted background signal in monochromatic near-field measurements. For rapid acquisition of complete broad-bandwidth spectra we employ a monochromator and a fast line camera. Using this fast acquisition of spectra and the in-line interferometer we demonstrate the measurement of pure near-field spectra. The experimental observations are quantitatively explained by analytical expressions for the measured optical signals, based on Fourier decomposition of background and near field. The theoretical model and in-line interferometer together form an important step towards broad-bandwidth near-field scanning optical spectroscopy.
Narrow band vacuum ultraviolet radiation, produced by fast conical discharge
NASA Astrophysics Data System (ADS)
Antsiferov, P. S.; Dorokhin, L. A.; Koshelev, K. N.
2018-04-01
The article presents an experimental study of discharges in a conical cavity filled with Ar at a pressure of 80 Pa. The electrical current driver (an inductive storage with a plasma erosion opening switch) supplies to the load a current pulse with a growth rate of about 10¹² A s⁻¹ and a maximal value of 30–40 kA. The convergent conical shock wave starts from the inner surface of the discharge cavity and collapses in ‘zippering’ mode. Pinhole camera imaging with an MCP detector (time resolution 5 ns) has demonstrated the appearance of an effectively fast moving compact plasma with visible velocity v = (1.5 ± 0.14) × 10⁷ cm s⁻¹. The plasma emits narrow band radiation in the spectral range of Rydberg series transitions of Ar VII and Ar VIII with quantum number up to n = 9 (wavelength about 11 nm). The intensity of this radiation is comparable with the total plasma emission in the range 10–50 nm. Charge exchange between multiply charged Ar ions and cold Ar atoms of the working gas is proposed as a possible mechanism for the origin of the radiation.
NASA Astrophysics Data System (ADS)
Le, Nam-Tuan
2017-05-01
Copyright protection and information security are two of the most pressing issues for digital data, following the development of the internet and computer networks. As an important protection mechanism, watermarking has become a challenging topic in both industry and academic research. Watermarking techniques fall into two categories: visible watermarking and invisible watermarking. The invisible technique has an advantage for user interaction because the watermark does not affect what is seen. Applying watermarking to communication is both a challenge and a new direction for communication technology. In this paper we propose new research on communication technology using invisible watermarking based on optical camera communications (OCC). Besides analyzing the performance of the proposed system, we also suggest a PHY- and MAC-layer frame structure for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standard.
SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance
NASA Technical Reports Server (NTRS)
White, Joseph; Dutta, Soumyo; Striepe, Scott
2015-01-01
The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, results and comparison with flight video of SFDT-1.
Double Star Measurements at the Southern Sky with a 50 cm Reflector and a Fast CCD Camera in 2014
NASA Astrophysics Data System (ADS)
Anton, Rainer
2015-04-01
A Ritchey-Chrétien reflector with 50 cm aperture was used in Namibia for recordings of double stars with a fast CCD camera and a notebook computer. From superposition of "lucky images", measurements of 91 pairings in 79 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Some images of noteworthy systems are also presented.
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
Invisible marker based augmented reality system
NASA Astrophysics Data System (ADS)
Park, Hanhoon; Park, Jong-Il
2005-07-01
Augmented reality (AR) has recently gained significant attention. The previous AR techniques usually need a fiducial marker with known geometry or objects of which the structure can be easily estimated such as cube. Placing a marker in the workspace of the user can be intrusive. To overcome this limitation, we present an AR system using invisible markers which are created/drawn with an infrared (IR) fluorescent pen. Two cameras are used: an IR camera and a visible camera, which are positioned in each side of a cold mirror so that their optical centers coincide with each other. We track the invisible markers using IR camera and visualize AR in the view of visible camera. Additional algorithms are employed for the system to have a reliable performance in the cluttered background. Experimental results are given to demonstrate the viability of the proposed system. As an application of the proposed system, the invisible marker can act as a Vision-Based Identity and Geometry (VBIG) tag, which can significantly extend the functionality of RFID. The invisible tag is the same as RFID in that it is not perceivable while more powerful in that the tag information can be presented to the user by direct projection using a mobile projector or by visualizing AR on the screen of mobile PDA.
Two Moons and the Pleiades from Mars
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Inverted image of two moons and the Pleiades from Mars. Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit recently settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. In this view, the Pleiades, a star cluster also known as the 'Seven Sisters,' is visible in the lower left corner. The bright star Aldebaran and some of the stars in the constellation Taurus are visible on the right. Spirit acquired this image the evening of martian day, or sol, 590 (Aug. 30, 2005). The image on the right provides an enhanced-contrast view with annotation. Within the enhanced halo of light is an insert of an unsaturated view of Phobos taken a few images later in the same sequence. On Mars, Phobos would be easily visible to the naked eye at night, but would be only about one-third as large as the full Moon appears from Earth. Astronauts staring at Phobos from the surface of Mars would notice its oblong, potato-like shape and that it moves quickly against the background stars. Phobos takes only 7 hours, 39 minutes to complete one orbit of Mars. That is so fast, relative to the 24-hour-and-39-minute sol on Mars (the length of time it takes for Mars to complete one rotation), that Phobos rises in the west and sets in the east. Earth's moon, by comparison, rises in the east and sets in the west. The smaller martian moon, Deimos, takes 30 hours, 12 minutes to complete one orbit of Mars. That orbital period is longer than a martian sol, and so Deimos rises, like most solar system moons, in the east and sets in the west. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with the panoramic camera, using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities but the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
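The first step described above, detecting homologous points between the images of different spectral bands, might look like the following sketch; ORB features with Hamming matching stand in for whatever detector the paper actually used.

```python
import cv2

# Hedged sketch of homologous-point detection between two spectral-band
# images of a multi-spectral line camera. ORB + brute-force Hamming
# matching is an illustrative stand-in, not the paper's method.

def homologous_points(band_a, band_b, max_matches=500):
    """Return matched keypoint coordinates between two grayscale bands."""
    orb = cv2.ORB_create(nfeatures=2000)
    ka, da = orb.detectAndCompute(band_a, None)
    kb, db = orb.detectAndCompute(band_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(da, db), key=lambda m: m.distance)
    pts_a = [ka[m.queryIdx].pt for m in matches[:max_matches]]
    pts_b = [kb[m.trainIdx].pt for m in matches[:max_matches]]
    return pts_a, pts_b
```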
Lunar Reconnaissance Orbiter Camera (LROC) instrument overview
Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.
2010-01-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military industrial, and medical imaging applications.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
A real-time camera calibration system based on OpenCV
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng
2015-07-01
Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
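A minimal OpenCV calibration loop in the spirit of the system described; the board geometry, square pitch, and image file names are assumptions for illustration.

```python
import cv2
import numpy as np

# Hedged sketch of chessboard-based calibration with OpenCV. The board
# size, square pitch, and file names are hypothetical.

PATTERN = (9, 6)           # inner corners of the chessboard
SQUARE = 25.0              # square size in mm

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:   # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms, "\nintrinsics:\n", K)
```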
ERIC Educational Resources Information Center
Fisher, Diane K.; Novati, Alexander
2009-01-01
On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
NASA Astrophysics Data System (ADS)
Donovan, D. C.; Buchenauer, D. A.; Watkins, J. G.; Leonard, A. W.; Lasnier, C. J.; Stangeby, P. C.
2011-10-01
The sheath power transmission factor (SPTF) is examined in DIII-D with a new IR camera, a more thermally robust Langmuir probe array, fast thermocouples, and a unique probe configuration on the Divertor Materials Evaluation System (DiMES). Past data collected from the fixed Langmuir Probes and Infrared Camera on DIII-D have indicated a SPTF near 1 at the strike point. Theory indicates that the SPTF should be approximately 7 and cannot be less than 5. SPTF values are calculated using independent measurements from the IR camera and fast thermocouples. Experiments have been performed with varying levels of electron cyclotron heating and neutral beam power. The ECH power does not involve fast ions, so the SPTF can be calculated and compared to previous experiments to determine the extent to which fast ions may be influencing the SPTF measurements, and potentially offer insight into the disagreement with the theory. Work supported in part by US DOE under DE-AC04-94AL85000, DE-FC02-04ER54698, and DE-AC52-07NA27344.
FAST CHOPPER BUILDING, TRA-665. CAMERA FACING NORTH. NOTE BRICKED-IN WINDOW ...
FAST CHOPPER BUILDING, TRA-665. CAMERA FACING NORTH. NOTE BRICKED-IN WINDOW ON RIGHT SIDE (BELOW PAINTED NUMERALS "665"). SLIDING METAL DOOR ON COVERED RAIL AT UPPER LEVEL. SHELTERED ENTRANCE TO STEEL SHIELDING DOOR. DOOR INTO MTR SERVICE BUILDING, TRA-635, STANDS OPEN. MTR BEHIND CHOPPER BUILDING. INL NEGATIVE NO. HD42-1. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Furno, I.; Fasoli, A.
In the TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], a simple magnetized plasma device, low frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast framing Photron-APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparing statistical, spectral, and spatial properties of visible light radiation with electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary with (T{sub e}){sup {alpha}}(n{sub e}){sup {beta}}, in which {alpha}=0.25-0.7 and {beta}=0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to the fast camera movie acquired at a 50 kframes/s rate and 2 {mu}s of exposure time to obtain the temporal evolution of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure sizes of structures associated with the interchange mode. The {omega}-time and the two-dimensional k-space Fourier analysis of the reconstructed emissivity fluctuations show the same interchange mode that is detected in the {omega} and k spectra of the ion saturation current fluctuations measured by probes. Small scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity movies with spatial resolution down to 2 cm, well beyond the spatial resolution of the probe array.
NASA Astrophysics Data System (ADS)
Huang, Hua-Wei; Zhang, Yang
2008-08-01
An attempt has been made to characterize the colour spectrum of methane flames under various burning conditions using RGB and HSV colour models instead of resolving the real physical spectrum. The results demonstrate that each type of flame has its own characteristic distribution in both the RGB and HSV space. It has also been observed that the averaged B and G values in the RGB model represent well the CH* and C2* emission of a premixed methane flame. These features may be utilized for flame measurement and monitoring. The great advantage of using a conventional camera for monitoring flame properties based on the colour spectrum is that it is readily available, easy to interface with a computer, cost effective, and offers a degree of spatial resolution. Furthermore, it has been demonstrated that a conventional digital camera is able to image a flame not only in the visible spectrum but also in the infrared. This feature is useful in avoiding the image saturation typically encountered in capturing very bright sooty flames. As a result, further digital image processing and quantitative information extraction are possible. It has been identified that an infrared image also has its own distribution in both the RGB and HSV colour spaces, distinct from that of a flame image in the visible spectrum.
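As an illustration of the colour-model analysis described above, the minimal sketch below computes the averaged RGB channels of a flame image and its HSV conversion, the quantities the study uses to characterize flame type. The file name and the use of OpenCV are illustrative assumptions, not part of the original work.

```python
# A small illustration (not the authors' code) of extracting the averaged
# RGB and HSV channel values from a flame image.
import cv2

img = cv2.imread("flame.png")                      # placeholder file name
b_mean, g_mean, r_mean = cv2.mean(img)[:3]         # OpenCV stores channels as BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h_mean, s_mean, v_mean = cv2.mean(hsv)[:3]

# The averaged B and G values are the quantities reported to track
# CH* and C2* emission of premixed methane flames.
print(f"mean B={b_mean:.1f} G={g_mean:.1f} R={r_mean:.1f}")
print(f"mean H={h_mean:.1f} S={s_mean:.1f} V={v_mean:.1f}")
```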
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
Wave and Current Observations in a Tidal Inlet Using GPS Drifter Buoys
2013-03-01
Figure 10. DWR-G external sensor configuration (left panel). GT-31 GPS receiver is visible on the bottom left. Two GoPro cameras are attached to the top of the buoy. DWR-G internal sensor configuration (right panel).
Performance analysis and enhancement for visible light communication using CMOS sensors
NASA Astrophysics Data System (ADS)
Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Fang, Liangtao; Liu, Xiaowei; Chen, Yingcong
2018-03-01
Complementary metal-oxide-semiconductor (CMOS) sensors are widely used in mobile phones and cameras. Hence, it is attractive if these cameras can be used as receivers for visible light communication (VLC). Using the rolling shutter mechanism can increase the data rate of VLC based on a CMOS camera, and different techniques have been proposed to improve the demodulation of the rolling shutter mechanism. However, these techniques are too complex. In this work, we demonstrate and analyze the performance of a VLC link using a CMOS camera for different LED luminaires, for the first time to our knowledge. Experimental evaluations comparing their bit-error-rate (BER) performances and demodulation are also performed. The results show that simply changing to an LED luminaire with more uniform light output eliminates the blooming effect, which not only reduces the complexity of the demodulation but also enhances the communication quality. In addition, we propose and demonstrate the use of contrast-limited adaptive histogram equalization to extend the transmission distance and mitigate the influence of background noise. The experimental results show that the BER can be decreased by an order of magnitude using the proposed method.
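As a concrete picture of the preprocessing step mentioned above, the sketch below applies OpenCV's contrast-limited adaptive histogram equalization (CLAHE) to a rolling-shutter frame before thresholding the stripe pattern into bits. The file name, the CLAHE parameters, and the row-averaging demodulation are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: CLAHE preprocessing of a rolling-shutter VLC frame,
# followed by a naive ON/OFF stripe demodulation.
import cv2
import numpy as np

frame = cv2.imread("vlc_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

# CLAHE flattens the background intensity gradient caused by distance and
# lens vignetting, so a single global threshold can recover the stripes.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(frame)

# Collapse each image row to one sample: rows map to time under the
# rolling-shutter mechanism, so the column mean gives the bit waveform.
waveform = equalized.mean(axis=1)
bits = (waveform > waveform.mean()).astype(np.uint8)
print(bits[:32])
```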
Method and apparatus for calibrating a display using an array of cameras
NASA Technical Reports Server (NTRS)
Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
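The pre-warp described above can be pictured with a minimal sketch: once the camera feedback has located where projected reference points actually land on the screen, a transformation (here a single homography, a simplification of the patent's more general transformation function) is fitted and applied to the input image so the displayed result appears correct. All coordinates below are made-up placeholders.

```python
# A simplified sketch of the pre-warp idea using one homography.
import cv2
import numpy as np

# Where four reference points *should* appear on the screen (pixels)...
desired = np.float32([[0, 0], [1919, 0], [1919, 1079], [0, 1079]])
# ...and where the camera actually observed them after projection.
observed = np.float32([[12, 8], [1900, 22], [1895, 1060], [5, 1070]])

# Fit the mapping from observed to desired positions; warping the input
# by this homography counteracts the projector/screen distortion.
H, _ = cv2.findHomography(observed, desired)
src = cv2.imread("input_frame.png")                 # placeholder input
prewarped = cv2.warpPerspective(src, H, (1920, 1080))
cv2.imwrite("prewarped.png", prewarped)
```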
NASA Astrophysics Data System (ADS)
Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo
2016-07-01
The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the management of the physical communication ports and external ancillary devices, the high-precision time-tag handling, the fast data collection and fast data exchange between different camera subsystems, and the interfacing with the external systems.
Engineer's drawing of Skylab 4 Far Ultraviolet Electronographic camera
1973-11-19
S73-36910 (November 1973) --- An engineer's drawing of the Skylab 4 Far Ultraviolet Electronographic camera (Experiment S201). Arrows point to various features and components of the camera. As the Comet Kohoutek streams through space at speeds of 100,000 miles per hour, the Skylab 4 crewmen will use the S201 UV camera to photograph features of the comet not visible from the Earth's surface. While the comet is some distance from the sun, the camera will be pointed through the scientific airlock in the wall of the Skylab space station Orbital Workshop (OWS). By using a movable mirror system built for the Ultraviolet Stellar Astronomy (S019) Experiment and rotating the space station, the S201 camera will be able to photograph the comet around the side of the space station. Photo credit: NASA
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that realizes simultaneous acquisition of multi-angle head images from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
A state observer for using a slow camera as a sensor for fast control applications
NASA Astrophysics Data System (ADS)
Gahleitner, Reinhard; Schagerl, Martin
2013-03-01
This contribution addresses a problem that often arises in vision-based control when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, which demands a camera with a high frame rate, up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
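A minimal sketch of this multi-rate idea (not the authors' observer design) is given below: a Kalman-style estimator predicts the state at every control step from a plant model, and corrects only when a new camera frame delivers a measurement. The constant-velocity model, all matrices, the rates, and the helper measure_position_from_image are illustrative assumptions.

```python
# Multi-rate state observer sketch: predict at 1 kHz, correct at 50 fps.
import numpy as np

dt = 1e-3                                  # control-loop sample time (1 kHz)
A = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
C = np.array([[1.0, 0.0]])                 # camera measures position only
Q = np.diag([1e-6, 1e-4])                  # process noise (assumed)
R = np.array([[1e-4]])                     # camera measurement noise (assumed)

def measure_position_from_image() -> float:
    """Hypothetical stand-in for the LED-marker image processing."""
    return 0.0

x = np.zeros(2)
P = np.eye(2)
frame_every = 20                           # one camera frame per 20 control steps

for k in range(1000):
    # Predict at the fast rate so the controller always has a state estimate.
    x = A @ x
    P = A @ P @ A.T + Q
    if k % frame_every == 0:
        # Correct with the slow camera measurement when a frame arrives.
        z = np.array([measure_position_from_image()])
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(2) - K @ C) @ P
```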
Mary, a Pipeline to Aid Discovery of Optical Transients
NASA Astrophysics Data System (ADS)
Andreoni, I.; Jacobs, C.; Hegarty, S.; Pritchard, T.; Cooke, J.; Ryder, S.
2017-09-01
The ability to quickly detect transient sources in optical images and trigger multi-wavelength follow-up is key for the discovery of fast transients. These include rare and difficult-to-detect events such as kilonovae, supernova shock breakout, and `orphan' gamma-ray burst afterglows. We present the Mary pipeline, a (mostly) automated tool to discover transients during high-cadence observations with the Dark Energy Camera at Cerro Tololo Inter-American Observatory (CTIO). The observations are part of the `Deeper Wider Faster' programme, a multi-facility, multi-wavelength programme designed to discover fast transients, including counterparts to fast radio bursts and gravitational waves. Our tests of the Mary pipeline on Dark Energy Camera images return a false positive rate of 2.2% and a missed fraction of 3.4%, obtained in less than 2 min, which proves the pipeline to be suitable for rapid and high-quality transient searches. The pipeline can be adapted to search for transients in data obtained with imagers other than the Dark Energy Camera.
NASA Astrophysics Data System (ADS)
Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.
2013-12-01
Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover, and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available in the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height, and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that allow its cost to remain low even with the increased functionality. New control software was also developed to ensure that the two cameras are triggered simultaneously. This is a major requirement that affects the final uncertainty of the measurements due to the constant movement of the clouds in the sky. Since accurate orientation of the cameras can be a very demanding task in field deployments, an automated calibration procedure has been developed that removes the need for accurate alignment. It consists of photographing the stars, which do not exhibit parallax due to the long distances involved, and deducing the inherent misalignments of the two cameras. The known misalignments are then used to correct the cloud photos. These developments will be described in detail, along with an uncertainty analysis of the measurement setup. Measurements of cloud base height and atmospheric visibility will be presented and compared with measurements from other in-situ instruments. This work was supported by FCT project PTDC/CTE-ATM/115833/2009 and Program COMPETE FCOMP-01-0124-FEDER-014508.
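The parallax principle behind the height measurement can be reduced to a back-of-envelope formula (a simplification of the authors' more complex geometry): with two parallel, simultaneously triggered cameras a baseline B apart, a cloud feature displaced by d pixels between the two photos lies at height h ≈ B·f/(d·p), with f the focal length and p the pixel pitch. All values below are illustrative assumptions.

```python
# Back-of-envelope cloud base height from two-camera parallax.
B = 100.0          # camera baseline in metres (assumed)
f = 0.0105         # focal length in metres (assumed)
p = 4.7e-6         # pixel pitch in metres (assumed)
d = 60.0           # measured disparity of a cloud feature in pixels

h = B * f / (d * p)
print(f"cloud base height ~ {h:.0f} m")   # ~3723 m for these numbers
```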
A neutron camera system for MAST.
Cecconello, M; Turnyanskiy, M; Conroy, S; Ericsson, G; Ronchi, E; Sangaroon, S; Akers, R; Fitzgerald, I; Cullen, A; Weiszflog, M
2010-10-01
A prototype neutron camera has been developed and installed at MAST as part of a feasibility study for a multichord neutron camera system, with the aim of measuring the spatially and time resolved 2.45 MeV neutron emissivity profile. Liquid scintillators coupled to a fast digitizer are used for neutron/gamma-ray digital pulse shape discrimination. The preliminary results clearly show the capability of this diagnostic to measure neutron emissivity profiles with sufficient time resolution to study the effect of fast ion loss and redistribution due to magnetohydrodynamic activity. A minimum time resolution of 2 ms has been achieved with a modest 1.5 MW of neutral beam injection heating, with a measured neutron count rate of a few hundred kHz.
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.
Chasing Down Gravitational Wave Sources with the Dark Energy Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annis, Jim; Soares-Santos, Marcelle
On August 17, 2017, scientists using the Dark Energy Camera tracked down the first visible counterpart to a gravitational wave signal ever spotted by astronomers. Using data provided by the LIGO and Virgo collaborations, scientists embarked on a quest for the unknown, and discovered a new wonder of the universe. Includes interviews with Fermilab’s Jim Annis and Brandeis University’s Marcelle Soares-Santos.
NASA Astrophysics Data System (ADS)
Westall, F.; Hofmann, B.; Brack, A.
2004-03-01
Microbial mats from early terrestrial environments can be macroscopically visible and represent excellent analogues in the search for life on Mars. Tests using the Beagle 2 camera show that they can be observed by in situ instrumentation.
View of Saudi Arabia and north eastern Africa from the Apollo 17 spacecraft
1972-12-09
AS17-148-22718 (7-19 Dec. 1972) --- This excellent view of Saudi Arabia and the north-eastern portion of the African continent was photographed by the Apollo 17 astronauts with a hand-held camera on their trans-lunar coast toward man's last lunar visit. Egypt, Sudan, and Ethiopia are some of the African nations visible. Iran, Iraq, and Jordan are not so clearly visible because of cloud cover and their particular location in the picture. India is dimly visible at right of frame. The Red Sea is seen entirely in this one single frame, a rare occurrence in Apollo photography or any photography taken from manned spacecraft. The Gulf of Suez, the Dead Sea, Gulf of Aden, Persian Gulf and Gulf of Oman are also visible. This frame is one of 169 frames on film magazine NN carried aboard Apollo 17, all of which are SO368 (color) film. A 250mm lens on a 70mm Hasselblad camera recorded the image, one of 92 taken during the trans-lunar coast. Note AS17-148-22727 (also magazine NN) for an excellent full Earth picture showing the entire African continent.
CubeSat Nighttime Earth Observations
NASA Astrophysics Data System (ADS)
Pack, D. W.; Hardy, B. S.; Longcore, T.
2017-12-01
Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from unmanned aerial system (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce corrected full-motion video automatically and in real time. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
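The fusion stage can be illustrated with a hedged sketch using OpenCV's Mertens exposure fusion, which merges a bracketed set of differently exposed frames into one image that preserves detail in both shadows and highlights. This is a stand-in for the paper's own fusion mechanism; the file names are placeholders.

```python
# Exposure fusion sketch: merge under-, mid-, and over-exposed frames.
import cv2

paths = ["under.png", "mid.png", "over.png"]       # assumed bracketed set
frames = [cv2.imread(p) for p in paths]

merge = cv2.createMergeMertens()
fused = merge.process(frames)                      # float32 result in [0, 1]
cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```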
Space imaging measurement system based on fixed lens and moving detector
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Doshida, Minoru; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu
2006-08-01
We have developed a space imaging measurement system, based on a fixed lens and a fast-moving detector, for the control of an autonomous ground vehicle. Space measurement is the most important task in the development of an autonomous ground vehicle. In this study we move the detector back and forth along the optical axis at a fast rate to measure three-dimensional image data. This system is well suited to an autonomous ground vehicle because it does not emit any optical energy to measure distance, which preserves safety. We use a digital camera operating in the visible range, which reduces the cost of three-dimensional image data acquisition compared with an imaging laser system. Many narrow-field measurements can be combined to construct wide-range three-dimensional data, which improves image recognition of the object space. To achieve the fast movement of the detector, we built a counter-mass balance into the mechanical crank system of the space imaging measurement system, and we installed a duct to prevent optical noise from rays not passing through the lens. The object distance is derived from the focus distance corresponding to the best-focused image data, which is selected as the image with the maximum standard deviation among the standard deviations of the image series.
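A minimal depth-from-focus sketch in the spirit of the criterion just described is given below: each frame in the detector sweep is scored by its standard deviation, and the best-focused index is mapped to object distance through a calibration table. The function names and the calibration table are illustrative assumptions.

```python
# Depth from focus: pick the sharpest frame in a focus sweep and look up
# its calibrated object distance.
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Global standard deviation as a simple focus measure."""
    return float(img.std())

def depth_from_focus(stack: list, focus_dist_m: list) -> float:
    """stack: images captured as the detector moves along the optical axis;
    focus_dist_m: hypothetical calibration mapping each detector position
    to its in-focus object distance. Returns the distance of the sharpest
    frame."""
    scores = [sharpness(img) for img in stack]
    return focus_dist_m[int(np.argmax(scores))]
```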
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, involving phenomena that happen too fast for the eye or conventional video cameras to properly observe and analyse, is presented. They are recorded and analysed using modern high-speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
Rapid-Response Low Infrared Emission Broadband Ultrathin Plasmonic Light Absorber
Tagliabue, Giulia; Eghlidi, Hadi; Poulikakos, Dimos
2014-01-01
Plasmonic nanostructures can significantly advance broadband visible-light absorption, with absorber thicknesses in the sub-wavelength regime, much thinner than conventional broadband coatings. Such absorbers have inherently very small heat capacity, hence a very rapid response time, and high light-power-to-temperature sensitivity. Additionally, their surface emissivity can be spectrally tuned to suppress infrared thermal radiation. These capabilities make plasmonic absorbers promising candidates for fast light-to-heat applications, such as radiation sensors. Here we investigate the light-to-heat conversion properties of a metal-insulator-metal broadband plasmonic absorber, fabricated as a free-standing membrane. Using a fast IR camera, we show that the transient response of the absorber has a characteristic time below 13 ms, nearly one order of magnitude lower than that of a similar membrane coated with a commercial black spray. Concurrently, despite the small thickness, the large absorption capability maintains the absorbed-light-power-to-temperature sensitivity at the level of a standard black spray. Finally, we show that while the black spray has emissivity similar to a black body, the plasmonic absorber features a very low infrared emissivity of about 0.16, demonstrating its capability as a selective coating for applications with operating temperatures up to 400°C, above which the nanostructure starts to deform. PMID:25418040
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1990-01-01
Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Improved docking alignment system
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1988-01-01
Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera such that the monitor displays both the reflected image of the camera and visible marking on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
SHOK—The First Russian Wide-Field Optical Camera in Space
NASA Astrophysics Data System (ADS)
Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.
2018-02-01
Onboard the Lomonosov spacecraft are two fast, fixed, very wide-field SHOK cameras. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each camera covers the gamma-ray burst detection area of the other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emissions with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for the detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from gamma-ray burst error boxes detected by the BDRG device and triggered by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.
A fast algorithm for computer aided collimation gamma camera (CACAO)
NASA Astrophysics Data System (ADS)
Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.
2000-08-01
The computer aided collimation gamma camera (CACAO) is aimed at breaking the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering, and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
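The shift-and-sum ingredient of the algorithm can be sketched in a few lines (an illustrative rendering of the step named above, not the authors' code): projections acquired at successive collimator offsets are shifted back by their known offset and summed, concentrating counts from the in-focus plane. Integer pixel offsets are an assumption made for simplicity.

```python
# Shift-and-sum over acquisitions taken at known linear collimator offsets.
import numpy as np

def shift_and_sum(frames: np.ndarray, offsets_px: list) -> np.ndarray:
    """frames: (n, rows, cols) array of acquisitions; offsets_px: linear
    collimator offset of each acquisition in detector pixels (assumed
    integer). Returns the accumulated, motion-compensated projection."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, off in zip(frames, offsets_px):
        acc += np.roll(frame, -off, axis=1)   # undo the linear movement
    return acc
```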
Design study for a 16x zoom lens system for visible surveillance camera
NASA Astrophysics Data System (ADS)
Vella, Anthony; Li, Heng; Zhao, Yang; Trumper, Isaac; Gandara-Montano, Gustavo A.; Xu, Di; Nikolov, Daniel K.; Chen, Changchen; Brown, Nicolas S.; Guevara-Torres, Andres; Jung, Hae Won; Reimers, Jacob; Bentley, Julie
2015-09-01
High zoom ratio zoom lenses have extensive applications in broadcasting, cinema, and surveillance. Here, we present a design study on a 16x zoom lens with 4 groups (including two internal moving groups), designed for, but not limited to, a visible-spectrum surveillance camera. Fifteen different solutions were discovered with nearly diffraction-limited performance, using PNPX or PNNP design forms with the stop located in either the third or fourth group. Some interesting patterns and trends in the summarized results include the following: (a) in designs with such a large zoom ratio, the potential of locating the aperture stop in the front half of the system is limited, with ray height variations through zoom necessitating a very large lens diameter; (b) in many cases, the lens zoom motion has significant freedom to vary due to near-zero total power in the middle two groups; and (c) we discuss the trade-offs between zoom configuration, stop location, packaging factors, and zoom group aberration sensitivity.
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
Enhancing Close-Up Image Based 3d Digitisation with Focus Stacking
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Chliverou, R.; Koutsoudis, A.; Pavlidis, G.; Georgopoulos, A.
2017-08-01
The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast, and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow depth of field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore an important task that should be incorporated in the data collection to attain a better quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, and especially with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions include the use of a full-frame camera equipped with a normal lens (50mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the image-based 3D models were compared; the results are presented.
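One common recipe for focus stacking (not necessarily the authors' tool chain) can be sketched compactly: score per-pixel sharpness in each aligned frame with the absolute Laplacian, then take every output pixel from the frame where it is sharpest. Frames are assumed pre-aligned; kernel sizes are illustrative.

```python
# Focus stacking via a per-pixel Laplacian sharpness vote.
import cv2
import numpy as np

def focus_stack(frames: list) -> np.ndarray:
    """frames: pre-aligned BGR images taken at different focus positions.
    Returns an all-in-focus composite."""
    sharp = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=5))
        sharp.append(cv2.GaussianBlur(lap, (9, 9), 0))   # stabilise the mask
    best = np.argmax(np.stack(sharp), axis=0)            # winning frame index
    stack = np.stack(frames)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                       # (H, W, 3) composite
```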
NICMOS PEERS INTO HEART OF DYING STAR
NASA Technical Reports Server (NTRS)
2002-01-01
The Egg Nebula, also known as CRL 2688, is shown on the left as it appears in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2) and on the right as it appears in infrared light with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Since infrared light is invisible to humans, the NICMOS image has been assigned colors to distinguish different wavelengths: blue corresponds to starlight reflected by dust particles, and red corresponds to heat radiation emitted by hot molecular hydrogen. Objects like the Egg Nebula are helping astronomers understand how stars like our Sun expel carbon and nitrogen -- elements crucial for life -- into space. Studies on the Egg Nebula show that these dying stars eject matter at high speeds along a preferred axis and may even have multiple jet-like outflows. The signature of the collision between this fast-moving material and the slower outflowing shells is the glow of hydrogen molecules captured in the NICMOS image. The distance between the tip of each jet is approximately 200 times the diameter of our solar system (out to Pluto's orbit). Credits: Rodger Thompson, Marcia Rieke, Glenn Schneider, Dean Hines (University of Arizona); Raghvendra Sahai (Jet Propulsion Laboratory); NICMOS Instrument Definition Team; and NASA Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang
2016-01-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; P Barreto, Joao
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
10 CFR 9.103 - General provisions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... reasonable space and adequate visibility and acoustics, for public observation. No additional right to... electronic recording equipment and cameras requires the advance written approval of the Secretary. [42 FR...
10 CFR 9.103 - General provisions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... reasonable space and adequate visibility and acoustics, for public observation. No additional right to... electronic recording equipment and cameras requires the advance written approval of the Secretary. [42 FR...
10 CFR 9.103 - General provisions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... reasonable space and adequate visibility and acoustics, for public observation. No additional right to... electronic recording equipment and cameras requires the advance written approval of the Secretary. [42 FR...
10 CFR 9.103 - General provisions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... reasonable space and adequate visibility and acoustics, for public observation. No additional right to... electronic recording equipment and cameras requires the advance written approval of the Secretary. [42 FR...
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to use methods developed initially for photogrammetry for the calibration of the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve measuring calibrated patterns on planes before the actual object measurement can resume after a camera or projector has been moved; they therefore do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
NASA Astrophysics Data System (ADS)
Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.
2012-10-01
PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral study of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus, and Neptune) and the satellite Titan at high temporal and spatial resolution, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates both beams, directing them onto two different detectors. Each detector has a filter wheel with filters corresponding to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the “lucky imaging” technique, in which several thousand images of the same object are obtained in a short time interval, coregistered, and ranked in terms of image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. The images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic du Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics, and global winds and their scales of variability in the planets. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07 and by Universidad País Vasco UPV/EHU through program UFI11/55.
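The lucky-imaging procedure described above can be sketched in miniature: rank a burst of short exposures by a sharpness metric, keep the best few percent, co-register them to the single best frame, and average. The registration here uses whole-frame phase correlation and the selection fraction is an illustrative assumption, not PlanetCam's actual pipeline.

```python
# Minimal lucky-imaging sketch: select, register, and average short exposures.
import numpy as np

def lucky_stack(burst: np.ndarray, keep_frac: float = 0.05) -> np.ndarray:
    """burst: (n, rows, cols) float array of short-exposure frames."""
    scores = burst.std(axis=(1, 2))                  # simple sharpness proxy
    order = np.argsort(scores)[::-1]
    best = burst[order[: max(1, int(keep_frac * len(burst)))]]

    ref_f = np.fft.fft2(best[0])
    out = np.zeros_like(best[0])
    for frame in best:
        # Phase correlation: the peak of the normalised cross-power
        # spectrum gives the integer shift relative to the reference.
        cross = ref_f * np.conj(np.fft.fft2(frame))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        out += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return out / len(best)
```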
STS-28 Columbia, OV-102, MS Brown uses ARRIFLEX camera on aft flight deck
1989-08-13
STS028-17-033 (August 1989) --- Astronaut Mark N. Brown, STS-28 mission specialist, pauses from a session of motion-picture photography conducted through one of the aft windows on the flight deck of the Earth-orbiting Space Shuttle Columbia. He is using an Arriflex camera. The horizon of the blue and white appearing Earth and its airglow are visible in the background.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at the exact coordinates of one depth layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
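A schematic of the point-cloud-gridding idea is sketched below: points are binned into depth layers, each layer is rendered as a 2D amplitude grid, and the layer's contribution to the hologram plane is obtained with an FFT-based angular-spectrum propagation over its depth. The grid size, wavelength, pixel pitch, and the treatment of evanescent components are illustrative assumptions, not the authors' exact formulation.

```python
# Layered CGH sketch: bin points by depth, FFT-propagate each layer.
import numpy as np

wl, pitch, N = 532e-9, 8e-6, 1024            # wavelength, pixel pitch, grid size

def propagate(field: np.ndarray, z: float) -> np.ndarray:
    """Angular-spectrum propagation of a complex field over distance z
    (evanescent components are simply clipped here)."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wl**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def cgh_from_points(points_xyz: np.ndarray, z_layers: np.ndarray) -> np.ndarray:
    """Bin points (x, y, z in metres) to the nearest depth layer, then
    propagate and accumulate each layer's grid onto the hologram plane."""
    holo = np.zeros((N, N), dtype=complex)
    layer_of = np.argmin(np.abs(points_xyz[:, 2:3] - z_layers), axis=1)
    for k, z in enumerate(z_layers):
        grid = np.zeros((N, N), dtype=complex)
        pts = points_xyz[layer_of == k]
        ix = np.clip((pts[:, 0] / pitch).astype(int) + N // 2, 0, N - 1)
        iy = np.clip((pts[:, 1] / pitch).astype(int) + N // 2, 0, N - 1)
        grid[iy, ix] = 1.0                   # unit amplitude per point
        holo += propagate(grid, z)
    return holo
```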
Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
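The register-then-fuse step described above can be pictured with a small sketch: one sensor's frame is mapped into the other's coordinates with an affine transform and the two are blended with fixed weights. The transform values and weights below are illustrative placeholders, not the flight system's calibration, and both frames are assumed to share size and bit depth.

```python
# Affine registration followed by weighted-sum fusion of two sensor frames.
import cv2
import numpy as np

A = np.float32([[1.01, 0.00, 3.5],        # assumed affine calibration
                [0.00, 1.01, -2.0]])

def register_and_fuse(long_wave: np.ndarray, short_wave: np.ndarray,
                      w: float = 0.6) -> np.ndarray:
    """Warp short_wave into long_wave's coordinates, then blend."""
    h, wpx = long_wave.shape[:2]
    aligned = cv2.warpAffine(short_wave, A, (wpx, h))
    return cv2.addWeighted(long_wave, w, aligned, 1.0 - w, 0.0)
```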
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
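The core trick, replacing electron tracking with draws from a prerecorded photon probability density, can be shown in miniature with inverse-transform sampling from a tabulated PDF. The table values below are placeholder shapes, not actual 90Y bremsstrahlung data.

```python
# Inverse-transform sampling of photon energies from a tabulated PDF.
import numpy as np

energies_keV = np.linspace(1, 2280, 500)           # 90Y beta endpoint ~2.28 MeV
pdf = np.exp(-energies_keV / 300.0)                # placeholder shape, not data
cdf = np.cumsum(pdf)
cdf /= cdf[-1]                                     # normalise to a proper CDF

def sample_photon_energies(n: int, rng=np.random.default_rng()) -> np.ndarray:
    """Draw n photon energies by inverting the tabulated CDF."""
    return np.interp(rng.random(n), cdf, energies_keV)

print(sample_photon_energies(5))
```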
A poloidal section neutron camera for MAST upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sangaroon, S.; Weiszflog, M.; Cecconello, M.
2014-08-21
The Mega Ampere Spherical Tokamak Upgrade (MAST Upgrade) is intended as a demonstration of the physics viability of the spherical tokamak (ST) concept and as a platform for contributing to ITER/DEMO physics. Concerning physics exploitation, MAST Upgrade plasma scenarios can contribute to ITER tokamak physics, particularly in the fields of fast-particle behavior and current drive studies. At present, MAST is equipped with a prototype neutron camera (NC). On the basis of the experience and results from previous experimental campaigns using the NC, the conceptual design of a neutron camera upgrade (NC Upgrade) is being developed. As part of the MAST Upgrade, the NC Upgrade is considered a high-priority diagnostic, since it would allow studies in the field of fast ions and current drive with good temporal and spatial resolution. In this paper, we explore an optional design with the camera array viewing the poloidal section of the plasma from different directions.
Overview of image processing tools to extract physical information from JET videos
NASA Astrophysics Data System (ADS)
Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET
2014-11-01
In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the automatic detection of MARFE (multifaceted asymmetric radiation from the edge) occurrences, which precede disruptions in density limit discharges. An original spot detection method has been developed for large surveys of videos in JET, and for the assessment of the long term trends in their evolution. The analysis of JET IR videos, recorded during JET operation with the ITER-like wall, allows the retrieval of data and hence correlation of the evolution of spots properties with macroscopic events, in particular series of intentional disruptions.
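The optical-flow methodology referred to above can be illustrated with a minimal example using OpenCV's dense Farneback method on two consecutive video frames; the file names and parameter values are placeholders, not the JET analysis settings.

```python
# Dense optical flow between two consecutive frames (Farneback method).
import cv2

prev = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
# flow[..., 0] and flow[..., 1] hold per-pixel x and y displacements, from
# which speeds and trajectories of video objects can be derived.
```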
NASA Astrophysics Data System (ADS)
Shi, Yuejiang; Fu, Jia; Li, Jiahong; Yang, Yu; Wang, Fudi; Li, Yingying; Zhang, Wei; Wan, Baonian; Chen, Zhongyong
2010-03-01
The synchrotron radiation originating from energetic runaway electrons has been measured by a visible complementary metal oxide semiconductor camera working in the wavelength range of 380-750 nm in the Experimental Advanced Superconducting Tokamak [H. Q. Liu et al., Plasma Phys. Contr. Fusion 49, 995 (2007)]. With a tangential view into the plasma in the direction of electron approach on the equatorial plane, the synchrotron radiation from the energetic runaway electrons was measured over the full poloidal cross section. The synchrotron radiation diagnostic provides a direct pattern of the runaway beam inside the plasma. The energy and pitch angle of the runaway electrons have been obtained from the synchrotron radiation pattern. A stable shell shape of synchrotron radiation has been observed in a few runaway discharges.
Detection of unmanned aerial vehicles using a visible camera system.
Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C
2017-01-20
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
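The following is a hedged sketch, in Python with OpenCV, of the pipeline stages named above: motion features from frame differencing, blob analysis, and a simple temporal coherence check. The horizon-finding stage and the higher-order-moment variant are omitted, and all thresholds are illustrative, not the paper's values.

```python
# Sketch of motion-feature extraction, blob analysis, and coherence
# analysis for small-target detection; parameters are illustrative.
import cv2
import numpy as np

def detect_moving_blobs(prev_gray, gray, thresh=25, min_area=20):
    # Motion feature: absolute frame difference above a threshold.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Blob analysis: connected components with size statistics.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def coherent(track, max_jump=15.0):
    # Coherence check: a real target moves smoothly between frames,
    # while noise blobs jump around at random.
    steps = [np.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    return len(steps) > 0 and max(steps) < max_jump
```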
Opto-mechanical system design of test system for near-infrared and visible target
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Zhu, Guodong; Wang, Yuchao
2014-12-01
Guidance precision is a key index of guided-weapon accuracy. The factors affecting guidance precision include information-processing precision, control-system accuracy, laser-irradiation accuracy, and so on; laser-irradiation precision is an important factor. Aimed at the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain system, a wide-range CCD camera, a tracking turntable, and an industrial PC, and it images the visible-light and near-infrared target simultaneously with a near-IR camera. Analysis of the design results shows that, when illuminating a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of the laser precision test.
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2016-10-01
In this paper, the potential capability of short-wavelength infrared laser gated-viewing for penetrating the pyrotechnic effects smoke and light/heat has been investigated by evaluating data from conducted field trials. The potential of thermal infrared cameras for this purpose has also been considered and the results have been compared to conventional visible cameras as benchmark. The application area is the use in soccer stadiums where pyrotechnics are illegally burned in dense crowds of people obstructing visibility of stadium safety staff and police forces into the involved section of the stadium. Quantitative analyses have been carried out to identify sensor performances. Further, qualitative image comparisons have been presented to give impressions of image quality during the disruptive effects of burning pyrotechnics.
Challenges and solutions for high performance SWIR lens design
NASA Astrophysics Data System (ADS)
Gardner, M. C.; Rogers, P. J.; Wilde, M. F.; Cook, T.; Shipton, A.
2016-10-01
Shortwave infrared (SWIR) cameras are becoming increasingly attractive due to the improving size and resolution and decreasing prices of InGaAs focal plane arrays (FPAs). The rapid development of competitively priced HD-performance SWIR cameras has not been matched in SWIR imaging lenses, with the result that the lens is now more likely than the FPA to be the limiting factor in imaging quality. Adapting existing lens designs from the visible region by re-coating for SWIR will improve total transmission, but diminished image-quality metrics such as MTF, and in particular poor large-field-angle performance (vignetting, field curvature, and distortion), are serious consequences. To meet this challenge, original SWIR solutions are presented, including a wide-field-of-view fixed-focal-length lens for commercial machine vision (CMV) and a wide-angle, small, lightweight defence lens, and their relevant design considerations are discussed. Issues restricting suitable glass types are examined. The index and dispersion properties at SWIR wavelengths can differ significantly from their visible values, resulting in unusual glass combinations when matching doublet elements. The chosen materials simultaneously allow athermalization of the design and provide matched CTEs within the elements of doublets. Recently, thinned backside-illuminated InGaAs devices have made Vis.SWIR cameras viable. The SWIR band is sufficiently close to the visible that the same constituent materials can be used for AR coatings covering both bands. Keeping the lens short and the mass low can easily result in high incidence angles, which in turn complicates coating design, especially when extended beyond SWIR into the visible band. This paper also explores the potential performance of wideband Vis.SWIR AR coatings.
Design of Dual-Road Transportable Portal Monitoring System for Visible Light and Gamma-Ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Cunningham, Mark F; Goddard Jr, James Samuel
2010-01-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids the source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third alignment camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids the source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
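The per-vehicle signature integration described in both versions of this paper can be summarized in a small sketch; the data structures below are hypothetical stand-ins for the synchronized machine-vision and gamma-imager outputs.

```python
# Illustrative sketch: while machine vision reports a vehicle in the field
# of view, gamma-ray counts attributed to that vehicle's position are
# accumulated into its signature. Names and structures are hypothetical.
from collections import defaultdict

def integrate_signatures(frames):
    """frames: iterable of (vehicle_ids_in_view, counts_by_vehicle), where
    counts_by_vehicle maps a vehicle id to the gamma counts the imager
    attributes to that vehicle's position in this synchronized frame."""
    signatures = defaultdict(float)
    for ids_in_view, counts in frames:
        for vid in ids_in_view:
            signatures[vid] += counts.get(vid, 0.0)
    return dict(signatures)  # vehicle id -> total integrated counts
```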
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
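The rolling shutter artifact the paper compensates for comes from the row-by-row exposure of CMOS sensors. Below is a minimal sketch of the timing model, assuming a constant per-row readout time; the paper's actual compensation algorithm is more involved.

```python
# Each image row is captured at a slightly different time, so a feature
# detected at row r on a moving vehicle must be corrected for the motion
# accumulated since the frame start. Sketch only; constant row time assumed.
def row_capture_time(t_frame_start, row, t_line):
    # t_line: readout time per row (s); row 0 is read out first.
    return t_frame_start + row * t_line

def compensate(u, row, speed_px_per_s, t_line):
    """Shift the x-coordinate u of a feature detected at image row `row`
    back to its position at frame start, for horizontal motion."""
    dt = row * t_line
    return u - speed_px_per_s * dt
```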
Zheng, Z. Q.; Yao, J. D.; Wang, B.; Yang, G. W.
2015-01-01
In recent years, owing to the significant applications of health monitoring, wearable electronic devices such as smart watches, smart glasses and wearable cameras have been growing rapidly. The gas sensor is an important part of wearable electronic devices for detecting pollutant, toxic, and combustible gases. However, for application in wearable electronic devices, the gas sensor needs to be flexible, transparent, and able to work at room temperature, properties not available in traditional gas sensors. Here, we fabricate for the first time a light-controlled, flexible, transparent ethanol gas sensor that works at room temperature, using commercial ZnO nanoparticles. The fabricated sensor not only exhibits a fast and excellent photoresponse, but also shows a high sensing response to ethanol under UV irradiation. Meanwhile, its transmittance exceeds 62% in the visible spectral range, and the sensing performance remains unchanged even when the sensor is bent at a curvature angle of 90°. Additionally, using commercial ZnO nanoparticles provides a facile and low-cost route to fabricating wearable electronic devices. PMID:26076705
Zheng, Z Q; Yao, J D; Wang, B; Yang, G W
2015-06-16
In recent years, owing to the significant applications of health monitoring, wearable electronic devices such as smart watches, smart glasses and wearable cameras have been growing rapidly. The gas sensor is an important part of wearable electronic devices for detecting pollutant, toxic, and combustible gases. However, for application in wearable electronic devices, the gas sensor needs to be flexible, transparent, and able to work at room temperature, properties not available in traditional gas sensors. Here, we fabricate for the first time a light-controlled, flexible, transparent ethanol gas sensor that works at room temperature, using commercial ZnO nanoparticles. The fabricated sensor not only exhibits a fast and excellent photoresponse, but also shows a high sensing response to ethanol under UV irradiation. Meanwhile, its transmittance exceeds 62% in the visible spectral range, and the sensing performance remains unchanged even when the sensor is bent at a curvature angle of 90°. Additionally, using commercial ZnO nanoparticles provides a facile and low-cost route to fabricating wearable electronic devices.
Flight demonstration of a milliarcsecond pointing system for direct exoplanet imaging.
Mendillo, Christopher B; Chakrabarti, Supriya; Cook, Timothy A; Hicks, Brian A; Lane, Benjamin F
2012-10-10
We present flight results from the optical pointing control system onboard the Planetary Imaging Concept Testbed Using a Rocket Experiment (PICTURE) sounding rocket. PICTURE (NASA mission number: 36.225 UG) was launched on 8 October 2011, from White Sands Missile Range. It attempted to directly image the exozodiacal dust disk of ϵ Eridani (K2V, 3.22 pc) down to an inner radius of 1.5 AU using a visible nulling coronagraph. The rocket attitude control system (ACS) provided 627 milliarcsecond (mas) RMS body pointing (~2'' peak-to-valley). The PICTURE fine pointing system (FPS) successfully stabilized the telescope beam to 5.1 mas (0.02λ/D) RMS using an angle tracker camera and fast steering mirror. This level of pointing stability is comparable to that of the Hubble Space Telescope. We present the hardware design of the FPS, a description of the limiting noise sources and a power spectral density analysis of the FPS and rocket ACS in-flight performance.
Initial operation of the Lockheed Martin T4B experiment
NASA Astrophysics Data System (ADS)
Garrett, M. L.; Blinzer, A.; Ebersohn, F.; Gucker, S.; Heinrich, J.; Lohff, C.; McGuire, T.; Montecalvo, N.; Raymond, A.; Rhoads, J.; Ross, P.; Sommers, B.; Strandberg, E.; Sullivan, R.; Walker, J.
2017-10-01
The T4B experiment is a linear, encapsulated ring cusp confinement device, designed to develop a physics and technology basis for a follow-on high-beta (β ~ 1) machine. The experiment consists of 13 magnetic field coils (11 external, 2 internal), to produce a series of on-axis field nulls surrounded by modest magnetic fields of up to 0.3 T. The primary plasma source used on T4B is a lanthanum hexaboride (LaB6) cathode, capable of coupling over 100 kW into the plasma. Initial testing focused on commissioning of components and integration of diagnostics. Diagnostics include both long and short wavelength interferometry, bolometry, visible and X-ray spectroscopy, Langmuir and B-dot probes, Thomson scattering, flux loops, and fast camera imagery. Low energy discharges were used to begin validation of physics models and simulation efforts. Following the initial machine check-out, neutral beam injection (NBI) was integrated onto the device. Detailed results will be presented. © 2017 Lockheed Martin Corporation. All Rights Reserved.
2015-05-11
From a distance Saturn seems to exude an aura of serenity and peace. In spite of this appearance, Saturn is an active and dynamic world. Its atmosphere is a fast-moving and turbulent place with wind speeds in excess of 1,100 miles per hour (1,800 km per hour) in places. The lack of a solid surface to create drag means that there are fewer features to slow down the wind than on a planet like Earth. Mimas, to the upper-right of Saturn, has been brightened by a factor of 2 for visibility. In this view, Cassini was at a subspacecraft latitude of 19 degrees North. The image was taken with the Cassini spacecraft wide-angle camera on Feb. 4, 2015 using a spectral filter centered at 752 nanometers, in the near-infrared portion of the spectrum. The view was obtained at a distance of approximately 1.6 million miles (2.5 million kilometers) from Saturn. Image scale is 96 miles (150 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18314
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oz, E.; Myers, C. E.; Yamada, M.
2011-07-19
The stability properties of partial toroidal flux ropes are studied in detail in the laboratory, motivated by ubiquitous arched magnetic structures found on the solar surface. The flux ropes studied here are magnetized arc discharges formed between two electrodes in the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys. Plasmas 4, 1936 (1997)]. The three dimensional evolution of these flux ropes is monitored by a fast visible light framing camera, while their magnetic structure is measured by a variety of internal magnetic probes. The flux ropes are consistently observed to undergo large-scale oscillations as a result of an external kink instability. Using detailed scans of the plasma current, the guide field strength, and the length of the flux rope, we show that the threshold for kink stability is governed by the Kruskal-Shafranov limit for a flux rope that is held fixed at both ends (i.e., q{sub a} = 1).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oz, E.; Myers, C. E.; Yamada, M.
2011-10-15
The stability properties of partial-toroidal flux ropes are studied in detail in the laboratory, motivated by ubiquitous arched magnetic structures found on the solar surface. The flux ropes studied here are magnetized arc discharges formed between two electrodes in the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys. Plasmas 4, 1936 (1997)]. The three dimensional evolution of these flux ropes is monitored by a fast visible light framing camera, while their magnetic structure is measured by a variety of internal magnetic probes. The flux ropes are consistently observed to undergo large-scale oscillations as a result of an external kink instability. Using detailed scans of the plasma current, the guide field strength, and the length of the flux rope, we show that the threshold for kink stability is governed by the Kruskal-Shafranov limit for a flux rope that is held fixed at both ends (i.e., q{sub a} = 1).
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) the imagery of human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan tile angles of capture, face visibility and others. Such objective function serves to effectively balance the number of captures per subject and quality of captures. In the experiments, we demonstrate the performance of the system which operates in real-time under real world conditions on three PTZ cameras.
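A hedged sketch of a capture-quality objective function of the kind described, with illustrative weights and terms that are not the paper's exact formulation:

```python
# Weighted capture score over camera-subject distance, pan/tilt deviation,
# and face visibility; weights, the ideal distance, and the terms themselves
# are assumptions for illustration.
import math

def capture_score(dist_m, pan_deg, tilt_deg, face_visible,
                  ideal_dist=8.0, w=(0.5, 0.3, 0.2)):
    d_term = math.exp(-abs(dist_m - ideal_dist) / ideal_dist)
    a_term = math.cos(math.radians(pan_deg)) * math.cos(math.radians(tilt_deg))
    f_term = 1.0 if face_visible else 0.0
    return w[0] * d_term + w[1] * max(a_term, 0.0) + w[2] * f_term

def assign(cameras, subjects, score):
    # Greedy hand-off: each PTZ camera takes the subject it can currently
    # capture best (score(camera, subject) -> float).
    return {cam: max(subjects, key=lambda s: score(cam, s)) for cam in cameras}
```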
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high resolution camera with a large field of view that is capable of imaging dim emissions in the far-ultraviolet is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes and is capable of a spatial resolution of >20 km. The optics and filters are emphasized.
The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory
NASA Technical Reports Server (NTRS)
Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.
2005-01-01
Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.
NASA Astrophysics Data System (ADS)
Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi
2011-03-01
Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) on large biological structures (membranes) with fast frame rates (1000 Hz). This trend tends to push the photon detectors to the single-photon counting regime and the camera acquisition system to real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and tries to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photo-cathode, to the possible ultra fast frame rate of CMOS sensors and to the single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept for single-photon based tracking of multiple single emitters is the main result of this paper.
Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.
2016-10-09
We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.
Thermographic measurements of high-speed metal cutting
NASA Astrophysics Data System (ADS)
Mueller, Bernhard; Renz, Ulrich
2002-03-01
Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To realize images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up has been realized which enables small cutting lengths. Only single images have been recorded because the process is too fast to acquire a sequence of images even with the frame rate of the very fast infrared camera which has been used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained by a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.
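The motion-blur constraint motivating the microsecond integration times can be checked with simple arithmetic; the 2 µs exposure below is an assumed value within the stated "few microseconds".

```python
# Worked numbers (illustrative, not from the paper): image smear during one
# exposure is cutting speed x integration time, compared with the ~45 micron
# spatial resolution quoted above.
v = 42.0          # cutting speed, m/s
t_int = 2e-6      # integration time, s (assumed "few microseconds")
pixel = 45e-6     # spatial resolution, m
blur = v * t_int  # smear during exposure, m
print(f"blur = {blur*1e6:.0f} um = {blur/pixel:.1f} resolution elements")
# -> 84 um, i.e. about two resolution elements at a 2 us exposure.
```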
Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen
2015-09-21
Non-intrusive fast 3d measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique to add the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired with a measurement rate of 0.5 kHz with a single high-speed camera.
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, aiming to search for fast nanosecond laser pulses, has been commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APD) with a field-of-view of 2.5"x2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milliarcseconds/pixel and the magnitude limit as 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
Staking out Curiosity Landing Site
2012-08-09
The geological context for the landing site of NASA's Curiosity rover is visible in this image mosaic obtained by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Night Vision and Electro-Optics Technology Transfer, 1972-1981
1981-09-15
Lixiscope offers potential applications as: a handheld instrument for dental radiography giving real-time observation in orthodontic procedures; a portable...laboratory are described below. There are, however, no hard and fast rules. The laboratory's experimentation with different films, brackets, cameras and...good single lens reflex camera; an exposure meter; a tripod; and a custom-built bracket to mate the camera and intensifier (Figure 2-1).
HandSight: Supporting Everyday Activities through Touch-Vision
2015-10-01
o switches between IR and RGB
o large, low resolution, and fixed focal length > 1 ft
• Raspberry Pi NoIR: https://www.raspberrypi.org/products/pi-noir-camera/
o Raspberry Pi NoIR camera with external visible light filters
o good image quality, manually adjustable focal length, small, programmable
LOFT complex, camera facing west. Mobile entry (TAN624) is position ...
LOFT complex, camera facing west. Mobile entry (TAN-624) is positioned next to containment building (TAN-650). Shielded roadway entrance in view just below and to right of stack. Borated water tank has been covered with weather shelter and is no longer visible. ANP hangar (TAN-629) in view beyond LOFT. Date: 1974. INEEL negative no. 74-4191 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
A visible light imaging device for cardiac rate detection with reduced effect of body movement
NASA Astrophysics Data System (ADS)
Jiang, Xiaotian; Liu, Ming; Zhao, Yuejin
2014-09-01
A visible light imaging system to detect the human cardiac rate is proposed in this paper. A color camera and several LEDs, acting as the lighting source, were used to avoid interference from ambient light. From a person's forehead, the cardiac rate can be acquired based on photoplethysmography (PPG) theory. A template matching method was applied after video capture. The video signal was decomposed into three channels (RGB), and a region of interest was chosen in which to take the average gray value. The green channel provides an excellent pulse waveform owing to blood's strong absorption of green light. Through a fast Fourier transform, the cardiac rate was accurately extracted. The goal, however, was not only to measure the cardiac rate accurately: with the template matching method, the effects of body movement are reduced to a large extent, so the pulse wave can be detected even while the subject is moving, and the waveform is largely optimized. Several experiments were conducted on volunteers, and the results were compared with those from a finger-clamp pulse oximeter; the two methods agree closely. This method of detecting the cardiac rate and pulse wave largely reduces the effects of body movement and could be widely used in the future.
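A minimal sketch of the green-channel PPG extraction described, omitting the template-matching motion compensation; the frame rate and ROI are assumed inputs.

```python
# Average the green channel over the forehead ROI per frame, then locate
# the dominant frequency with an FFT. Sketch only; assumes enough frames
# for a clean spectral peak.
import numpy as np

def cardiac_rate_bpm(frames, roi, fps):
    """frames: list of HxWx3 RGB arrays; roi: (y0, y1, x0, x1); fps: Hz."""
    y0, y1, x0, x1 = roi
    g = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])
    g = g - g.mean()                       # remove the DC level
    spec = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)   # plausible 42-240 bpm band
    return 60.0 * freqs[band][np.argmax(spec[band])]
```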
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-01-01
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in a visible light environment makes iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the institute of automation, Chinese academy of sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets. PMID:29748495
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-05-10
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in a visible light environment makes iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the institute of automation, Chinese academy of sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.
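For readers unfamiliar with dense connectivity, the following is a generic DenseNet-style block in PyTorch illustrating the "better information gradient flow between the dense blocks" that IrisDenseNet builds on; it is not the authors' exact architecture.

```python
# Generic dense block: each layer receives the concatenation of all earlier
# feature maps, shortening gradient paths through the network.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1, bias=False)))
            ch += growth  # the next layer sees all previous outputs

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```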
DC drive system for cine/pulse cameras
NASA Technical Reports Server (NTRS)
Gerlach, R. H.; Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.
1977-01-01
Camera-drive functions are separated mechanically into two groups which are driven by two separate dc brushless motors. First motor, a 90 deg stepper, drives rotating shutter; second electronically commutated motor drives claw and film transport. Shutter is made of one piece but has two openings for slow and fast exposures.
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS?s Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
Earth Observations taken by the Expedition 10 crew
2005-01-17
ISS010-E-13680 (17 January 2005) --- The border of Galveston and Brazoria Counties in Texas is visible in this electronic still camera's image, as photographed by the Expedition 10 crew onboard the International Space Station. Polly Ranch, near Friendswood, is visible west of Interstate Highway 45 (right side). FM528 goes horizontally through the middle, and FM518 runs vertically through frame center, with the two roads intersecting near Friendswood.
A drone detection with aircraft classification based on a camera array
NASA Astrophysics Data System (ADS)
Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong
2018-03-01
In recent years, because of the rapid popularity of drones, many people have begun to operate them, bringing a range of security issues to sensitive areas such as airports and military sites. Realizing fine-grained classification that provides fast and accurate detection of different drone models is one important way to address these problems. The main challenges of fine-grained classification are that: (1) there are various types of drones, and the models are complex and diverse; (2) recognition must be fast and accurate, and the existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately perform fine-grained drone detection based on the HD camera array.
September 2006 Monthly Report- ITER Visible/IRTV Optical Design Scoping Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lasnier, C
LLNL received a request from the US ITER organization to perform a scoping study of optical design for visible/IR camera systems for the 6 upper ports of ITER. A contract was put in place and the LLNL account number was opened July 19, 2006. A kickoff meeting was held at LLNL July 26. The principal work under the contract is being performed by Lynn Seppala (optical designer), Kevin Morris (mechanical designer), Max Fenstermacher (visible cameras), Mathias Groth (assisting with visible cameras), and Charles Lasnier (IR cameras and Principal Investigator), all LLNL employees. Kevin Morris has imported ITER CAD files and developed a simplified 3D view of the ITER tokamak with upper ports, which he used to determine the optimum viewing angle from an upper port to see the outer target. He also determined the minimum angular field of view needed to see the largest possible coverage of the outer target. We examined the CEA-Cadarache report on their optical design for ITER visible/IRTV equatorial ports. We found that the resolution was diffraction-limited by the 5-mm aperture through the tile. Lynn Seppala developed a similar front-end design for an upper port but with a larger 6-inch-diameter beam. This allows the beam to pass through the port plug and port interspace without further focusing optics until outside the bioshield. This simplifies the design as well as eliminating a requirement for complex relay lenses in the port interspace. The focusing optics are all mirrors, which allows the system to handle light from 0.4 {micro}m to 5 {micro}m wavelength without chromatic aberration. The window material chosen is sapphire, as in the CEA design. Sapphire has good transmission in the desired wavelengths up to 4.8 {micro}m, as well as good mechanical strength. We have verified that sapphire windows of the needed size are commercially available. The diffraction-limited resolution permitted by the 5 mm aperture falls short of the ITER specification value but is well-matched to the resolution of current detectors. A large increase in resolution would require a similar increase in the linear pixel count on a detector. However, we cannot increase the aperture much without affecting the image quality. Lynn Seppala is writing a memo detailing the resolution trade-offs. Charles Lasnier is calculating the radiated power which will fall on the detector, in order to estimate the signal-to-noise ratio and maximum frame rate. The signal will be reduced by the fact that the outer target plates are tungsten, which radiates less than carbon at the same temperature. The tungsten will also reflect radiation from the carbon tiles of the private flux dome, which will radiate efficiently although at a lower temperature than the target plates. The analysis will include estimates of these effects. Max Fenstermacher is investigating the intensity of line emission that will be emitted in the visible band, in order to predict the signal-to-noise ratio and maximum frame rate for the visible camera. Andre Kukushkin has modeling results that will give local emission of deuterium and carbon lines. Line integrals of the emission must be done to produce the emitted intensity. The model is not able to handle tungsten and beryllium, so we will only be able to estimate deuterium and carbon emission. Total costs as of September 30, 2006 are $87,834.43. Manpower was 0.58 FTEs in July, 1.48 in August, and 1.56 in September.
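The diffraction argument in the report can be reproduced with the Rayleigh criterion θ ≈ 1.22 λ/D; the 6 m standoff used below is a hypothetical distance for illustration only.

```python
# Back-of-envelope diffraction arithmetic for the aperture trade-off
# discussed above: 5 mm tile aperture vs the 6-inch (152.4 mm) beam, at the
# band edges 0.4 um and 5 um. Standoff distance is assumed, not from the report.
wavelengths = {"visible 0.4 um": 0.4e-6, "IR 5 um": 5e-6}
standoff = 6.0  # m, hypothetical
for D in (5e-3, 0.1524):
    for name, lam in wavelengths.items():
        theta = 1.22 * lam / D  # diffraction-limited angle, rad
        print(f"D = {D*1e3:5.1f} mm, {name}: "
              f"{theta*standoff*1e3:.2f} mm at a {standoff:.0f} m standoff")
```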
2001 Mars Odyssey Images Earth (Visible and Infrared)
NASA Technical Reports Server (NTRS)
2001-01-01
2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) acquired these images of the Earth using its visible and infrared cameras as it left the Earth. The visible image shows the thin crescent viewed from Odyssey's perspective. The infrared image was acquired at exactly the same time, but shows the entire Earth using the infrared's 'night-vision' capability. In visible light the instrument sees only reflected sunlight and therefore sees nothing on the night side of the planet. In infrared light the camera observes the light emitted by all regions of the Earth. The coldest ground temperatures seen correspond to the nighttime regions of Antarctica; the warmest temperatures occur in Australia. The low temperature in Antarctica is minus 50 degrees Celsius (minus 58 degrees Fahrenheit); the high temperature at night in Australia is 9 degrees Celsius (48.2 degrees Fahrenheit). These temperatures agree remarkably well with observed temperatures of minus 63 degrees Celsius at Vostok Station in Antarctica, and 10 degrees Celsius in Australia. The images were taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001 as the Odyssey spacecraft left Earth.
Pattern recognition applied to infrared images for early alerts in fog
NASA Astrophysics Data System (ADS)
Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien
2014-09-01
Fog conditions cause severe car accidents in western countries because of the poor visibility they induce. Fog formation and intensity are still very difficult for weather services to forecast. Infrared cameras can detect and identify objects in fog when visibility is too low for detection by eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. On the other hand, pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during its natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangle for an alert, …). Road signs were detected early enough in the infrared images, compared with images in the visible spectrum, to trigger useful alerts for Advanced Driver Assistance Systems.
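A minimal sketch of the Canny/Hough approach named above, applied to a single hypothetical infrared frame ("ir.png"); all parameters are illustrative.

```python
# Circular blobs in the edge map are candidate speed-limit signs emerging
# from the fog. HoughCircles runs its own Canny stage internally
# (param1 is the upper Canny threshold).
import cv2
import numpy as np

img = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR frame
img = cv2.GaussianBlur(img, (5, 5), 1.5)           # suppress sensor noise
edges = cv2.Canny(img, 50, 150)                    # edge map, for inspection
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=150, param2=30, minRadius=8, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate circular sign at ({x}, {y}), radius {r} px")
```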
1. GENERAL VIEW OF SLC3W SHOWING SOUTH FACE AND EAST ...
1. GENERAL VIEW OF SLC-3W SHOWING SOUTH FACE AND EAST SIDE OF A-FRAME MOBILE SERVICE TOWER (MST). MST IN SERVICE POSITION OVER LAUNCHER AND FLAME BUCKET. CABLE TRAYS BETWEEN LAUNCH OPERATIONS BUILDING (BLDG. 763) AND SLC-3W IN FOREGROUND. LIQUID OXYGEN APRON VISIBLE IMMEDIATELY EAST (RIGHT) OF MST; FUEL APRON VISIBLE IMMEDIATELY WEST (LEFT) OF MST. A PORTION OF THE FLAME BUCKET VISIBLE BELOW THE SOUTH FACE OF THE MST. CAMERA TOWERS VISIBLE EAST OF MST BETWEEN ROAD AND CABLE TRAY, AND SOUTH OF MST NEAR LEFT MARGIN OF PHOTOGRAPH. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 West, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
2012-08-20
With the addition of four high-resolution Navigation Camera (Navcam) images taken on Aug. 18 (Sol 12), Curiosity's 360-degree landing-site panorama now includes the highest point on Mount Sharp visible from the rover.
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA MONTAGE
NASA Technical Reports Server (NTRS)
2002-01-01
This picture, taken in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2), represents a sweeping view of the 30 Doradus Nebula. But Hubble's infrared camera - the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) - has probed deeper into smaller regions of this nebula to unveil the stormy birth of massive stars. The montages of images in the upper left and upper right represent this deeper view. Each square in the montages is 15.5 light-years (19 arcseconds) across. The brilliant cluster R136, containing dozens of very massive stars, is at the center of this image. The infrared and visible-light views reveal several dust pillars that point toward R136, some with bright stars at their tips. One of them, at left in the visible-light image, resembles a fist with an extended index finger pointing directly at R136. The energetic radiation and high-speed material emitted by the massive stars in R136 are responsible for shaping the pillars and causing the heads of some of them to collapse, forming new stars. The infrared montage at upper left is enlarged in an accompanying image. Credits for NICMOS montages: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
An electrically tunable plenoptic camera using a liquid crystal microlens array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Yu; School of Automation, Huazhong University of Science and Technology, Wuhan 430074; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074
2015-05-15
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses a 800 nm long pass filter which during overcast conditions blocks sufficient background light so the hydrogen flame is brighter than the background light, and the second CCD camera uses a 1100 nm long pass filter, which blocks the solar background in full sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signal from the cameras into a visible image. The operator can select the appropriate filtered camera to use depending on the current light conditions. In addition, a narrow band pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.
An electrically tunable plenoptic camera using a liquid crystal microlens array
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
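For contrast with the optical tuning described in these papers, computational refocusing of plenoptic data is conventionally done by shift-and-add over sub-aperture images; a hedged numpy sketch of that standard technique follows.

```python
# Shifting the sub-aperture images against each other before summing moves
# the synthetic plane of focus; the tunable LCMLA achieves a comparable
# focal shift optically. Integer-pixel shifts only, for brevity.
import numpy as np

def refocus(subviews, alpha):
    """subviews: dict mapping angular index (i, j), centered on 0, to a 2-D
    sub-aperture image; alpha: relative focal plane (0 = nominal focus)."""
    acc, n = None, 0
    for (i, j), img in subviews.items():
        shifted = np.roll(np.roll(img, int(round(alpha * i)), axis=0),
                          int(round(alpha * j)), axis=1)
        acc = shifted.astype(float) if acc is None else acc + shifted
        n += 1
    return acc / n
```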
Group Delay Tracking with the Sydney University Stellar Interferometer
NASA Astrophysics Data System (ADS)
Lawson, Peter R.
1994-08-01
The Sydney University Stellar Interferometer (SUSI) is a long baseline optical interferometer, located at the Paul Wild Observatory near Narrabri, in northern New South Wales, Australia. It is designed to measure stellar angular diameters using light collected from a pair of siderostats, with 11 fixed siderostats giving separations between 5 and 640 m. Apertures smaller than Fried's coherence length, r_0, are used and active tilt-compensation is employed. This ensures that when the beams are combined in the pupil plane the wavefronts are parallel. Fringes are detected when the optical path-difference between the arriving wavefronts is less than the coherence length of light used for the observation. While observing a star it is necessary to compensate for the changes in pathlength due to the earth's rotation. It is also highly desirable to compensate for path changes due to the effects of atmospheric turbulence. Tracking the path-difference permits an accurate calibration of the fringe visibility, allows larger bandwidths to be used, and therefore improves the sensitivity of the instrument. I describe a fringe tracking system which I developed for SUSI, based on group delay tracking with a PAPA (Precision Analog Photon Address) detector. The method uses short exposure images of fringes, 1-10 ms, detected in the dispersed spectra of the combined starlight. The number of fringes across a fixed bandwidth of channeled spectrum is directly proportional to the path-difference between the arriving wavefronts. A Fast Fourier Transform, implemented in hardware, is used to calculate the spatial power spectrum of the fringes, thereby locating the delay. The visibility loss due to a non-constant fringe spacing on the detector is investigated, and the improvements obtained from rebinning the photon data are shown. The low light level limitations of group delay tracking are determined theoretically with emphasis on the probability of tracking error, rather than the signal-to-noise ratio. Experimental results from both laboratory studies and stellar observations are presented. These show the first closed-loop operation of a fringe tracking system based on observations of group delay with a stellar interferometer. The Sydney University PAPA camera, a photon counting array detector developed for use in this work, is also described. The design principles of the PAPA camera are outlined and the potential sources of image artifacts are identified. The artifacts arise from the use of optical encoding with Gray coded masks, and the new camera is distinguished by its mask-plate, which was designed to overcome artifacts due to vignetting. New lens mounts are also presented which permit a simplified optical alignment without the need for tilt-plates. The performance of the camera is described. (SECTION: Dissertation Summaries)
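The core of the group-delay method, the fringe frequency in the channeled spectrum being proportional to the optical path difference, can be demonstrated numerically; the values below are illustrative, not SUSI's.

```python
# The channeled spectrum I(sigma) ~ 1 + V*cos(2*pi*sigma*d) carries fringes
# whose frequency in wavenumber equals the path difference d, so an FFT of
# the spectrum peaks at the delay.
import numpy as np

d_true = 30e-6                              # path difference, m (assumed)
sigma = np.linspace(1.4e6, 2.0e6, 256)      # wavenumbers, 1/m (~500-700 nm)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * sigma * d_true)

power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
delays = np.fft.rfftfreq(sigma.size, d=sigma[1] - sigma[0])  # units of m
print(f"recovered delay = {delays[np.argmax(power)]*1e6:.1f} um")
# -> ~30 um, limited by the delay-bin resolution 1/(N * d_sigma).
```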
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with a subelectron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, which is a truly disruptive technology in imaging. We will show the performance of the camera and its main features, and compare them to other high performance wavefront sensing cameras such as OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.
NASA Astrophysics Data System (ADS)
Heldmann, Jennifer L.; Lamb, Justin; Asturias, Daniel; Colaprete, Anthony; Goldstein, David B.; Trafton, Laurence M.; Varghese, Philip L.
2015-07-01
The LCROSS (Lunar Crater Observation and Sensing Satellite) impacted the Cabeus crater near the lunar South Pole on 9 October 2009 and created an impact plume that was observed by the LCROSS Shepherding Spacecraft. Here we analyze data from the ultraviolet-visible spectrometer and visible context camera aboard the spacecraft. We use these data to constrain a numerical model to understand the physical evolution of the resultant plume. The UV-visible light curve peaks in brightness 18 s after impact and then decreases in radiance but never returns to the pre-impact radiance value for the ∼4 min of observation by the Shepherding Spacecraft. The blue:red spectral ratio increases in the first 10 s, decreases over the following 50 s, remains constant for approximately 150 s, and then begins to increase again ∼180 s after impact. Constraining the modeling results with spacecraft observations, we conclude that lofted dust grains remained suspended above the lunar surface for the entire 250 s of observation after impact. The impact plume was composed of both a high angle spike and low angle plume component. Numerical modeling is used to evaluate the relative effects of various plume parameters to further constrain the plume properties when compared with the observational data. Dust particle sizes lofted above the lunar surface were micron to sub-micron in size. Water ice particles were also contained within the ejecta cloud and simultaneously photo-dissociated and sublimated after reaching sunlight.
GETTING TO THE HEART OF A GALAXY
NASA Technical Reports Server (NTRS)
2002-01-01
This collage of images in visible and infrared light reveals how the barred spiral galaxy NGC 1365 is feeding material into its central region, igniting massive star birth and probably causing its bulge of stars to grow. The material also is fueling a black hole in the galaxy's core. A galaxy's bulge is a central, football-shaped structure composed of stars, gas, and dust. The black-and-white image in the center, taken by a ground-based telescope, displays the entire galaxy. But the telescope's resolution is not powerful enough to reveal the flurry of activity in the galaxy's hub. The blue box in the galaxy's central region outlines the area observed by the NASA Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The red box pinpoints a narrower view taken by the Hubble telescope's infrared camera, the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). A barred spiral is characterized by a lane of stars, gas, and dust slashing across a galaxy's central region. It has a small bulge that is dominated by a disk of material. The spiral arms begin at both ends of the bar. The bar is funneling material into the hub, which triggers star formation and feeds the bulge. The visible-light picture at upper left is a close-up view of the galaxy's hub. The bright yellow orb is the nucleus. The dark material surrounding the orb is gas and dust that is being funneled into the central region by the bar. The blue regions pinpoint young star clusters. In the infrared image at lower right, the Hubble telescope penetrates the dust seen in the WFPC2 picture to reveal more clusters of young stars. The bright blue dots represent young star clusters; the brightest of the red dots are young star clusters enshrouded in dust and visible only in the infrared image. The fainter red dots are older star clusters. The WFPC2 image is a composite of three filters: near-ultraviolet (3327 Angstroms), visible (5552 Angstroms), and near-infrared (8269 Angstroms). The NICMOS image, taken at a wavelength of 16,000 Angstroms, was combined with the visible and near-infrared wavelengths taken by WFPC2. The WFPC2 image was taken in January 1996; the NICMOS data were taken in April 1998. Credits for the ground-based image: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for the WFPC2 image: NASA and John Trauger (Jet Propulsion Laboratory) Credits for the NICMOS image: NASA, ESA, and C. Marcella Carollo (Columbia University)
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations of applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases diminish as the mean particle velocity increases and approach a minimum once the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
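The core velocimetry step described in this abstract, differencing matched particle positions between frames, is compact enough to sketch. A minimal illustration in Python, assuming particles have already been matched across frames (the coordinates and frame rate below are invented for the example, not taken from the study):

```python
import numpy as np

def ptv_velocities(pos_a, pos_b, dt):
    """Velocities of matched tracer particles from two frames dt seconds
    apart. pos_a, pos_b: (N, 2) arrays of particle centroids in metres,
    where row i of each array refers to the same physical particle."""
    return (np.asarray(pos_b) - np.asarray(pos_a)) / dt

pa = np.array([[0.100, 0.200]])            # centroid in frame k
pb = np.array([[0.112, 0.206]])            # centroid in frame k+1
print(ptv_velocities(pa, pb, dt=1 / 15))   # [[0.18, 0.09]] m/s at 15 Hz
```

Position uncertainty and the choice of dt both propagate directly into the velocity estimate through this quotient, which is why the camera's temporal and spatial resolutions matter so much here.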
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
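The quoted sub-window speed-up can be approximated with simple scaling arithmetic. A rough sketch, under the assumption that frame time is dominated by the number of rows read out (real FPAs add fixed per-frame overheads, so treat this as an upper bound; the numbers are illustrative):

```python
def windowed_fps(full_fps=1700.0, full_rows=512, window_rows=128):
    """Estimate the frame rate of a row-windowed readout, assuming frame
    time scales linearly with the number of rows read out."""
    return full_fps * full_rows / window_rows

print(windowed_fps())  # ~6800 fps for a single 128-row sub-window
```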
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
SeaVipers- Computer Vision and Inertial Position/Reference Sensor System (CVIPRSS)
2015-08-01
uses an Inertial Measurement Unit (IMU) to detect changes in roll, pitch, and yaw (x-, y-, and z-axis movement). We use a 9DOF Razor IMU from SparkFun... an inertial measurement unit (IMU) and cameras that are hardware synchronized to provide close coupling... light cameras [32]. 4.1.4 Inertial Measurement Unit: To assist the PTU in video stabilization for the camera and aiming the rangefinder, SeaVipers...
Tofte, Josef N; Westerlind, Brian O; Martin, Kevin D; Guetschow, Brian L; Uribe-Echevarria, Bastián; Rungprai, Chamnanni; Phisitkul, Phinit
2017-03-01
To validate the knee, shoulder, and virtual Fundamentals of Arthroscopic Training (FAST) modules on a virtual arthroscopy simulator via correlations with arthroscopy case experience and postgraduate year. Orthopaedic residents and faculty from one institution performed a standardized sequence of knee, shoulder, and FAST modules to evaluate baseline arthroscopy skills. Total operation time, camera path length, and composite total score (metric derived from multiple simulator measurements) were compared with case experience and postgraduate level. Values reported are Pearson r; alpha = 0.05. 35 orthopaedic residents (6 per postgraduate year), 2 fellows, and 3 faculty members (2 sports, 1 foot and ankle), including 30 male and 5 female residents, were voluntarily enrolled March to June 2015. Knee: training year correlated significantly with year-averaged knee composite score, r = 0.92, P = .004, 95% confidence interval (CI) = 0.84, 0.96; operation time, r = -0.92, P = .004, 95% CI = -0.96, -0.84; and camera path length, r = -0.97, P = .0004, 95% CI = -0.98, -0.93. Knee arthroscopy case experience correlated significantly with composite score, r = 0.58, P = .0008, 95% CI = 0.27, 0.77; operation time, r = -0.54, P = .002, 95% CI = -0.75, -0.22; and camera path length, r = -0.62, P = .0003, 95% CI = -0.8, -0.33. Shoulder: training year correlated strongly with average shoulder composite score, r = 0.90, P = .006, 95% CI = 0.81, 0.95; operation time, r = -0.94, P = .001, 95% CI = -0.97, -0.89; and camera path length, r = -0.89, P = .007, 95% CI = -0.95, -0.80. Shoulder arthroscopy case experience correlated significantly with average composite score, r = 0.52, P = .003, 95% CI = 0.2, 0.74; strongly with operation time, r = -0.62, P = .0002, 95% CI = -0.8, -0.33; and camera path length, r = -0.37, P = .044, 95% CI = -0.64, -0.01, by training year. FAST: training year correlated significantly with 3 combined FAST activity average composite scores, r = 0.81, P = .0279, 95% CI = 0.65, 0.90; operation times, r = -0.86, P = .012, 95% CI = -0.93, -0.74; and camera path lengths, r = -0.85, P = .015, 95% CI = -0.92, -0.72. Total arthroscopy cases performed did not correlate significantly with overall FAST performance. We found significant correlations between both training year and knee and shoulder arthroscopy experience when compared with performance as measured by composite score, camera path length, and operation time during a simulated diagnostic knee and shoulder arthroscopy, respectively. Three FAST activities demonstrated significant correlations with training year but not arthroscopy case experience as measured by composite score, camera path length, and operation time. We attempt to validate an arthroscopy simulator that could be used to supplement arthroscopy skills training for orthopaedic residents. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tang, Fanjie; Ma, Shuqing; Yang, Ling; Du, Chuanyao; Tang, Yingjie
2016-10-01
According to Koschmieder's law, a mathematical model of the contrast between a single black object and the sky background is established. Based on this principle, we built a black-target visiometer system, with a relatively simple structure and automated operation, that derives visibility from photographs of a black object taken with an industrial camera. In this study, three commercial visibility instruments (a forward scatter meter, the CJB-3A, and two atmospheric transmission meters, the LT31 and VM100) were compared with the black-target visiometer system. Our results show that, within visibility ranges of up to 10 km: 1) all of the instruments agree well at low visibility and poorly at visibilities exceeding 5 km; 2) the forward scattering instrument is biased high at low visibility because particle absorption is not accounted for; and 3) the best agreement with the black-target method was obtained with the simple transmissometer rather than the forward scatter instrument or the hybrid transmissometer.
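Koschmieder's law reduces the black-target measurement to one contrast ratio per photograph. A minimal sketch of the underlying arithmetic, assuming a perfectly black target and the conventional 5% contrast threshold for meteorological optical range (all numbers illustrative):

```python
import math

def visibility_from_black_target(target_luminance, sky_luminance,
                                 distance_m, contrast_threshold=0.05):
    """Koschmieder's law: the apparent contrast of a black object against
    the horizon sky decays as C(d) = C0 * exp(-sigma * d), with inherent
    contrast |C0| = 1 for an ideally black target.  Invert for sigma,
    then convert to visibility via the contrast threshold."""
    contrast = abs(target_luminance - sky_luminance) / sky_luminance
    sigma = -math.log(contrast) / distance_m          # extinction, 1/m
    return -math.log(contrast_threshold) / sigma      # approx. 3.0/sigma

# A target 1 km away appearing at 70% of the sky luminance -> ~2.5 km
print(visibility_from_black_target(70.0, 100.0, 1000.0))
```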
Unusual Light in Dark Space Revealed by Los Alamos, NASA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smidt, Joseph
2018-01-16
By looking at the dark spaces between visible galaxies and stars, the NASA/JPL CIBER sounding rocket experiment has produced data that could redefine what constitutes a galaxy. CIBER, the Cosmic Infrared Background Experiment, is designed to understand the physics going on between visible stars and galaxies. The relatively small, sub-orbital rocket carries a camera that snaps pictures of the night sky in near-infrared wavelengths, between 1.2 and 1.6 millionths of a meter. Scientists take the data, remove all the known visible stars and galaxies, and quantify what is left.
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
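The camera model being fitted here is, to first order, a fish-eye projection. A minimal sketch of the pixel-to-sky mapping under an ideal equidistant model r = f·theta; the astrometric fit to star positions would add the distortion terms that real lenses need (the focal constant f and the image centre are assumed inputs):

```python
import numpy as np

def fisheye_pixel_to_sky(x, y, cx, cy, f):
    """Map a pixel (x, y) to (zenith angle, azimuth) for an ideal
    equidistant fish-eye, where radial distance r = f * theta."""
    dx, dy = x - cx, y - cy
    theta = np.hypot(dx, dy) / f       # zenith angle, radians
    azimuth = np.arctan2(dy, dx)       # azimuth, radians
    return theta, azimuth
```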
2010-10-01
open nephron-sparing surgery: a single institution experience. J Urol 2005; 174: 855. Bhayani SB et al. Laparoscopic partial... noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. Materials and Methods: We analyzed select... TITLE AND SUBTITLE: Visual Enhancement of Laparoscopic Partial Nephrectomy With 3-Charge Coupled Device Camera: Assessing Intraoperative Tissue
Sellers and Fossum on the end of the OBSS during EVA1 on STS-121 / Expedition 13 joint operations
2006-07-08
STS121-323-011 (8 July 2006) --- Astronauts Piers J. Sellers and Michael E. Fossum, STS-121 mission specialists, work in tandem on Space Shuttle Discovery's Remote Manipulator System/Orbiter Boom Sensor System (RMS/OBSS) during the mission's first scheduled session of extravehicular activity (EVA). Also visible on the OBSS are the Laser Dynamic Range Imager (LDRI), Intensified Television Camera (ITVC) and Laser Camera System (LCS).
Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.
Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki
2017-01-01
Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. Eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded at 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and at 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases of each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions, without technical expertise. The detail of the surgical picture in the 4K system was greatly improved over that of the conventional pictures, and the visual effect for surgical education was significantly improved. Motion pictures could be stored for approximately 11 h on 512 GB SD memory. The total price of this system was USD 8000, which is very low compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back the surgical field with high-definition visibility on a 4K monitor and is a low-cost, high-performing alternative for surgical facilities.
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. Reaching this goal, however, requires several stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
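Two of the processing stages named above are compact enough to sketch: a single-scale Retinex enhancement (log image minus the log of a Gaussian-blurred illumination estimate) and weighted-sum fusion. This is a generic sketch, not NASA's DSP implementation; the sigma and the weight are illustrative values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Single-scale Retinex: remove a smooth illumination estimate in the
    log domain to enhance local contrast."""
    img = image.astype(np.float64) + 1.0              # avoid log(0)
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def fuse(a, b, w=0.5):
    """Pixelwise weighted-sum fusion of two registered images."""
    return w * a + (1.0 - w) * b
```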
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit time of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near-IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning that uses a linear-array silicon charge coupled device (CCD) based Focal Plane Array (FPA). An on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA over the entire mission life. Four LEDs are operated in constant current mode, and 16 different light intensity levels are generated by electronically changing the CCD exposure throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP=650nm) for the development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant current mode.
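Because the LED current is held constant, the 16 radiometric levels come entirely from exposure control, so the level set is straightforward to generate. A sketch of that arithmetic, assuming signal scales linearly with exposure time (the reference exposure below is an invented value, not a flight parameter):

```python
import numpy as np

def calibration_exposures(t_ref_ms, frac_ref=0.60, n_levels=16,
                          frac_min=0.20, frac_max=0.60):
    """Exposure times yielding n_levels signal steps between frac_min and
    frac_max of CCD saturation, assuming signal is proportional to
    exposure time at constant LED current."""
    fractions = np.linspace(frac_min, frac_max, n_levels)
    return fractions * t_ref_ms / frac_ref

# If 10 ms reaches 60% of saturation, the 16 exposures span 3.3-10 ms.
print(calibration_exposures(t_ref_ms=10.0))
```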
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)
NASA Technical Reports Server (NTRS)
Frank, L. A.
1997-01-01
The Visible Imaging System (VIS) on the Polar spacecraft of the NASA Goddard Space Flight Center was launched into orbit about Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far-ultraviolet, global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October 1996 and April 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The VIS instrument's operational health continues to be excellent. Since launch, all systems have operated nominally, with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS in order to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the sliprings of the despun platform mechanism. These changes have worked very well. The VIS, and in particular the VIS Earth Camera, is fully operational and will continue to acquire global auroral images as the sun progresses toward solar maximum conditions after the turn of the century.
2013-07-17
These craters on Tharsis were first visible as new dark spots observed by the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter, which can view much larger areas, and were then imaged by HiRISE for a close-up look.
2000-11-21
This image is one of seven from the narrow-angle camera on NASA's Cassini spacecraft assembled as a brief movie of cloud movements on Jupiter. The smallest features visible are about 500 kilometers (about 300 miles) across.
New Horizons Tracks an Asteroid
2007-04-02
The two spots in this image are a composite of two images of asteroid 2002 JF56 taken on June 11 and June 12, 2006, with the Multispectral Visible Imaging Camera component of the New Horizons Ralph imager.
2012-09-06
Tracks from the first drives of NASA's Curiosity rover are visible in this image captured by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The rover is seen where the tracks end.
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.
2004-04-01
Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.
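Sustaining kHz-range update rates is only feasible if the per-frame image processing is extremely cheap. As a toy illustration of the kind of operation that maps naturally onto an FPGA pipeline, here is a dark-pupil detector built from a threshold and a centroid (the threshold value is an assumption; the paper's actual algorithm is more elaborate):

```python
import numpy as np

def pupil_centroid(frame, threshold=40):
    """Locate a dark pupil in a 2-D uint8 grayscale frame by intensity
    thresholding followed by centroiding of the dark region."""
    ys, xs = np.nonzero(frame < threshold)
    if xs.size == 0:
        return None                     # no dark region found
    return float(xs.mean()), float(ys.mean())
```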
FAST CHOPPER BUILDING, TRA665. DETAIL OF STEEL DOOR ENTRY TO ...
FAST CHOPPER BUILDING, TRA-665. DETAIL OF STEEL DOOR ENTRY TO LOWER LEVEL. CAMERA FACING NORTH. INL NEGATIVE NO. HD42-1. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion monitoring. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. If improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.
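The quoted 2.3 mm ground sampling distance follows directly from the camera geometry. A sketch of the relation; the pixel pitch and focal length below are assumed GoPro-like values chosen to reproduce the stated figure, not parameters reported in the paper:

```python
def ground_sampling_distance(height_m, pixel_pitch_um, focal_mm):
    """Nadir ground sampling distance: GSD = H * pixel_pitch / focal."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# 4 m pole, 1.55 um pixels, 2.7 mm focal length -> ~2.3 mm at nadir
print(ground_sampling_distance(4.0, 1.55, 2.7) * 1000, "mm")
```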
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
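The incremental, per-event structure of the method can be conveyed with a deliberately simplified toy: each event nudges the current estimate toward consistency with that single observation. The sketch below updates only a 2-D translation; the paper's method updates a full 3-D pose by combining 3D and 2D criteria, and the gain and coordinates here are invented:

```python
import numpy as np

def event_pose_update(pose, event_xy, model_point_xy, gain=0.1):
    """Toy per-event update: nudge a 2-D translation estimate so the
    projected model point moves toward the observed event location."""
    residual = np.asarray(event_xy) - (np.asarray(model_point_xy) + pose)
    return pose + gain * residual

pose = np.zeros(2)
for ev in [(1.0, 0.5), (1.1, 0.4), (0.9, 0.6)]:   # synthetic events
    pose = event_pose_update(pose, ev, model_point_xy=(0.0, 0.0))
print(pose)   # estimate drifts toward the event cluster, one event at a time
```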
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved using the proposed method.
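The adaptive execution module is, at its core, a per-frame policy choice between the two odometry back-ends. A toy sketch of such a policy; the inputs, threshold, and decision rule are assumptions for illustration, not the paper's actual criteria:

```python
def select_tracker(motion_px_per_frame, imu_ok, fast_motion_px=12.0):
    """Pick an odometry back-end for the next frame: full visual-inertial
    odometry when inter-frame motion is large and the IMU is healthy,
    otherwise the cheaper optical-flow-based fast visual odometry."""
    if imu_ok and motion_px_per_frame > fast_motion_px:
        return "visual_inertial_odometry"
    return "optical_flow_fast_vo"

print(select_tracker(20.0, imu_ok=True))   # -> visual_inertial_odometry
print(select_tracker(3.0, imu_ok=True))    # -> optical_flow_fast_vo
```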
High speed Infrared imaging method for observation of the fast varying temperature phenomena
NASA Astrophysics Data System (ADS)
Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong
With recent improvements in high-end commercial R&D camera technology, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material that was heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of HIFU transducers were tested, and the predicted temperature increases were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.
Disentangling the outflow and protostars in HH 900 in the Carina Nebula
NASA Astrophysics Data System (ADS)
Reiter, Megan; Smith, Nathan; Kiminki, Megan M.; Bally, John; Anderson, Jay
2015-04-01
HH 900 is a peculiar protostellar outflow emerging from a small, tadpole-shaped globule in the Carina Nebula. Previous Hα imaging with the Hubble Space Telescope (HST)/Advanced Camera for Surveys showed an ionized outflow with a wide opening angle that is distinct from the highly collimated structures typically seen in protostellar jets. We present new narrowband near-IR [Fe II] images taken with the Wide Field Camera 3 on the HST that reveal a remarkably different structure from that seen in Hα. In contrast to the unusual broad Hα outflow, the [Fe II] emission traces a symmetric, collimated bipolar jet with morphology and kinematics more typical of protostellar jets. In addition, new Gemini adaptive optics images reveal near-IR H2 emission coincident with the Hα emission, but not the [Fe II]. Spectra of these three components trace three separate and distinct velocity components: (1) H2 from the slow, entrained molecular gas, (2) Hα from the ionized skin of the accelerating outflow sheath, and (3) [Fe II] from the fast, dense, and collimated protostellar jet itself. Together, these data require a driving source inside the dark globule that remains undetected behind a large column density of material. In contrast, Hα and H2 emission trace the broad outflow of material entrained by the jet, which is irradiated outside the globule. As it gets dissociated and ionized, it remains visible for only a short time after it is dragged into the H II region.
Adaptive optics at the Subaru telescope: current capabilities and development
NASA Astrophysics Data System (ADS)
Guyon, Olivier; Hayano, Yutaka; Tamura, Motohide; Kudo, Tomoyuki; Oya, Shin; Minowa, Yosuke; Lai, Olivier; Jovanovic, Nemanja; Takato, Naruhisa; Kasdin, Jeremy; Groff, Tyler; Hayashi, Masahiko; Arimoto, Nobuo; Takami, Hideki; Bradley, Colin; Sugai, Hajime; Perrin, Guy; Tuthill, Peter; Mazin, Ben
2014-08-01
Current AO observations rely heavily on the AO188 instrument, a 188-element system that can operate in natural or laser guide star (LGS) mode and delivers diffraction-limited images in the near-IR. In its LGS mode, laser light is transported from the solid state laser to the launch telescope by a single-mode fiber. AO188 can feed several instruments: the infrared camera and spectrograph (IRCS), a high contrast imaging instrument (HiCIAO), or an optical integral field spectrograph (Kyoto-3DII). Adaptive optics development in support of exoplanet observations has been and continues to be very active. The Subaru Coronagraphic Extreme-AO (SCExAO) system, which combines extreme-AO correction with advanced coronagraphy, is in the commissioning phase and will greatly increase Subaru Telescope's ability to image and study exoplanets. SCExAO currently feeds light to HiCIAO, and will soon be combined with the CHARIS integral field spectrograph and the fast-frame MKIDs exoplanet camera, which have both been specifically designed for high contrast imaging. SCExAO also feeds two visible-light single-pupil interferometers: VAMPIRES and FIRST. In parallel to these direct imaging activities, a near-IR high precision spectrograph (IRD) is under development for observing exoplanets with the radial velocity technique. Wide-field adaptive optics techniques are also being pursued. The RAVEN multi-object adaptive optics instrument was installed on the Subaru telescope in early 2014. Subaru Telescope is also planning wide-field imaging with ground-layer AO through the ULTIMATE-Subaru project.
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitoring the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic, and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have shifted the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic, and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on uncooled detector arrays that have made their application in UGS possible in terms of cost and power consumption. Visible light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments
NASA Astrophysics Data System (ADS)
Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete
2002-04-01
New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 x 1024, 18 um pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated, and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high frame rate CCDs and cameras will be described, and results will be presented demonstrating high UV sensitivity down to 150 nm.
NASA Technical Reports Server (NTRS)
Barnes, J. C. (Principal Investigator); Smallwood, M. D.; Cogan, J. L.
1975-01-01
The author has identified the following significant results. Of the four black and white S190A camera stations, snowcover is best defined in the two visible spectral bands, due in part to their better resolution. The overall extent of the snow can be mapped more precisely, and the snow within shadow areas is better defined in the visible bands. Of the two S190A color products, the aerial color photography is the better. Because of the contrast in color between snow and snow-free terrain and the better resolution, this product is concluded to be the best overall of the six camera stations for detecting and mapping snow. Overlapping frames permit stereo viewing, which aids in distinguishing clouds from the underlying snow. Because of the greater spatial resolution of the S190B earth terrain camera, areal snow extent can be mapped in greater detail than from the S190A photographs. The snow line elevation measured from the S190A and S190B photographs is reasonable compared to the meager ground truth data available.
Study of the effect of sawteeth on fast ions and neutron emission in MAST using a neutron camera
NASA Astrophysics Data System (ADS)
Cecconello, M.; Sperduti, A.; the MAST team
2018-05-01
The effect of the sawtooth instability on the confinement of fast ions in MAST, and the impact it has on the neutron emission, has been studied in detail using the TRANSP/NUBEAM codes coupled to a full-orbit following code. The sawtooth models in TRANSP/NUBEAM indicate that, on MAST, passing and trapped fast ions are redistributed in approximately equal numbers and at a level that is consistent with the observations. It has not been possible to discriminate between the different sawtooth models, since their predictions are all compatible with the neutron camera observations. Full-orbit calculations of the fast ion motion have been used to estimate the characteristic time scales and energy thresholds that, according to theoretical predictions, govern the fast ion redistribution: no energy threshold for the redistribution of either passing or trapped fast ions was found. The associated characteristic frequencies are, however, comparable with the frequencies of an m = 1, n = 1 perturbation and its harmonics with toroidal mode numbers n = 2, ..., 4, suggesting that on spherical tokamaks, in addition to the classical sawtooth-induced transport mechanisms of fast ions by attachment to the evolving perturbation and the associated E × B drift, a resonance mechanism between the m = 1 perturbation and the fast ion orbits might be at play.
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
1992-03-01
construction were completed and data from blueprints and physical measurements was entered concurrently with the coding of routines for data retrieval. While... desirable for that view to accurately reflect what a person (or camera) would see if they were to stand at the same point in the physical world. To... physical dimensions. A parallel projection does not perform this scaling and is therefore not suitable for our application. B. GENERAL PERSPECTIVE
2011-08-03
Ground-based astronomers will be playing a vital role in NASA's Juno mission. Images from the amateur astronomy community are needed to help the JunoCam instrument team predict what features will be visible when the camera images are taken.
2009-11-03
Bright sunlight on Rhea shows off the cratered surface of Saturn's second largest moon in this image captured by NASA's Cassini orbiter. The image was taken in visible light with the Cassini spacecraft's narrow-angle camera on Sept. 21, 2009.
Phoenix Lander Amid Disappearing Spring Ice
2010-01-11
NASA's Phoenix Mars Lander, with its backshell and heatshield, is visible within this enhanced-color image of the Phoenix landing site taken on Jan. 6, 2010, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
High speed spectral measurements of IED detonation fireballs
NASA Astrophysics Data System (ADS)
Gordon, J. Motos; Spidell, Matthew T.; Pitz, Jeremey; Gross, Kevin C.; Perram, Glen P.
2010-04-01
Several homemade explosives (HMEs) were manufactured and detonated at a desert test facility. Visible and infrared signatures were collected using two Fourier-transform spectrometers, two thermal imaging cameras, a radiometer, and a commercial digital video camera. Spectral emissions from the post-detonation combustion fireball were dominated by continuum radiation. The events were short-lived, decaying in total intensity by an order of magnitude within approximately 300 ms after detonation. The HME detonation produced a dust cloud in the immediate area that surrounded and attenuated the emitted radiation from the fireball. Visible imagery revealed a dark particulate (soot) cloud within the larger surrounding dust cloud. The ejected dust clouds attenuated much of the radiation from the post-detonation combustion fireballs, thereby reducing the signal-to-noise ratio. The poor SNR at later times made it difficult to detect selective radiation from by-product gases on the time scale (~500 ms) over which they have been observed in other HME detonations.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
NASA Technical Reports Server (NTRS)
2004-01-01
The Mars Exploration Rover Opportunity finished observations of the prominent rock outcrop it has been studying during its 51 martian days, or sols, on Mars, and is currently on the hunt for new discoveries. This image from the rover's navigation camera atop its mast features Opportunity's lander, its temporary home during the six-month cruise to Mars. The rover's soil survey traverse plan involves arcing around its landing site, called the Challenger Memorial Station, and over the trench it made on sol 23. In this image, Opportunity is situated about 6.2 meters (about 20.3 feet) from the lander. Rover tracks zig-zag along the surface. Bounce marks and airbag retraction marks are visible around the lander. The calibration target, or sundial, which both rover panoramic cameras use to verify the true colors and brightness of the red planet, is visible on the back end of the rover.
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. Monitors and infrared video cameras measure eye movements without encumbering the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records the movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible. Early results challenge the accepted theory that smooth pursuit, the fluid eye movement that humans and primates have, does not involve the higher brain. NASA results show that eye movement can predict human perceptual performance, that smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and that common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method, and that the automated calibration method can replace the manual one.
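The pairwise step of the method, estimating the essential matrix with the 5-point algorithm and decomposing it into a relative pose, is available off the shelf. A minimal sketch using OpenCV; the matched points, the shared intrinsic matrix K, and the RANSAC threshold are assumptions for illustration, not the paper's configuration:

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Pairwise extrinsic calibration for intrinsically calibrated cameras:
    5-point essential-matrix estimation inside RANSAC, then pose recovery.

    pts1, pts2: (N, 2) float arrays of matched pixel coordinates.
    K: 3x3 intrinsic matrix assumed shared by the camera pair."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # t is a unit direction; metric scale needs external info
```

The unit-norm translation is the usual caveat of essential-matrix calibration: absolute baseline lengths must come from a known distance or object in the scene.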
Schlieren imaging of loud sounds and weak shock waves in air near the limit of visibility
NASA Astrophysics Data System (ADS)
Hargather, Michael John; Settles, Gary S.; Madalis, Matthew J.
2010-02-01
A large schlieren system with exceptional sensitivity and a high-speed digital camera are used to visualize loud sounds and a variety of common phenomena that produce weak shock waves in the atmosphere. Frame rates varied from 10,000 to 30,000 frames/s with microsecond frame exposures. Sound waves become visible to this instrumentation at frequencies above 10 kHz and sound pressure levels in the 110 dB (6.3 Pa) range and above. The density gradient produced by a weak shock wave is examined and found to depend upon the profile and thickness of the shock as well as the density difference across it. Schlieren visualizations of weak shock waves from common phenomena include loud trumpet notes, various impact phenomena that compress a bubble of air, bursting a toy balloon, popping a champagne cork, snapping a wooden stick, and snapping a wet towel. The balloon burst, snapping a ruler on a table, and snapping the towel and a leather belt all produced readily visible shock-wave phenomena. In contrast, clapping the hands, snapping the stick, and the champagne cork all produced wave trains that were near the weak limit of visibility. Overall, with sensitive optics and a modern high-speed camera, many nonlinear acoustic phenomena in the air can be observed and studied.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and to track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance in visible and near-infrared (NIR) bands at high spatial and temporal resolutions, and found that the camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
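The index itself is a one-line, per-pixel computation on the two co-registered bands. A minimal sketch, assuming the camera's visible band serves as the red channel and that radiometric calibration has already been applied:

```python
import numpy as np

def camera_ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - red) / (NIR + red) from co-registered
    near-infrared and visible (red) band images."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```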
NASA Astrophysics Data System (ADS)
Gonzaga, S.; et al.
2011-03-01
ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.
A reaction-diffusion-based coding rate control mechanism for camera sensor networks.
Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki
2010-01-01
A wireless camera sensor network is useful for surveillance and monitoring thanks to its visual coverage and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
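To make the reaction-diffusion idea concrete, here is one explicit update step of a Gray-Scott system on a ring of camera nodes, where the activator concentration at each node could be mapped to its video coding rate. Gray-Scott is a stand-in chosen for illustration; the paper's exact reaction-diffusion model and parameters may differ:

```python
import numpy as np

def reaction_diffusion_step(u, v, du=0.1, dv=0.05, f=0.035, k=0.06):
    """One explicit Gray-Scott step on a 1-D ring of nodes.  u (activator)
    and v (inhibitor) are 1-D arrays, one value per camera node."""
    lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2 * a  # ring Laplacian
    uvv = u * v * v
    u_next = u + du * lap(u) - uvv + f * (1 - u)
    v_next = v + dv * lap(v) + uvv - (f + k) * v
    return u_next, v_next
```

Iterating this update and mapping u to a per-node bit rate concentrates coding capacity around nodes where a local perturbation (a detected target) disturbs the u, v equilibrium.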
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterized by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
NASA Astrophysics Data System (ADS)
Saari, H.; Akujärvi, A.; Holmlund, C.; Ojanen, H.; Kaivosoja, J.; Nissinen, A.; Niemeläinen, O.
2017-10-01
The accurate determination of the quality parameters of crops requires a spectral range from 400 nm to 2500 nm (Kawamura et al., 2010; Thenkabail et al., 2002). Presently, hyperspectral imaging systems that cover this wavelength range consist of several separate hyperspectral imagers, and the system weight is from 5 to 15 kg. In addition, the cost of Short Wave Infrared (SWIR) cameras is high (about 50 k€). VTT has previously developed compact hyperspectral imagers for drones and CubeSats for the visible and very near infrared (VNIR) spectral ranges (Saari et al., 2013; Mannila et al., 2013; Näsilä et al., 2016). Recently VTT has started to develop a hyperspectral imaging system that enables imaging simultaneously in the visible, VNIR, and SWIR spectral bands. The system can be operated from a drone, on a camera stand, or attached to a tractor. The targeted main applications of the DroneKnowledge hyperspectral system are grass, peas, and cereals. In this paper the characteristics of the built system are briefly described. The system was used for spectral measurements of wheat, several grass species and pea plants fixed to the camera mount in test fields in Southern Finland and in the greenhouse. The wheat, grass and pea field measurements were also carried out using the system mounted on the tractor. The work is part of the Finnish nationally funded project "DroneKnowledge - Towards knowledge based export of small UAS remote sensing technology".
Fireball Observations in Visible and Sodium Bands
NASA Astrophysics Data System (ADS)
Fletcher, Sandra
On November 17th at 1:32 am MST, a large Leonid fireball was simultaneously imaged by two experiments, a visible-band CCD camera and a 590 nm filtered-band equi-angle fisheye and telecentric lens assembly. The visible-band camera, ROTSE (Robotic Optical Transient Search Experiment), is a two-by-two f/1.9 telephoto lens array with 2k x 2k Thomson CCDs and is located at 35.87 N, 106.25 W at an altitude of 2115 m. One-minute exposures along the radiant were taken of the event for 30 minutes after the initial explosion. The sodium-band experiment was located at 35.29 N, 106.46 W at an altitude of 1860 m. It took ninety-second exposures and captured several events throughout the night. Triangulation from two New Mexico sites resulted in an altitude of 83 km over Wagon Mound, NM. Two observers present at the ROTSE site saw a green flash and a persistent glow up to seven minutes after the explosion. Cataloging of all sodium trails for comparison with lidar and infrasonic measurements is in progress. The raw data from both experiments and the atmospheric chemistry interpretation of them will be presented.
The advanced linked extended reconnaissance and targeting technology demonstration project
NASA Astrophysics Data System (ADS)
Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle
2007-06-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator to recognize poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine generated information display requirements are presented with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
Fast sub-electron detectors review for interferometry
NASA Astrophysics Data System (ADS)
Feautrier, Philippe; Gach, Jean-Luc; Bério, Philippe
2016-08-01
New disruptive technologies are now emerging for detectors dedicated to interferometry. The detectors needed for this kind of application must combine conflicting characteristics: the detector noise must be very low, especially when the signal is dispersed, but at the same time the detector must sample the fast temporal characteristics of the signal. This paper describes the new fast, low-noise technologies that have recently been developed for interferometry and adaptive optics. The first technology is Avalanche PhotoDiode (APD) infrared arrays made of HgCdTe. This paper presents the two programs that have been developed in that field: the Selex Saphira 320x256 [1] and the 320x255 RAPID detector developed by Sofradir/CEA LETI in France [2], [3], [4]. The status of these two programs and future developments are presented. Sub-electron noise can now be achieved in the infrared using this technology. The exceptional characteristics of HgCdTe APDs are due to nearly exclusive impact ionization of the electrons, which is why these devices have been called "electron avalanche photodiodes" or e-APDs. These characteristics have inspired a large effort in developing focal plane arrays using HgCdTe APDs for low-photon-number applications such as active imaging in gated mode (2D) and/or with direct time-of-flight detection (3D imaging) and, more recently, passive imaging for infrared wavefront correction and fringe tracking in astronomical observations. In addition, a commercial camera solution called C-RED, based on the Selex Saphira and commercialized by First Light Imaging [5], is presented here. Some groups are also working with instruments in the visible. In that case, another disruptive technology is showing outstanding performance: the Electron Multiplying CCDs (EMCCD) developed mainly by e2v technologies in the UK. The OCAM2 camera, commercialized by First Light Imaging [5], uses the 240x240 EMCCD from e2v and is successfully implemented on the VEGA instrument at the CHARA interferometer (US) by the Lagrange laboratory of Observatoire de la Cote d'Azur. By operating the detector at a gain of 1000, the readout noise is as low as 0.1 e-, and data can be analyzed with better contrast in photon counting mode.
NASA Astrophysics Data System (ADS)
Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim
2016-04-01
Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.
2015-04-13
and receiver optimal lighting configuration should be determined and evaluated in dusk, twilight and full dark lunar illumination periods. Degraded...should consist of sunset to nautical twilight. These conditions provide poor illumination for visible cameras, but high for IR ones. Night conditions
Covering Jupiter from Earth and Space
2011-08-03
Ground-based astronomers will play a vital role in NASA's Juno mission. Images from the amateur astronomy community are needed to help the JunoCam instrument team predict which features will be visible when the camera's images are taken.
2005-12-19
Using the JMars targeting software, eighth-grade students from Charleston Middle School in Charleston, IL, selected the location -8.37N, 276.66E for capture by the THEMIS visible camera during Mars Odyssey's sixth orbit of Mars on Nov. 22, 2005.
Two Moons and the Pleiades from Mars
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Inverted animation of PIA06340 'Two Moons and the Pleiades from Mars'; annotated animation of PIA06340 'Two Moons and the Pleiades from Mars'. Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit recently settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. In this view, the Pleiades, a star cluster also known as the 'Seven Sisters,' is visible in the lower left corner. The bright star Aldebaran and some of the stars in the constellation Taurus are visible on the right. Spirit acquired this image the evening of martian day, or sol, 590 (Aug. 30, 2005). The image on the right provides an enhanced-contrast view with annotation. Within the enhanced halo of light is an inset of an unsaturated view of Phobos taken a few images later in the same sequence. 'It is incredibly cool to be running an observatory on another planet,' said planetary scientist Jim Bell of Cornell University, Ithaca, N.Y., lead scientist for the panoramic cameras on Spirit and Opportunity. In the annotated animation (figure 2), both martian moons, Deimos on the left and Phobos on the right, travel across the night sky in front of the constellation Sagittarius. Part of Sagittarius resembles an upside-down teapot. In this view, Phobos moves toward the handle and Deimos moves toward the lid. Phobos is the brighter object on the right; Deimos is on the left. Each of the stars in Sagittarius is labeled with its formal name. The inset shows an enlarged, enhanced view of Phobos, shaped rather like a potato with a hole near one end. The hole is the large impact crater Stickney, visible on the moon's upper right limb. On Mars, Phobos would be easily visible to the naked eye at night, but would appear only about one-third as large as the full Moon appears from Earth. Astronauts staring at Phobos from the surface of Mars would notice its oblong, potato-like shape and that it moves quickly against the background stars. Phobos takes only 7 hours, 39 minutes to complete one orbit of Mars. That is so fast, relative to the 24-hour-and-39-minute sol on Mars (the length of time it takes for Mars to complete one rotation), that Phobos rises in the west and sets in the east. Earth's moon, by comparison, rises in the east and sets in the west. The smaller martian moon, Deimos, takes 30 hours, 12 minutes to complete one orbit of Mars. That orbital period is longer than a martian sol, and so Deimos rises, like most solar system moons, in the east and sets in the west. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with the panoramic camera, using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Low-cost panoramic infrared surveillance system
NASA Astrophysics Data System (ADS)
Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George
2017-05-01
A nighttime surveillance concept consisting of a single surface omnidirectional mirror assembly and an uncooled Vanadium Oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offers significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development are given.
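The coordinate conversion described above maps the annular mirror image to an azimuth/elevation panorama. A minimal sketch follows, assuming azimuth maps to image angle and elevation to image radius; the frame size, mirror center, and radius limits are illustrative placeholders, not the system's calibration.

```python
import numpy as np

def unwrap_panorama(img, center, r_min, r_max, out_h=128, out_w=720):
    """Resample an annular omnidirectional image into an azimuth/elevation panorama."""
    cy, cx = center
    az = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)  # azimuth axis
    r = np.linspace(r_min, r_max, out_h)                       # elevation axis
    rr, aa = np.meshgrid(r, az, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]  # nearest-neighbour lookup; interpolation would refine this

frame = np.random.rand(480, 640)            # stand-in for a raw LWIR frame
pano = unwrap_panorama(frame, center=(240, 320), r_min=60, r_max=230)
print(pano.shape)                           # (128, 720): elevation x azimuth
```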
Park, J H; Garipov, G K; Jeon, J A; Khrenov, B A; Kim, J E; Kim, M; Kim, Y K; Lee, C-H; Lee, J; Na, G W; Nam, S; Park, I H; Park, Y-S
2008-12-08
We introduce a novel telescope consisting of a pinhole-like camera with rotatable MEMS micromirrors substituting for pinholes. The design is ideal for observations of transient luminous phenomena or fast-moving objects, such as upper atmospheric lightning and bright gamma ray bursts. The advantage of the MEMS "obscura telescope" over conventional cameras is that it is capable both of searching for events over a wide field of view, and fast zooming to allow detailed investigation of the structure of events. It is also able to track the triggering object to investigate its space-time development, and to center the interesting portion of the image on the photodetector array. We present the proposed system and the test results for the MEMS obscura telescope which has a field of view of 11.3 degrees, sixteen times zoom-in and tracking within 1 ms. (c) 2008 Optical Society of America
Characterization of flotation color by machine vision
NASA Astrophysics Data System (ADS)
Siren, Ari
1999-09-01
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after crushing and grinding the ore. For process control, flotation plants and devices are equipped with conventional and specialized sensors. However, certain variables are left to the visual observation of the operator, such as the color of the froth and the size of the bubbles in the froth. The ChaCo project (EU Project 24931) was launched in November 1997. In this project a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color-measuring instrument for color inspection of the flotation froth. The visible spectral range is also measured with the RGB camera so that the operators' comments on froth color can be compared with the sphalerite concentration and the process balance. Different ratios of dried sphalerite to iron pyrite were studied to identify the minerals' characteristic spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a suitable filtered camera system or to check the results against the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on the dried mineral colors is used and adapted to the online measuring station. Moving froth bubbles produce total reflections, disturbing the color information; polarization filters are used and the results are reported. The reflectance outside the visible range is also studied and reported.
A method for measuring aircraft height and velocity using dual television cameras
NASA Technical Reports Server (NTRS)
Young, W. R.
1977-01-01
A unique electronic optical technique, consisting of two closed circuit television cameras and timing electronics, was devised to measure an aircraft's horizontal velocity and height above ground without the need for airborne cooperative devices. The system is intended to be used where the aircraft has a predictable flight path and a height of less than 660 meters (2,000 feet) at or near the end of an air terminal runway, but is suitable for greater aircraft altitudes whenever the aircraft remains visible. Two television cameras, pointed at zenith, are placed in line with the expected path of travel of the aircraft. Velocity is determined by measuring the time it takes the aircraft to travel the measured distance between cameras. Height is determined by correlating this speed with the time required to cross the field of view of either camera. Preliminary tests with a breadboard version of the system and a small model aircraft indicate the technique is feasible.
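A minimal sketch of the measurement geometry described above, assuming both cameras point at zenith with a known baseline and field of view: ground speed follows from the camera-to-camera transit time, and height from the time to cross one camera's field of view. The baseline, FOV, and timings below are illustrative numbers, not values from the NASA report.

```python
import math

def velocity_and_height(baseline_m, dt_between_s, fov_deg, t_cross_s):
    """Velocity from the inter-camera transit time; height from the FOV
    crossing time, since the FOV width at height h is 2*h*tan(fov/2)."""
    v = baseline_m / dt_between_s                   # horizontal velocity
    half_fov = math.radians(fov_deg) / 2.0
    h = v * t_cross_s / (2.0 * math.tan(half_fov))  # solve width = v * t_cross
    return v, h

v, h = velocity_and_height(baseline_m=300.0, dt_between_s=4.0,
                           fov_deg=30.0, t_cross_s=2.0)
print(f"v = {v:.0f} m/s, height = {h:.0f} m")       # ~75 m/s, ~280 m
```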
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Figure 1: Temperature Map. This image composite shows comet Tempel 1 in visible (left) and infrared (right) light (figure 1). The infrared picture highlights the warm, or sunlit, side of the comet, where NASA's Deep Impact probe later hit. These data were acquired about six minutes before impact. The visible image was taken by the medium-resolution camera on the mission's flyby spacecraft, and the infrared data were acquired by the flyby craft's infrared spectrometer.
Performance Analysis of Visible Light Communication Using CMOS Sensors.
Do, Trong-Hop; Yoo, Myungsik
2016-02-29
This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.
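At its core, the SINR measure formulated in the paper is a ratio of signal power to interference-plus-noise power. A minimal generic sketch follows (the paper's full rolling-shutter formulation is more detailed); the power values are illustrative placeholders.

```python
import math

def sinr_db(p_signal, p_interference, p_noise):
    """Signal to interference plus noise ratio, in decibels."""
    return 10.0 * math.log10(p_signal / (p_interference + p_noise))

# Hypothetical received powers (watts) for one rolling-shutter row:
print(f"SINR = {sinr_db(p_signal=1.0e-6, p_interference=2.0e-7, p_noise=5.0e-8):.1f} dB")
# -> SINR = 6.0 dB
```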
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was not a significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel
2013-01-01
To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when there are not maps available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal. PMID:23271604
Fast camera observations of injected and intrinsic dust in TEXTOR
NASA Astrophysics Data System (ADS)
Shalpegin, A.; Vignitchouk, L.; Erofeev, I.; Brochard, F.; Litnovsky, A.; Bozhenkov, S.; Bykov, I.; den Harder, N.; Sergienko, G.
2015-12-01
Stereoscopic fast camera observations of pre-characterized carbon and tungsten dust injection in TEXTOR are reported, along with the modelling of tungsten particle trajectories with MIGRAINe. Particle tracking analysis of the video data showed significant differences in dust dynamics: while carbon flakes were prone to agglomeration and explosive destruction, spherical tungsten particles followed quasi-inertial trajectories. Although this inertial nature prevented any validation of the force models used in MIGRAINe, comparisons between the experimental and simulated lifetimes provide direct evidence of dust temperature overestimation in dust dynamics codes. Furthermore, wide-view observations of the TEXTOR interior revealed the main production mechanism of intrinsic carbon dust, as well as the location of probable dust remobilization sites.
Thermal-to-visible transducer (TVT) for thermal-IR imaging
NASA Astrophysics Data System (ADS)
Flusberg, Allen; Swartz, Stephen; Huff, Michael; Gross, Steven
2008-04-01
We have been developing a novel thermal-to-visible transducer (TVT), an uncooled thermal-IR imager that is based on a Fabry-Perot Interferometer (FPI). The FPI-based IR imager can convert a thermal-IR image to a video electronic image. IR radiation that is emitted by an object in the scene is imaged onto an IR-absorbing material that is located within an FPI. Temperature variations generated by the spatial variations in the IR image intensity cause variations in optical thickness, modulating the reflectivity seen by a probe laser beam. The reflected probe is imaged onto a visible array, producing a visible image of the IR scene. This technology can provide low-cost IR cameras with excellent sensitivity, low power consumption, and the potential for self-registered fusion of thermal-IR and visible images. We will describe characteristics of requisite pixelated arrays that we have fabricated.
Phoenix Conductivity Probe with Shadow and Toothmark
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's Phoenix Mars Lander inserted the four needles of its thermal and conductivity probe into Martian soil during the 98th Martian day, or sol, of the mission and left it in place until Sol 99 (Sept. 4, 2008). The Robotic Arm Camera on Phoenix took this image on the morning of Sol 99 after the probe was lifted away from the soil. The imprint left by the insertion is visible below the probe, and a shadow showing the probe's four needles is cast on a rock to the left. The thermal and conductivity probe measures how fast heat and electricity move from one needle to an adjacent one through the soil or air between the needles. Conductivity readings can be indicators about water vapor, water ice and liquid water. The probe is part of Phoenix's Microscopy, Electrochemistry and Conductivity suite of instruments. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Lithium granule ablation and penetration during ELM pacing experiments at DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lunsford, R.; Bortolon, A.; Roquemore, A. L.
At DIII-D, lithium granules were radially injected into the plasma at the outer midplane to trigger and pace edge localized modes (ELMs). Granules ranging in size from 300 to 1000 microns were horizontally launched into H-mode discharges with velocities near 100 m/s, and granule to granule injection frequencies less than 500 Hz. While the smaller granules were only successful in triggering ELMs approximately 20% of the time, the larger granules regularly demonstrated ELM triggering efficiencies of greater than 80%. A fast visible camera looking along the axis of injection observed the ablation of the lithium granules. We used the duration of ablation as a benchmark for a neutral gas shielding calculation, and approximated the ablation rate and mass deposition location for the various size granules, using measured edge plasma profiles as inputs. In conclusion, this calculation suggests that the low triggering efficiency of the smaller granules is due to the inability of these granules to traverse the steep edge pressure gradient region and reach the top of the pedestal prior to full ablation.
NASA Astrophysics Data System (ADS)
Kikuchi, Y.; Sakuma, I.; Asai, Y.; Onishi, K.; Isono, W.; Nakazono, T.; Nakane, M.; Fukumoto, N.; Nagata, M.
2016-02-01
Energy transfer processes from ELM-like pulsed helium (He) plasmas with a pulse duration of ~0.1 ms to aluminum (Al) and tungsten (W) surfaces were experimentally investigated by the use of a magnetized coaxial plasma gun device. The surface absorbed energy density of the He pulsed plasma on the W surface measured with a calorimeter was ~0.44 MJ m-2, whereas it was ~0.15 MJ m-2 on the Al surface. A vapor layer in front of the Al surface exposed to the He pulsed plasma was clearly identified by the Al neutral emission line (Al I) measured with a high time resolution spectrometer, and by fast imaging with a high-speed visible camera filtered around the Al I emission line. On the other hand, no clear evaporation in front of the W surface exposed to the He pulsed plasma was observed in the present condition. Discussions on the reduction in the surface absorbed energy density on the Al surface are provided by considering the latent heat of vaporization and radiation cooling due to the Al vapor cloud.
Measuring the retina optical properties using a structured illumination imaging system
NASA Astrophysics Data System (ADS)
Basiri, A.; Nguyen, T. A.; Ibrahim, M.; Nguyen, Q. D.; Ramella-Roman, Jessica C.
2011-03-01
Patients with diabetic retinopathy (DR) may experience a reduction in retinal oxygen saturation (SO2). Close monitoring with a fundus ophthalmoscope can help in the prediction of the progression of disease. In this paper we present a noninvasive instrument based on structured illumination aimed at measuring the retina optical properties including oxygen saturation. The instrument uses two wavelengths, one in the NIR and one in the visible, a fast acquisition camera, and a splitter system that allows for contemporaneous collection of images at two different wavelengths. This scheme greatly reduces eye movement artifacts. Structured illumination was achieved in two different ways. First, several binary illumination masks fabricated by laser micro-machining were used; a near-sinusoidal projection pattern is ultimately achieved at the image plane by appropriate positioning of the binary masks. Second, a sinusoidal pattern printed on a thin plastic sheet was positioned at the image plane of a fundus ophthalmoscope. The system was calibrated using optical phantoms of known optical properties as well as an eye phantom that included a 150 μm capillary vessel containing different concentrations of oxygenated and deoxygenated hemoglobin.
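The abstract does not spell out its demodulation step, but sinusoidal structured illumination is commonly processed with the standard three-phase scheme sketched below; the 120° phase shifts and synthetic fringe data are assumptions for illustration, not the instrument's actual pipeline.

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Recover DC (planar) and AC (modulated) amplitude images from three
    sinusoidal illumination patterns phase-shifted by 120 degrees."""
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    return dc, ac

x = np.linspace(0, 4 * np.pi, 256)
phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
frames = [1.0 + 0.4 * np.sin(x + p) for p in phases]   # synthetic 1-D "images"
dc, ac = demodulate(*frames)
print(f"DC ~ {dc.mean():.2f}, AC modulation ~ {ac.mean():.2f}")  # ~1.00, ~0.40
```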
Speed of Dog Adoption: Impact of Online Photo Traits.
Lampe, Rachel; Witte, Thomas H
2015-01-01
The Internet has radically changed how dogs are advertised for adoption in the United States. This study investigated how different characteristics in dogs' photos presented online affected the speed of their adoptions, as a proof of concept to encourage more research in this field. The study analyzed the first images of 468 adopted young and adult black dogs identified as Labrador Retriever mixed breeds across the United States. A subjective global measure of photo quality had the largest impact on time to adoption. Other photo traits that positively impacted adoption speed included direct canine eye contact with the camera, the dog standing up, the photo being appropriately sized, an outdoor photo location, and a nonblurry image. Photos taken in a cage, dogs wearing a bandana, dogs having a visible tongue, and some other traits had no effect on how fast the dogs were adopted. Improving the quality of online photos of dogs presented for adoption may speed up and possibly increase the number of adoptions, thereby providing a cheap and easy way to help fight the homeless companion animal population problem.
NASA Astrophysics Data System (ADS)
Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan
2017-02-01
Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is strongly affected by noise levels as well as by mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures: they can either mimic real defects in the specimen or, alternatively, hide defects if they overlap. At last year's QNDE we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms targeting specifically the mode patterns in SIR images.
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
Measuring visibility using smartphones
NASA Astrophysics Data System (ADS)
Friesen, Jan; Bialon, Raphael; Claßen, Christoph; Graffi, Kalman
2017-04-01
Spatial information on fog density is an important parameter for ecohydrological studies in cloud forests. The Dhofar cloud forest in Southern Oman exhibits a close interaction between fog, trees, and rainfall. During the three-month monsoon season the trees capture substantial amounts of horizontal precipitation from fog, which increases net precipitation below the tree canopy. As fog density measurements are scarce, a smartphone app was designed to measure visibility. Different smartphone models use a variety of different hardware components, so it is important to assess the developed visibility measurement across a suite of different smartphones. In this study we tested five smartphones/tablets (Google/LG Nexus 5X, Huawei P8 lite, Huawei Y3, HTC Nexus 9, and Samsung Galaxy S4 mini) against a digital camera (Sony DSLR-A900) and visual visibility observations. Visibility was assessed from photos using image entropy, from the number of visible targets, and from WiFi signal strength using RSSI. Results show clear relationships between object distance and fog density, yet a considerable spread across the different smartphone/tablet units is evident.
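A minimal sketch of the image-entropy measure mentioned above, assuming each photo is reduced to the Shannon entropy of its grey-level histogram (a flatter, washed-out foggy scene yields lower entropy); the synthetic arrays stand in for real photos.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an image in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins (0*log 0 = 0)
    return float(-(p * np.log2(p)).sum())

clear = np.random.rand(480, 640)                          # high-contrast stand-in
foggy = np.clip(0.5 + 0.05 * np.random.randn(480, 640), 0, 1)  # low-contrast stand-in
print(f"clear: {image_entropy(clear):.2f} bits, foggy: {image_entropy(foggy):.2f} bits")
```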
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
FAST CHOPPER BUILDING, TRA-665. CONTEXTUAL VIEW: CHOPPER BUILDING IN CENTER. MTR REACTOR SERVICES BUILDING,TRA-635, TO LEFT; MTR BUILDING TO RIGHT. CAMERA FACING WEST. INL NEGATIVE NO. HD42-1. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Impact of New Camera Technologies on Discoveries in Cell Biology.
Stuurman, Nico; Vale, Ronald D
2016-08-01
New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaffney, Kelly
Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
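For orientation, here is a minimal sketch of the locally adaptive Wiener principle in its classic denoising form (pixel-wise shrinkage toward a local mean), not the paper's AWF SR demosaicing algorithm; the window size and noise power are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, window=5, noise_var=None):
    """Pixel-wise Wiener shrinkage toward the local mean: where the local
    variance barely exceeds the noise floor, the output leans on the mean;
    where it is much larger, the original pixel is kept."""
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    if noise_var is None:
        noise_var = var.mean()                      # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

clean = np.tile(np.linspace(0, 1, 128), (128, 1))   # synthetic smooth scene
noisy = clean + 0.05 * np.random.randn(128, 128)
restored = adaptive_wiener(noisy, window=5, noise_var=0.05 ** 2)
print(f"noisy RMSE {np.sqrt(((noisy - clean) ** 2).mean()):.4f} -> "
      f"restored RMSE {np.sqrt(((restored - clean) ** 2).mean()):.4f}")
```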
Kang, Han Gyu; Lee, Ho-Young; Kim, Kyeong Min; Song, Seong-Hyun; Hong, Gun Chul; Hong, Seong Jong
2017-01-01
The aim of this study is to integrate NIR, gamma, and visible imaging tools into a single endoscopic system to overcome the limitation of NIR using gamma imaging, and to demonstrate the feasibility of endoscopic NIR/gamma/visible fusion imaging for sentinel lymph node (SLN) mapping in a small animal. The endoscopic NIR/gamma/visible imaging system consists of a tungsten pinhole collimator, a plastic focusing lens, a BGO crystal (11 × 11 × 2 mm³), a fiber-optic taper (front = 11 × 11 mm², end = 4 × 4 mm²), a 122-cm long endoscopic fiber bundle, an NIR emission filter, a relay lens, and a CCD camera. A custom-made Derenzo-like phantom filled with a mixture of 99mTc and indocyanine green (ICG) was used to assess the spatial resolution of the NIR and gamma images. The ICG fluorophore was excited using a light-emitting diode (LED) with an excitation filter (723-758 nm), and the emitted fluorescence photons were detected with an emission filter (780-820 nm) for a duration of 100 ms. Subsequently, the 99mTc distribution in the phantom was imaged for 3 min. The feasibility of in vivo SLN mapping with a mouse was investigated by injecting a mixture of 99mTc-antimony sulfur colloid (12 MBq) and ICG (0.1 mL) into the right paw of the mouse (C57/B6) subcutaneously. After one hour, NIR, gamma, and visible images were acquired sequentially. Subsequently, the dissected SLN was imaged in the same way as in the in vivo SLN mapping. NIR, gamma, and visible images of the Derenzo-like phantom can be obtained with the proposed endoscopic imaging system. The NIR/gamma/visible fusion image of the SLN showed a good correlation among the NIR, gamma, and visible images both for the in vivo and ex vivo imaging. We demonstrated the feasibility of the integrated NIR/gamma/visible imaging system using a single endoscopic fiber bundle. In future, we plan to investigate miniaturization of the endoscope head and simultaneous NIR/gamma/visible imaging with dichroic mirrors and three CCD cameras. © 2016 American Association of Physicists in Medicine.
Measuring the Temperature of the Ithaca College MOT Cloud using a CMOS Camera
NASA Astrophysics Data System (ADS)
Smucker, Jonathan; Thompson, Bruce
2015-03-01
We present our work on measuring the temperature of Rubidium atoms cooled using a magneto-optical trap (MOT). The MOT uses laser trapping methods and Doppler cooling to trap and cool Rubidium atoms to form a cloud that is visible to a CMOS Camera. The Rubidium atoms are cooled further using optical molasses cooling after they are released from the trap (by removing the magnetic field). In order to measure the temperature of the MOT we take pictures of the cloud using a CMOS camera as it expands and calculate the temperature based on the free expansion of the cloud. Results from the experiment will be presented along with a summary of the method used.
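A minimal sketch of the free-expansion analysis described above: the cloud width obeys sigma^2(t) = sigma0^2 + (kB*T/m)*t^2, so a linear fit of sigma^2 against t^2 yields the temperature. The widths below are synthetic, generated from an assumed ~150 microkelvin cloud, not Ithaca College data.

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K
M_RB87 = 1.443e-25         # mass of 87Rb, kg

def mot_temperature(times_s, sigmas_m, mass=M_RB87):
    """Fit sigma^2 vs t^2; the slope is kB*T/m for a ballistic expansion."""
    slope, _ = np.polyfit(np.asarray(times_s) ** 2,
                          np.asarray(sigmas_m) ** 2, 1)
    return slope * mass / KB

t = np.array([1, 3, 5, 7, 9]) * 1e-3                            # expansion times (s)
sigma = np.sqrt(0.3e-3 ** 2 + (KB * 150e-6 / M_RB87) * t ** 2)  # synthetic widths (m)
print(f"T = {mot_temperature(t, sigma) * 1e6:.0f} microkelvin")  # ~150
```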
Design of an ROV-based lidar for seafloor monitoring
NASA Astrophysics Data System (ADS)
Harsdorf, Stefan; Janssen, Manfred; Reuter, Rainer; Wachowicz, Bernhard
1997-05-01
In recent years, accidents of ships with chemical cargo have led to strong impacts on the marine ecosystem and to risks for pollution control and clean-up teams. In order to enable a fast, safe, and efficient reaction, a new optical instrument has been designed for the inspection of objects on the seafloor by range-gated scattered-light images, as well as for the detection of substances by measuring the laser-induced emission on the seafloor and within the water column. This new lidar is operated as a payload of a remotely operated vehicle (ROV). A Nd:YAG laser is employed as the light source of the lidar. In the video mode, the submarine lidar system uses the 2nd harmonic laser pulse to illuminate the seafloor. Elastically scattered and reflected light is collected with a gateable intensified CCD camera. The beam divergence of the laser matches the camera field of view. Synchronizing the laser emission with the camera gate time suppresses backscattered light from the water column, so that only the light backscattered by the object is recorded. This results in a contrast-enhanced video image which increases the visibility range in turbid water up to four times. Substances seeping out from a container are often invisible in video images because of their low contrast. Therefore, a fluorescence lidar mode is integrated into the submarine lidar: the 3rd harmonic Nd:YAG laser pulse is applied, and the emission response of the water body between ROV and seafloor, and of the seafloor itself, is recorded at variable wavelengths with high depth resolution. A 2D scanner allows targets within the range-gated image to be selected for a fluorescence measurement. The analysis of the time- and spectrally-resolved signals permits the detection, exact location, and classification of fluorescent and/or absorbing substances.
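A minimal sketch of the gate synchronization described above: the camera gate opens only around the round-trip travel time to the target range, so backscatter from the intervening water arrives while the gate is still closed. The target distance and gate depth are illustrative numbers.

```python
C_VACUUM = 2.998e8      # speed of light in vacuum, m/s
N_WATER = 1.33          # approximate refractive index of sea water

def gate_timing_ns(target_m, gate_depth_m=0.5):
    """Gate-open delay and gate width for a target at `target_m` in water."""
    v = C_VACUUM / N_WATER              # light speed in water
    delay = 2.0 * target_m / v          # round trip to the target
    width = 2.0 * gate_depth_m / v      # depth slice kept open
    return delay * 1e9, width * 1e9

delay, width = gate_timing_ns(target_m=5.0)
print(f"open gate {delay:.1f} ns after the laser pulse, for {width:.1f} ns")
# -> ~44.4 ns delay for a 5 m target
```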
Composite x-ray pinholes for time-resolved microphotography of laser compressed targets.
Attwood, D T; Weinstein, B W; Wuerker, R F
1977-05-01
Composite x-ray pinholes having dichroic properties are presented. These pinholes permit both x-ray imaging and visible alignment with micron accuracy by presenting different apparent apertures in these widely disparate regions of the spectrum. Their use is mandatory in certain applications in which the x-ray detection consists of a limited number of resolvable elements whose use one wishes to maximize. Mating the pinhole camera with an x-ray streaking camera is described, along with experiments which spatially and temporally resolve the implosion of laser irradiated targets.
Focal plane alignment and detector characterization for the Subaru prime focus spectrograph
NASA Astrophysics Data System (ADS)
Hart, Murdock; Barkhouser, Robert H.; Carr, Michael; Golebiowski, Mirek; Gunn, James E.; Hope, Stephen C.; Smee, Stephen A.
2014-07-01
We describe the infrastructure being developed to align and characterize the detectors for the Subaru Measurement of Images and Redshifts (SuMIRe) Prime Focus Spectrograph (PFS). PFS will employ four three-channel spectrographs with an operating wavelength range of 3800 Å to 12600 Å. Each spectrograph will be comprised of two visible channels and one near-infrared (NIR) channel, where each channel will use a separate Schmidt camera to image the captured spectra onto its respective detectors. In the visible channels, Hamamatsu 2k × 4k CCDs will be mounted in pairs to create a single 4k × 4k detector, while the NIR channel will use a single Teledyne 4k × 4k H4RG HgCdTe device. The fast f/1.1 optics of the Schmidt cameras will give a shallow depth of focus, necessitating an optimization of the focal plane array flatness. The minimum departure from flatness of the focal plane array for the visible channels is set by the CCD flatness, typically 10 μm peak-to-valley. We will adjust the coplanarity for a pair of CCDs such that the flatness of the array is consistent with the flatness of the detectors themselves. To achieve this we will use an optical non-contact measurement system to measure surface flatness and coplanarity at both ambient and operating temperatures, and use shims to adjust the coplanarity of the CCDs. We will characterize the performance of the detectors for PFS consistent with the scientific goals of the project. To this end we will measure the gain, linearity, full well, quantum efficiency (QE), charge diffusion, charge transfer inefficiency (CTI), and noise properties of these devices. We also desire to better understand the non-linearity of the photon transfer curve for the CCDs, and the charge persistence/reciprocity problems of the HgCdTe devices. To enable the metrology and characterization of these detectors we are building two test cryostats nearly identical in design. The first test cryostat will primarily be used for the coplanarity measurements and sub-pixel illumination testing, and the second will be dedicated to performance characterization requiring flat-field illumination. In this paper we describe the design of the test cryostats. We also describe the system we have built for measuring focal plane array flatness, and examine the precision and error with which it operates. Finally we detail the methods by which we plan to characterize the performance of the detectors for PFS, and provide preliminary results.
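One standard way to measure the gain listed above is the photon-transfer (mean-variance) method on a pair of flat fields: differencing the two flats cancels fixed-pattern noise, and the gain follows from the shot-noise relation between mean signal and variance. A minimal sketch with simulated shot-noise-limited frames follows; the illumination level and true gain are assumptions, not PFS data.

```python
import numpy as np

def ptc_gain(flat1, flat2, bias=0.0):
    """Gain in e-/DN from two flat fields at the same illumination."""
    signal = 0.5 * (flat1.mean() + flat2.mean()) - bias
    var = np.var(flat1 - flat2) / 2.0     # /2: variance of a difference doubles
    return signal / var                   # shot noise: var_e = signal_e

rng = np.random.default_rng(0)
true_gain = 2.0                                        # e-/DN (assumed)
f1 = rng.poisson(20000.0, (512, 512)) / true_gain      # flats in DN,
f2 = rng.poisson(20000.0, (512, 512)) / true_gain      # Poisson photon noise
print(f"estimated gain ~ {ptc_gain(f1, f2):.2f} e-/DN")  # ~2.00
```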
Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter
NASA Technical Reports Server (NTRS)
Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko;
2015-01-01
In the sounding rocket experiment CLASP, a mirror-finished slit is placed near the focal point of the telescope. The light reflected by the mirror surface surrounding the slit is re-imaged by the slit-jaw optical system to form a secondary Lyman-alpha image. This image is used not only in real time during rocket flight to select the pointing direction, but also as scientific data showing the spatial structure of the Lyman-alpha line intensity distribution in the solar chromosphere around the region observed by the spectropolarimeter. The slit-jaw optical system consists of a mirror unit containing two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera, forming an optical system of 1x magnification. The camera is supplied from the United States; all other fabrication and testing was carried out on the Japanese side. The slit-jaw optical system must be installed in a location with low clearance that is difficult to access, so the optical elements that require fine adjustment and determine the optical performance are gathered into the mirror unit. On the other hand, because of the alignment of the solar sensor at the US launch site, the holder containing the Lyman-alpha transmission filter must be removable as a part separate from the mirror unit. To keep the structure simple, the stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce the alignment time. 1: Measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm), and prepare a visible-light filter having the same optical path length in visible light (630 nm). 2: Before mounting the mirror unit on the CLASP structure, place a dummy slit and camera at the prescribed positions on an alignment jig and complete the internal alignment adjustment. 3: Attach the mirror unit and the visible-light filter to the CLASP structure, and adjust the position of the flight camera in visible light until it is in focus. 4: Replace the visible-light filter with the Lyman-alpha transmission filter, and confirm at the Lyman-alpha wavelength (under vacuum) that the required optical performance is achieved. At present, steps up to 3 have been completed, and optical performance satisfying the requirement with sufficient margin has been confirmed in visible light. In addition, by feeding sunlight through the CLASP telescope into the slit-jaw optical system, it has been confirmed that there is no vignetting within the field of view and that the stray-light rejection meets the requirement.
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image-converting tube with a wavelength-sensitive photocathode covering the x-ray to near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon beam deflection using the electro-optic effect, and can replace the current streak camera from the visible to near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), and an optimized optic system may lead to a time resolution better than 1 ns.
Yang, Hualei; Yang, Xi; Heskel, Mary; ...
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared-camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
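As a minimal sketch of the NDVI computation underlying such a camera system (the two co-registered single-band arrays `nir` and `vis` are hypothetical inputs):

```python
import numpy as np

def camera_ndvi(nir, vis):
    """Per-pixel NDVI from co-registered NIR and visible band images."""
    nir = nir.astype(np.float64)
    vis = vis.astype(np.float64)
    denom = nir + vis
    # Guard against division by zero in dark pixels.
    return np.where(denom > 0, (nir - vis) / denom, 0.0)
```

A daily camera-NDVI value for the canopy would then be the mean of this image over a fixed region of interest.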
9. EMPIRE STATE MINE, BOTTOM ORE BIN/SHOOT. TIN ROOF OF ...
9. EMPIRE STATE MINE, BOTTOM ORE BIN/SHOOT. TIN ROOF OF SOUTHERN MOST BUILDING AND UPPER ORE SHOOT VISIBLE. CAMERA POINTED EAST-NORTHEAST. - Florida Mountain Mining Sites, Empire State Mine, West side of Florida Mountain, Silver City, Owyhee County, ID
Neptune Through a Clear Filter
1999-07-25
On July 23, 1989, NASA's Voyager 2 spacecraft took this picture of Neptune through a clear filter on its narrow-angle camera. The image on the right has a latitude and longitude grid added for reference. Neptune's Great Dark Spot is visible on the left.
NASA Astrophysics Data System (ADS)
Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas
2011-03-01
This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging rough images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
Northern California and San Francisco Bay
NASA Technical Reports Server (NTRS)
2000-01-01
The left image of this pair was acquired by MISR's nadir camera on August 17, 2000 during Terra orbit 3545. Toward the top, and nestled between the Coast Range and the Sierra Nevadas, are the green fields of the Sacramento Valley. The city of Sacramento is the grayish area near the right-hand side of the image. Further south, San Francisco and other cities of the Bay Area are visible. On the right is a zoomed-in view of the area outlined by the yellow polygon. It highlights the southern end of San Francisco Bay, and was acquired by MISR's airborne counterpart, AirMISR, during an engineering check-out flight on August 25, 1997. AirMISR flies aboard a NASA ER-2 high-altitude aircraft and contains a single camera that rotates to different view angles. When this image was acquired, the AirMISR camera was pointed 70 degrees forward of the vertical. Colorful tidal flats are visible in both the AirMISR and MISR imagery. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
7. VIEW OF TIP TOP AND PHILLIPS MINES. PHOTO MADE ...
7. VIEW OF TIP TOP AND PHILLIPS MINES. PHOTO MADE FROM THE 'NOTTINGHAM' SADDLE VISIBLE IN PHOTOGRAPHS ID-31-3 AND ID-31-6. CAMERA POINTED NORTHEAST. TIP TOP IS CLEARLY VISIBLE IN UPPER RIGHT; RUNNING A STRAIGHT EDGE THROUGH THE TRUNK LINE OF SMALL TREE IN LOWER RIGHT THROUGH TRUNK LINE OF LARGER TREE WILL DIRECT ONE TO LIGHT AREA WHERE TIP TOP IS LOCATED; BLACK SQUARE IS THE RIGHT WINDOW ON WEST SIDE (FRONT) OF STRUCTURE. PHILLIPS IS VISIBLE BY FOLLOWING TREE LINE DIAGONALLY THROUGH IMAGE TO FAR LEFT SIDE. SULLIVAN IS HIDDEN IN THE TREE TO THE RIGHT OF PHILLIPS. - Florida Mountain Mining Sites, Silver City, Owyhee County, ID
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
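A simplified sketch of the feature-matching and RANSAC stages of such a pipeline, using plain SIFT in place of the paper's A-SIFT and a homography in place of its full sensor model (file names and thresholds are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical inputs: laser-scanner intensity image and camera photograph.
intensity = cv2.imread("scan_intensity.png", cv2.IMREAD_GRAYSCALE)
photo = cv2.imread("camera_visible.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(intensity, None)
kp2, des2 = sift.detectAndCompute(photo, None)

# Lowe ratio test to keep only distinctive matches.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the transform.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
```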
Robust Vision-Based Pose Estimation Algorithm for an UAV with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases a solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.
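To illustrate why a known gravity vector reduces the problem to two reference points: it fixes the camera's roll and pitch, leaving only yaw and translation to be resolved. A hedged sketch of that first step (the axis convention, z forward and y down, is an assumption, not the paper's):

```python
import numpy as np

def roll_pitch_from_gravity(g_cam):
    """Roll and pitch of the camera from the measured gravity direction.

    g_cam: gravity vector in camera coordinates (e.g. from an IMU).
    With these two angles known, only yaw and translation remain, which
    is why two reference points then suffice for external orientation.
    """
    g = np.asarray(g_cam, dtype=float)
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[2])       # tilt of the optical axis from horizontal
    roll = np.arctan2(g[0], g[1])  # rotation about the optical axis
    return roll, pitch
```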
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors by using 10-fold cross validations. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
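A compact sketch of the score-level fusion step (the score vectors and labels below are random placeholders for the dataset described; scikit-learn's LogisticRegression stands in for the paper's binomial logistic regression):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
scores = rng.random((1000, 3))     # CGF, FPB, LDA cross-matched scores
labels = rng.integers(0, 2, 1000)  # 1 = genuine match, 0 = impostor

clf = LogisticRegression()
acc = cross_val_score(clf, scores, labels, cv=10)  # 10-fold cross validation
print(f"mean classification accuracy: {acc.mean():.3f}")
```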
NASA Astrophysics Data System (ADS)
Delaney, John K.; Zeibel, Jason G.; Thoury, Mathieu; Littleton, Roy; Morales, Kathryn M.; Palmer, Michael; de la Rie, E. René
2009-07-01
Reflectance imaging spectroscopy, the collection of images in narrow spectral bands, has been developed for remote sensing of the Earth. In this paper we present findings on the use of imaging spectroscopy to identify and map artist pigments as well as to improve the visualization of preparatory sketches. Two novel hyperspectral cameras, one operating from the visible to near-infrared (VNIR) and the other in the shortwave infrared (SWIR), have been used to collect diffuse reflectance spectral image cubes on a variety of paintings. The resulting image cubes (VNIR 417 to 973 nm, 240 bands, and SWIR 970 to 1650 nm, 85 bands) were calibrated to reflectance and the resulting spectra compared with results from a fiber optics reflectance spectrometer (350 to 2500 nm). The results show good agreement between the spectra acquired with the hyperspectral cameras and those from the fiber reflectance spectrometer. For example, the primary blue pigments and their distribution in Picasso's Harlequin Musician (1924) are identified from the reflectance spectra and agree with results from X-ray fluorescence data and dispersed sample analysis. False color infrared reflectograms, obtained from the SWIR hyperspectral images, of extensively reworked paintings such as Picasso's The Tragedy (1903) are found to give improved visualization of changes made by the artist. These results show that including the NIR and SWIR spectral regions along with the visible provides for a more robust identification and mapping of artist pigments than using visible imaging spectroscopy alone.
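The "calibrated to reflectance" step typically follows the standard flat-field scheme sketched below; this is a common approach offered as an illustration, not necessarily the authors' exact procedure:

```python
import numpy as np

def to_reflectance(raw, dark, white, white_reflectance=0.99):
    """Flat-field reflectance calibration of a spectral image cube.

    raw, dark, white: (rows, cols, bands) arrays holding the scene frame,
    a shutter-closed dark frame, and a frame of a diffuse white standard
    of known reflectance. All inputs are hypothetical placeholders.
    """
    raw = np.asarray(raw, dtype=np.float64)
    denom = np.asarray(white, dtype=np.float64) - dark
    denom[denom == 0] = np.nan   # flag dead pixels instead of dividing by 0
    return white_reflectance * (raw - dark) / denom
```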
Infrared and visible cooperative vehicle identification markings
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.; Raven, Peter N.
2006-05-01
Airborne surveillance helicopters and aeroplanes used by security and defence forces around the world increasingly rely on their visible band and thermal infrared cameras to prosecute operations such as the co-ordination of police vehicles during the apprehension of a stolen car, or direction of all emergency services at a serious rail crash. To perform their function effectively, it is necessary for the airborne officers to unambiguously identify police and the other emergency service vehicles. In the visible band, identification is achieved by placing high contrast symbols and characters on the vehicle roof. However, at the wavelengths at which thermal imagers operate, the dark and light coloured materials have similar low reflectivity and the visible markings cannot be discerned. Hence there is a requirement for a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared, over a large range of viewing angles. In this paper we discuss the design, detailed angle-dependent spectroscopic characterisation and operation of novel visible and infrared vehicle marking materials, and present airborne IR and visible imagery of materials in use.
Ansell, James; Warren, Neil; Wall, Pete; Cocks, Kim; Goddard, Stuart; Whiston, Richard; Stechman, Michael; Scott-Coombes, David; Torkington, Jared
2014-07-01
Ultravision™ is a new device that utilizes electrostatic precipitation to clear surgical smoke. The aim was to evaluate its performance during laparoscopic cholecystectomy. Patients undergoing laparoscopic cholecystectomy were randomized into "active (device on)" or "control (device off)." Three operating surgeons scored the percentage effective visibility and three reviewers scored the percentage of the procedure where smoke was present. All assessors also used a 5-point scale (1 = imperceptible/excellent and 5 = very annoying/bad) to rate visual impairment. Secondary outcomes were the number of smoke-related pauses, camera cleaning, and pneumoperitoneum reductions. Mean results are presented with 95% confidence intervals (CI). In 30 patients (active 13, control 17), the effective visibility was 89.2% (83.3-95.0) for active cases and 71.2% (65.7-76.7) for controls. The proportion of the procedure where smoke was present was 41.1% (33.8-48.3) for active cases and 61.5% (49.0-74.1) for controls. Operating surgeons rated the visual impairment as 2.2 (1.7-2.6) for active cases and 3.2 (2.8-3.5) for controls. Reviewers rated the visual impairment as 2.3 (2.0-2.5) for active cases and 3.2 (2.8-3.7) for controls. In the active group, 23% of procedures were paused to allow smoke clearance compared to 94% of control cases. Camera cleaning was not needed in 85% of active procedures and 35% of controls. The pneumoperitoneum was reduced in 0% of active cases and 88% of controls. Ultravision™ improves visibility during laparoscopic surgery and reduces delays in surgery for smoke clearance and camera cleaning.
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because infrared cages and electric heaters do not emit visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve the ability to finely monitor the spacecraft and to document test progress under conditions of ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified CCD (ICCD) camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electric heater controls the temperature of the ICCD camera. Performance validation tests showed that the system could operate in a vacuum thermal environment of 1.33×10⁻³ Pa and 100 K shroud temperature in the space environment simulator, and that its working temperature was maintained at 5 °C during a two-day test. The night vision imaging system achieved a resolving power of 60 lp/mm.
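The multi-frame accumulation mentioned above exploits the fact that averaging N aligned frames improves shot-noise-limited SNR roughly as sqrt(N). A minimal sketch (the frame stack is a hypothetical input):

```python
import numpy as np

def accumulate_frames(frames):
    """Average a stack of co-registered low-light frames.

    frames: (N, rows, cols) array from the ICCD camera; the mean frame has
    roughly sqrt(N) better SNR when noise is uncorrelated between frames.
    """
    return np.asarray(frames, dtype=np.float64).mean(axis=0)
```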
Garcia, Jair E; Greentree, Andrew D; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G
2014-01-01
The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels.
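One simple way to recover linear values, offered as a sketch rather than the paper's full methodology: fit a power-law response d = a·L^g to calibration measurements in log-log space, then invert it (the calibration numbers below are invented placeholders):

```python
import numpy as np

# Known relative radiance levels and the camera's mean digital response.
radiance = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.0])       # placeholder data
digital = np.array([30.0, 48.0, 77.0, 124.0, 199.0, 230.0])

# Fit d = a * L**g  <=>  log d = g*log L + log a.
g, log_a = np.polyfit(np.log(radiance), np.log(digital), 1)

def linearize(pixel_values):
    """Invert the fitted response: values proportional to scene radiance."""
    return (np.asarray(pixel_values, dtype=float) / np.exp(log_a)) ** (1.0 / g)
```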
Measuring SO2 ship emissions with an ultraviolet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2014-05-01
Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, measuring emissions from more than 10 different container and cargo ships. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction of single-filter UV imagery, a requirement for fast-sampling (> 10 Hz) from a single camera. Despite the ease of use and ability to determine SO2 emission rates from the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that currently the technology needs further development to serve as a method to monitor ship emissions for regulatory purposes. A dual-camera system or a single, dual-filter camera is required in order to properly correct for the effects of particulates in ship plumes.
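The emission-rate retrieval described here reduces to integrating the SO2 column density along a transect across the plume and multiplying by the plume speed. A schematic implementation (all arguments are placeholders):

```python
import numpy as np

def so2_emission_rate(column_kg_m2, pixel_size_m, plume_speed_m_s):
    """SO2 emission rate from one image transect across the plume.

    column_kg_m2: retrieved path concentrations (kg m^-2) along a line
    perpendicular to transport; pixel_size_m: metres per pixel at the plume
    distance; plume_speed_m_s: speed from camera feature tracking or wind data.
    """
    line_integral = np.sum(column_kg_m2) * pixel_size_m   # kg m^-1
    return line_integral * plume_speed_m_s                # kg s^-1
```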
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
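The wavelength-dependent radial distortion mentioned for the WAC can be pictured with a one-term radial model whose coefficient is looked up per spectral band; the form and the per-band values below are illustrative assumptions, not the published LROC model:

```python
import numpy as np

# Hypothetical per-band radial distortion coefficients, k1(lambda).
K1_PER_BAND_NM = {415: -1.2e-8, 566: -1.1e-8, 604: -1.0e-8}

def distort(x, y, band_nm):
    """One-term radial distortion about the principal point (x, y in pixels)."""
    k1 = K1_PER_BAND_NM[band_nm]
    factor = 1.0 + k1 * (x**2 + y**2)
    return x * factor, y * factor
```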
Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.
Liu, Xinyang; Rice, Christina E; Shekhar, Raj
2017-10-01
The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two moveable parts, the telescope and the camera head, creates a rotation offset between the actual and the projection of an object in the camera image. A calibration method tailored to compensate such offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
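The rotation offset the calibration compensates can be modelled as a 2-D rotation of projected points about the estimated rotation center; a minimal stand-in for the paper's camera-matrix update formulas (the sign of the angle encodes clockwise vs counterclockwise rotation):

```python
import numpy as np

def rotate_about_center(points, center, theta_rad):
    """Rotate projected image points about the rotation center.

    points: (N, 2) pixel coordinates; center: (2,) estimated rotation center;
    theta_rad: telescope/camera-head relative angle from the EM sensors.
    """
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(points, dtype=float) - center) @ R.T + center
```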
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
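A sketch of the per-track Kalman filtering that associates asynchronous events with a tracer; this is a simplified single-camera, constant-velocity version of what the paper does before 3-D reconstruction (all noise settings are tuning placeholders):

```python
import numpy as np

dt = 1e-4                                  # s between processed events (placeholder)
F = np.array([[1, 0, dt, 0],               # state [x, y, vx, vy]: constant velocity
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # an event measures pixel position only
Q = np.eye(4) * 1e-4                       # process noise
R = np.eye(2) * 1.0                        # measurement noise (pixels^2)

def kalman_step(x, P, z):
    """One predict/update cycle for an event at pixel position z (shape (2,))."""
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```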
NASA Astrophysics Data System (ADS)
van der Wal, Daphne; van Dalen, Jeroen; Wielemaker-van den Dool, Annette; Dijkstra, Jasper T.; Ysebaert, Tom
2014-07-01
Intertidal benthic macroalgae are a biological quality indicator in estuaries and coasts. While remote sensing has been applied to quantify the spatial distribution of such macroalgae, it is generally not used for their monitoring. We examined the day-to-day and seasonal dynamics of macroalgal cover on a sandy intertidal flat using visible and near-infrared images from a time-lapse camera mounted on a tower. Benthic algae were identified using supervised, semi-supervised and unsupervised classification techniques, validated with monthly ground-truthing over one year. A supervised classification (based on maximum likelihood, using training areas identified in the field) performed best in discriminating between sediment, benthic diatom films and macroalgae, with highest spectral separability between macroalgae and diatoms in spring/summer. An automated unsupervised classification (based on the Normalised Differential Vegetation Index NDVI) allowed detection of daily changes in macroalgal coverage without the need for calibration. This method showed a bloom of macroalgae (filamentous green algae, Ulva sp.) in summer with > 60% cover, but with pronounced superimposed day-to-day variation in cover. Waves were a major factor in regulating macroalgal cover, but regrowth of the thalli after a summer storm was fast (2 weeks). Images and in situ data demonstrated that the protruding tubes of the polychaete Lanice conchilega facilitated both settlement (anchorage) and survival (resistance to waves) of the macroalgae. Thus, high-frequency, high resolution images revealed the mechanisms for regulating the dynamics in cover of the macroalgae and for their spatial structuring. Ramifications for the mode, timing, frequency and evaluation of monitoring macroalgae by field and remote sensing surveys are discussed.
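The unsupervised NDVI-based detection of daily cover changes comes down to thresholding each day's NDVI image and taking the vegetated fraction; a sketch with a placeholder threshold that would need site-specific tuning:

```python
import numpy as np

def macroalgal_cover_percent(ndvi_image, threshold=0.2):
    """Percent of intertidal-flat pixels classified as macroalgae for one day."""
    return 100.0 * np.mean(ndvi_image > threshold)
```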
FAST CHOPPER BUILDING, TRA665. DETAIL SHOWS UPPER AND LOWER LEVEL ...
FAST CHOPPER BUILDING, TRA-665. DETAIL SHOWS UPPER AND LOWER LEVEL WALLS OF DIFFERING MATERIALS. NOTE DOORWAY TO MTR TO RIGHT OF CHOPPER BUILDING'S CLIPPED CORNER. CAMERA FACING WEST. INL NEGATIVE NO. HD42-1. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-06-30
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
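The core building block of a deep residual CNN like the one used here is the identity-shortcut residual block; a generic PyTorch sketch, not the paper's exact architecture:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut lets gradients bypass the convs
```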
Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.
Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung
2017-10-28
Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
Initial application of a dual-sweep streak camera to the Duke storage ring OK-4 source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A.H.; Yang, B.X.; Litvinenko, V.
1997-08-01
The visible and UV spontaneous emission radiation (SER) from the Duke OK-4 wiggler has been used with a Hamamatsu C5680 dual-sweep streak camera to characterize the stored electron beams. Particle beam energies of 270 and 500 MeV in the Duke storage ring were used in this initial application, with the OK-4 adjusted to generate wavelengths from 500 nm to near 200 nm. The OK-4 magnetic system with its 68 periods provided a much stronger radiation source than a nearby bending magnet source point. Sensitivity to single-bunch, single-turn SER was shown down to 4 μA beam current at λ = 450 nm. The capability of seeing second passes in the FEL resonator at a wavelength near 200 nm was used to assess the cavity length versus orbit length. These tests (besides supporting preparation for UV-visible SR FEL startups) are also relevant to possible diagnostic techniques for single-pass FEL prototype facilities.
NASA Astrophysics Data System (ADS)
Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.
2016-02-01
Near infrared (NIR) dynamic scene projection systems are used to perform hardware-in-the-loop (HWIL) testing of a unit under test operating in the NIR band. A common and complex requirement for this class of units is a dynamic scene that is spatio-temporally variant. In this paper we apply and investigate active external modulation of a NIR laser over different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude, by pulse width modulation (PWM) of the mirror elements, as well as the spatio-temporal pattern. Desired modulation and coding of a highly stable, high power visible laser (red, at 640 nm) and a NIR laser (diode, at 976 nm) were achieved using combinations of different DMD-based optical masks. These versatile active spatial coding strategies, for both low and high frequencies in the kHz range, for irradiance of different targets were generated by our system and recorded using VIS-NIR fast cameras. The temporally-modulated laser pulse traces were measured using an array of fast response photodetectors. Finally, using a high resolution spectrometer, we evaluated the NIR dynamic scene projection system response in terms of preserving the wavelength and band spread of the NIR source after projection.
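The PWM step a DMD controller performs can be pictured as bitplane decomposition: each bitplane is displayed for a time proportional to its weight. A sketch of the decomposition (in practice the controller electronics, not user code, do this):

```python
import numpy as np

def to_bitplanes(frame8):
    """Split an 8-bit scene frame into 8 binary DMD bitplanes.

    Displaying bitplane k for a duration proportional to 2**k reproduces the
    grey level of each pixel as a time-averaged mirror duty cycle (PWM).
    """
    frame8 = np.asarray(frame8, dtype=np.uint8)
    return [((frame8 >> k) & 1) for k in range(8)]
```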
Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement
NASA Astrophysics Data System (ADS)
Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing
2010-10-01
In order to design and manufacture the measuring equipment for the Five-hundred-meter Aperture Spherical Radio Telescope (FAST) active reflector, measurement of each reflector panel's surface shape was carried out: static measurement of the whole neutral spherical network of nodes was performed, and real-time dynamic measurement of the cable network's deformation was undertaken. In the implementation of FAST, reflector panel surface shape inspection was completed before installation of the eleven-meter reflector panels. A binocular vision system was constructed based on the binocular stereo vision method of machine vision, and the eleven-meter reflector panel surface shape was measured photogrammetrically. The cameras were calibrated with feature points: under a linear camera model, a lighting-spot array was used as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images were collected with the two cameras for digital image processing and analysis; feature points were extracted with a characteristic-point detection algorithm, and the points were matched based on the epipolar constraint. Three-dimensional coordinates of the feature points were reconstructed, and the reflector panel surface shape was established by curve and surface fitting. The error of the reflector panel surface shape was then calculated, realizing automatic measurement of the panel surface shape. The results show that the unit reflector panel surface inspection accuracy was 2.30 mm, within the allowed standard deviation of 5.00 mm. Compared with the required panel machining precision, photogrammetry offers adequate precision and operational feasibility for eleven-meter reflector panel surface shape measurement for FAST.
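Once the two cameras are calibrated and the feature points matched under the epipolar constraint, the 3-D reconstruction step is standard two-view triangulation; a sketch with toy projection matrices and points standing in for the calibrated system:

```python
import cv2
import numpy as np

# Placeholder 3x4 projection matrices from the lighting-spot calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Matched feature points in each image, shape (2, N) as OpenCV expects.
pts1 = np.array([[100.0], [120.0]])
pts2 = np.array([[95.0], [120.0]])

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xN result
X = (Xh[:3] / Xh[3]).T                          # Euclidean 3-D coordinates
print(X)
```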
Development of an Extra-vehicular (EVA) Infrared (IR) Camera Inspection System
NASA Technical Reports Server (NTRS)
Gazarik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Pandolf, John; Jenkins, Rusty; Yates, Rusty
2006-01-01
Designed to fulfill a critical inspection need for the Space Shuttle Program, the EVA IR Camera System can detect cracks and subsurface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System (TPS). The EVA IR Camera performs this detection by taking advantage of the natural thermal gradients induced in the RCC by solar flux and thermal emission from the Earth. This instrument is a compact, low-mass, low-power solution (1.2 cm3, 1.5 kg, 5.0 W) for TPS inspection that exceeds existing requirements for feature detection. Taking advantage of ground-based IR thermography techniques, the EVA IR Camera System provides the Space Shuttle program with a solution that can be accommodated by the existing inspection system. The EVA IR Camera System augments the visible and laser inspection systems and finds cracks and subsurface damage that is not measurable by the other sensors, and thus fills a critical gap in the Space Shuttle's inspection needs. This paper discusses the on-orbit RCC inspection measurement concept and requirements, and then presents a detailed description of the EVA IR Camera System design.
Observation of runaway electrons by infrared camera in J-TEXT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, R. H.; Chen, Z. Y., E-mail: zychen@hust.edu.cn; Zhang, M.
2016-11-15
When the energy of confined runaway electrons approaches several tens of MeV, the runaway electrons can emit synchrotron radiation in the range of infrared wavelength. An infrared camera working in the wavelength of 3-5 μm has been developed to study the runaway electrons in the Joint Texas Experimental Tokamak (J-TEXT). The camera is located in the equatorial plane looking tangentially into the direction of electron approach. The runaway electron beam inside the plasma has been observed at the flattop phase. With a fast acquisition of the camera, the behavior of the runaway electron beam has been observed directly during the runaway current plateau following the massive gas injection triggered disruptions.
Kidd, David G; McCartt, Anne T
2016-02-01
This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating it to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.
2000-01-09
JSC2003-E-15407 (9 Jan. 1990) --- A 35mm still camera located in the umbilical well of the Space Shuttle Columbia took this photograph of the external fuel tank (ET) after it was dropped from the launch stack as the shuttle headed for Earth-orbit on Jan. 9, 1990 for the STS-32 mission. Several large divots are visible near the forward ET/orbiter bipod and smaller divots are visible on the H2 tank acreage. The vertical streak and the horizontal bands were the results of repairs done prior to launch.
2005-10-04
During its time in orbit, Cassini has spotted many beautiful cat's eye-shaped patterns like the ones visible here. These patterns occur in places where the winds and the atmospheric density at one latitude are different from those at another latitude. The opposing east-west flowing cloud bands are the dominant patterns seen here and elsewhere in Saturn's atmosphere. Contrast in the image was enhanced to aid the visibility of atmospheric features. The image was taken with the Cassini spacecraft wide-angle camera on Aug. 20, 2005. http://photojournal.jpl.nasa.gov/catalog/PIA07600
NASA Astrophysics Data System (ADS)
Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie
2015-10-01
CILAS, a subsidiary of Airbus Defense and Space, develops, manufactures and sells laser-based optronics equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capacity at long observation range in adverse conditions. In order to ease the deployment of active imaging systems, which are often complex and expensive, CILAS suggests a new concept. It consists of the association of two apparatuses working together. On one side, a patented versatile laser platform enables high peak power laser illumination for long range observation. On the other side, a small camera add-on works as a fast optical switch to select only photons with a specific time of flight. The association of the versatile illumination platform and the fast optical switch presents itself as an independent body, a so-called "flash module", giving virtually any passive observation system gated active imaging capacity in NIR and SWIR.
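The time-of-flight selection performed by the fast optical switch follows directly from the round-trip time t = 2R/c; a small helper showing how a gate delay and width map to a range slab (function and names are illustrative):

```python
C_M_S = 299_792_458.0  # speed of light

def gate_for_range(r_min_m, r_max_m):
    """Gate delay and width (seconds) selecting echoes from [r_min, r_max]."""
    delay = 2.0 * r_min_m / C_M_S
    width = 2.0 * r_max_m / C_M_S - delay
    return delay, width

# Example: a 30 m slab around a target 1 km away.
print(gate_for_range(985.0, 1015.0))  # ~6.57e-6 s delay, ~2.0e-7 s width
```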
The optical design of a visible adaptive optics system for the Magellan Telescope
NASA Astrophysics Data System (ADS)
Kopon, Derek
The Magellan Adaptive Optics system will achieve first light in November of 2012. This AO system contains several subsystems including the 585-actuator concave adaptive secondary mirror, the Calibration Return Optic (CRO) alignment and calibration system, the CLIO 1-5 μm IR science camera, the movable guider camera and active optics assembly, and the W-Unit, which contains both the Pyramid Wavefront Sensor (PWFS) and the VisAO visible science camera. In this dissertation, we present details of the design, fabrication, assembly, alignment, and laboratory performance of the VisAO camera and its optical components. Many of these components required a custom design, such as the Spectral Differential Imaging Wollaston prisms and filters and the coronagraphic spots. One component, the Atmospheric Dispersion Corrector (ADC), required a unique triplet design that had until now never been fabricated and tested on sky. We present the design, laboratory, and on-sky results for our triplet ADC. We also present details of the CRO test setup and alignment. Because Magellan is a Gregorian telescope, the ASM is a concave ellipsoidal mirror. By simulating a star with a white light point source at the far conjugate, we can create a double-pass test of the whole system without the need for a real on-sky star. This allows us to test the AO system closed loop in the Arcetri test tower at its nominal design focal length and optical conjugates. The CRO test will also allow us to calibrate and verify the system off-sky at the Magellan telescope during commissioning and periodically thereafter. We present a design for a possible future upgrade path for a new visible Integral Field Spectrograph. By integrating a fiber array bundle at the VisAO focal plane, we can send light to a pre-existing facility spectrograph, such as LDSS3, which will allow 20 mas spatial sampling and R~1,800 spectra over the band 0.6-1.05 μm. This would be the highest spatial resolution IFU to date, either from the ground or in space.
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
InfraCAM (trade mark): A Hand-Held Commercial Infrared Camera Modified for Spaceborne Applications
NASA Technical Reports Server (NTRS)
Manitakos, Daniel; Jones, Jeffrey; Melikian, Simon
1996-01-01
In 1994, Inframetrics introduced the InfraCAM(TM), a high resolution hand-held thermal imager. As the world's smallest, lightest and lowest power PtSi based infrared camera, the InfraCAM is ideal for a wide range of industrial, non destructive testing, surveillance and scientific applications. In addition to numerous commercial applications, the light weight and low power consumption of the InfraCAM make it extremely valuable for adaptation to space borne applications. Consequently, the InfraCAM has been selected by NASA Lewis Research Center (LeRC) in Cleveland, Ohio, for use as part of the DARTFire (Diffusive and Radiative Transport in Fires) space borne experiment. In this experiment, a solid fuel is ignited in a low gravity environment. The combustion period is recorded by both visible and infrared cameras. The infrared camera measures the emission from polymethyl methacrylate (PMMA) and combustion products in six distinct narrow spectral bands. Four cameras successfully completed all qualification tests at Inframetrics and at NASA Lewis. They are presently being used for ground based testing in preparation for space flight in the fall of 1995.
Performance evaluation of a quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Burcher, E. E.; Huck, F. O.; Wall, S. D.; Woehrle, S. B.
1977-01-01
Spatial resolutions achieved with cameras on lunar and planetary landers have been limited to about 1 mm, whereas microscopes of the type proposed for such landers could have obtained resolutions of about 1 μm but were never accepted because of their complexity and weight. The quasi-microscope evaluated in this paper could provide intermediate resolutions of about 10 μm with relatively simple optics that would augment a camera, such as the Viking lander camera, without imposing special design requirements on the camera or limiting its field of view of the terrain. Images of natural particulate samples taken in black and white and in color show that grain size, shape, and texture are made visible for unconsolidated materials in a 50- to 500-μm size range. Such information may provide broad outlines of planetary surface mineralogy and allow inferences to be made of grain origin and evolution. The mineralogical descriptions of single grains would be aided by the reflectance spectra that could, for example, be estimated from the six-channel multispectral data of the Viking lander camera.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
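For the rectified dual-camera case, the horizontal component of the optical-flow field between the aligned streams plays the role of stereo disparity, from which depth follows as Z = f·B/d. A hedged sketch of that final step (a simplification of the full flow-field alignment):

```python
import numpy as np

def depth_from_flow(flow_x, focal_px, baseline_m, min_disp=1e-3):
    """Per-pixel depth treating horizontal flow as disparity.

    flow_x: horizontal flow between the two aligned streams (pixels);
    focal_px: focal length in pixels; baseline_m: camera separation.
    """
    d = np.abs(flow_x)
    return np.where(d > min_disp, focal_px * baseline_m / d, np.inf)
```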
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
Experiments on helical modes in magnetized thin foil-plasmas
NASA Astrophysics Data System (ADS)
Yager-Elorriaga, David
2017-10-01
This paper gives an in-depth experimental study of helical features on magnetized, ultrathin foil-plasmas driven by the 1-MA linear transformer driver at University of Michigan. Three types of cylindrical liner loads were designed to produce: (a) pure magneto-hydrodynamic (MHD) modes (defined as being void of the acceleration-driven magneto-Rayleigh-Taylor instability, MRT) using a non-imploding geometry, (b) pure kink modes using a non-imploding, kink-seeded geometry, and (c) MRT-MHD coupled modes in an unseeded, imploding geometry. For each configuration, we applied relatively small axial magnetic fields of Bz = 0.2-2.0 T (compared to peak azimuthal fields of 30-40 T). The resulting liner-plasmas and instabilities were imaged using 12-frame laser shadowgraphy and visible self-emission on a fast framing camera. The azimuthal mode number was carefully identified with a tracking algorithm of self-emission minima. Our experiments show that the helical structures are a manifestation of discrete eigenmodes. The pitch angle of the helix is simply m / kR , from implosion to explosion, where m, k, and R are the azimuthal mode number, axial wavenumber, and radius of the helical instability. Thus, the pitch angle increases (decreases) during implosion (explosion) as R becomes smaller (larger). We found that there are one, or at most two, discrete helical modes that arise for magnetized liners, with no apparent threshold on the applied Bz for the appearance of helical modes; increasing the axial magnetic field from zero to 0.5 T changes the relative weight between the m = 0 and m = 1 modes. Further increasing the applied axial magnetic fields yield higher m modes. Finally, the seeded kink instability overwhelms the intrinsic instability modes of the plasma. These results are corroborated with our analytic theory on the effects of radial acceleration on the classical sausage, kink, and higher m modes. Work supported by US DOE award DE-SC0012328, Sandia National Laboratories, and the National Science Foundation. D.Y.E. was supported by NSF fellowship Grant Number DGE 1256260. The fast framing camera was supported by a DURIP, AFOSR Grant FA9550-15-1-0419.
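The quoted pitch-angle relation can be recovered from the form of the perturbation; a short derivation sketch (assuming a helical perturbation proportional to exp[i(mθ + kz)]):

```latex
% For a perturbation \propto e^{i(m\theta + kz)} on a cylinder of radius R,
% a line of constant phase satisfies m\,d\theta + k\,dz = 0, so its
% inclination to the azimuthal direction is
\tan\alpha = \left|\frac{dz}{R\,d\theta}\right| = \frac{m}{kR},
% i.e. \alpha \approx m/(kR) for small angles: the pitch angle grows as R
% shrinks during implosion and falls as R grows during explosion.
```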
Investigation of high power impulse magnetron sputtering (HIPIMS) discharge using fast ICCD camera
NASA Astrophysics Data System (ADS)
Hecimovic, Ante
2012-10-01
High power impulse magnetron sputtering (HIPIMS) combines impulse glow discharges at power levels up to the MW range with conventional magnetron cathodes to achieve a highly ionised sputtered flux. The dynamics of the HIPIMS discharge was investigated using a fast Intensified Charge Coupled Device (ICCD) camera. In the first experiment the HIPIMS plasma was recorded from the side with the goal of analysing the plasma intensity using Abel inversion to obtain emissivity maps of the plasma species. The resulting emissivity maps provide information on the spatial distribution of Ar and sputtered material and on the evolution of the plasma chemistry above the cathode. In the second experiment the plasma emission was recorded with the camera facing the target. The images show that the HIPIMS plasma develops drift-wave-type instabilities characterized by well defined regions of high and low plasma emissivity along the racetrack of the magnetron. The instabilities cause periodic shifts in the floating potential. The structures rotate in the ExB direction at velocities of 10 km/s and frequencies up to 200 kHz. The high emissivity regions comprise Ar and metal ion emission with strong Ar and metal neutral emission depletion. A detailed analysis of the temporal evolution of the saturated instabilities using four consecutively triggered fast ICCD cameras is presented. Furthermore, variation of the working gas pressure and discharge current showed that the shape and the speed of the instability strongly depend on the working gas and target material combination. In order to better understand the mechanism of the instability, different optical interference band pass filters (of metal and gas atom, and ion lines) were used to observe the spatial distribution of each species within the instability.
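The side-on analysis rests on Abel inversion of line-integrated camera profiles. A minimal "onion peeling" version for an axisymmetric plasma is sketched below; the abstract does not specify the authors' actual inversion scheme, so this is one standard discretization.

```python
# Sketch: recover local emissivity eps(r) from a side-on, line-integrated
# intensity profile P(y), assuming an axisymmetric plasma column.
import numpy as np

def abel_onion_peel(profile, dr=1.0):
    """profile[i]: line-integrated intensity on the chord at height y_i = i*dr.
    Returns the local emissivity eps[j] in the annulus r in [j*dr, (j+1)*dr]."""
    n = len(profile)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            r_in, r_out, y = j * dr, (j + 1) * dr, i * dr
            # Chord length of the ray at height y through the annulus.
            A[i, j] = 2.0 * (np.sqrt(max(r_out**2 - y**2, 0.0))
                             - np.sqrt(max(r_in**2 - y**2, 0.0)))
    # Upper-triangular system: peel from the outermost shell inward.
    return np.linalg.solve(A, np.asarray(profile, dtype=float))
```

In practice the measured profile is noisy, so a smoothed or regularized variant is usually preferred over a direct solve; the sketch shows only the geometric core of the inversion.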
2. EXTERIOR VIEW OF DOWNSTREAM SIDE OF COTTAGE 191 TAKEN ...
2. EXTERIOR VIEW OF DOWNSTREAM SIDE OF COTTAGE 191 TAKEN FROM ROOF OF GARAGE 393. CAMERA FACING SOUTHEAST. COTTAGE 181 AND CHILDREN'S PLAY AREA VISIBLE ON EITHER SIDE OF ROOF. GRAPE ARBOR IN FOREGROUND. - Swan Falls Village, Cottage 191, Snake River, Kuna, Ada County, ID
ERIC Educational Resources Information Center
Daneman, Kathy
1998-01-01
Describes the integration of security systems to provide enhanced security that is both effective and long lasting. Examines combining card-access systems with camera surveillance, highly visible emergency phones, and security officers as one of many possible combinations. Some systems most capable of being integrated are listed. (GR)
11. Interior view of first floor of 1922 north section, ...
11. Interior view of first floor of 1922 north section, showing east wall and windows at far north end of building. Camera pointed E. Rear of building is partially visible on far left. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA
NASA Astrophysics Data System (ADS)
Wang, Jingli; Liu, Xulin; Yang, Xihua; Lei, Ming; Ruan, Shunxian; Nie, Kai; Miao, Yupeng; Liu, Jincheng
2014-04-01
Visibility information is fundamental in aviation, navigation, land transportation, air quality and dust storm monitoring, and military activities, which often require frequent and accurate real-time observation of visibility. Traditional manual observation, the primary means of obtaining visibility information by human eyes, is subjective, inconsistent and costly. Instrumental observation (traditional optical instruments) has overcome some of these limitations, but it is difficult to obtain correct visibility information in a complicated atmospheric (e.g. rainy and foggy) environment. We developed a new visibility instrument, the digital photography visiometer system (DPVS), equipped with advanced digital photographic technology including a high-resolution charge-coupled-device camera and a computer. The new DPVS imitates human eye observation and accurately calculates the visibility based on its definition and observational principles. We compared the results of the new DPVS with those from a forward scattering visibility instrument (FD12) and manual visibility observations in various (rainy, non-rainy, foggy) weather conditions. The comparison shows that the new DPVS, the FD12, and manual observation follow the same trend, but the observations from the new DPVS are closer to the manual observations on rainy days or in complicated weather conditions. Our study demonstrates that the new DPVS is superior to the optical visibility instrument and can be used for automated visibility observations under all weather conditions.
NASA Astrophysics Data System (ADS)
Farries, Mark; Ward, Jon; Valle, Stefano; Stephens, Gary; Moselund, Peter; van der Zanden, Koen; Napier, Bruce
2015-06-01
Mid-IR imaging spectroscopy has the potential to offer an effective tool for early cancer diagnosis. Recent development of bright super-continuum sources, narrow band acousto-optic tunable filters and fast cameras has made feasible a system that can be used for fast diagnosis of cancer in vivo at the point of care. The performance of a prototype system that has been developed under the Minerva project is described.
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology is a hot spot in the research of multi-sensor fusion technology in recent years. Existing infrared and visible light fusion technologies need to register the images before fusion because two separate cameras are used, and the performance of registration in practice still leaves room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by using a beam splitter prism, the coaxial light incident from the same lens is projected onto the infrared charge coupled device (CCD) and the visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment, which involves the entire process of the optic system, signal acquisition, and signal fusion, is constructed based on an imaging effect model. Additionally, a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
2004-09-07
Lonely Mimas swings around Saturn, seeming to gaze down at the planet's splendid rings. The outermost, narrow F ring is visible here and exhibits some clumpy structure near the bottom of the frame. The shadow of Saturn's southern hemisphere stretches almost entirely across the rings. Mimas is 398 kilometers (247 miles) wide. The image was taken with the Cassini spacecraft narrow angle camera on August 15, 2004, at a distance of 8.8 million kilometers (5.5 million miles) from Saturn, through a filter sensitive to visible red light. The image scale is 53 kilometers (33 miles) per pixel. Contrast was slightly enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06471
2015-10-30
During its closest-ever dive past the active south polar region of Saturn's moon Enceladus, NASA's Cassini spacecraft quickly shuttered its imaging cameras to capture glimpses of the fast-moving terrain below.
Hurricane Matthew over Haiti seen by NASA MISR
2016-10-04
On the morning of October 4, 2016, Hurricane Matthew passed over the island nation of Haiti. A Category 4 storm, it made landfall around 7 a.m. local time (5 a.m. PDT/8 a.m. EDT) with sustained winds over 145 mph. This is the strongest hurricane to hit Haiti in over 50 years. On October 4, at 10:30 a.m. local time (8:30 a.m. PDT/11:30 a.m. EDT), the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite passed over Hurricane Matthew. This animation was made from images taken by MISR's cameras; the swath of the downward-pointing (nadir) camera is 235 miles (378 kilometers) across, which is much narrower than the massive diameter of Matthew, so only the hurricane's eye and a portion of the storm's right side are visible. Haiti is completely obscured by Matthew's clouds, but part of the Bahamas is visible to the north. Several hot towers are visible within the central part of the storm, and another at the top right of the image. Hot towers are enormous thunderheads that punch through the tropopause (the boundary between the lowest layer of the atmosphere, the troposphere, and the next level, the stratosphere). The rugged topography of Haiti causes uplift within the storm, generating these hot towers and fueling even more rain than Matthew would otherwise dump on the country. MISR has nine cameras fixed at different angles, which capture images of the same point on the ground within about seven minutes. This animation was created by blending images from these nine cameras. The change in angle between the images causes a much larger apparent motion from south to north than actually exists, but the rotation of the storm is real motion. From this animation, you can get an idea of the incredible height of the hot towers, especially the one to the upper right. The counter-clockwise rotation of Matthew around its closed (cloudy) eye is also visible. These data were acquired during Terra orbit 89345. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21070
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
Smartphone Based Platform for Colorimetric Sensing of Dyes
NASA Astrophysics Data System (ADS)
Dutta, Sibasish; Nath, Pabitra
We demonstrate the working of a smartphone-based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is allowed to transmit through a specific dye solution. The transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable and lightweight, making it an ideal handy sensor for on-field sensing.
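To make the conversion from camera frame to spectrum concrete, here is a minimal sketch under stated assumptions: the quoted 0.345 nm/pixel dispersion, a hypothetical wavelength anchor (REF_PIXEL and REF_NM are assumed calibration constants, not values from the paper), and frames in which the spectrum is dispersed along the pixel rows.

```python
# Sketch: smartphone camera frame -> wavelength-calibrated spectrum and
# Beer-Lambert absorbance, using the dispersion quoted in the abstract.
import numpy as np

NM_PER_PIXEL = 0.345           # dispersion quoted in the abstract
REF_PIXEL, REF_NM = 0, 400.0   # assumed wavelength anchor (hypothetical)

def spectrum(frame):
    """frame: 2-D grayscale array with the spectrum dispersed along axis 1.
    Returns (wavelengths_nm, intensity) averaged over the slit direction."""
    intensity = frame.astype(float).mean(axis=0)
    pixels = np.arange(intensity.size)
    wavelengths = REF_NM + (pixels - REF_PIXEL) * NM_PER_PIXEL
    return wavelengths, intensity

def absorbance(sample_frame, blank_frame):
    """Beer-Lambert absorbance A = -log10(I_sample / I_blank)."""
    wl, i_s = spectrum(sample_frame)
    _, i_b = spectrum(blank_frame)
    ratio = np.clip(i_s, 1e-6, None) / np.clip(i_b, 1e-6, None)
    return wl, -np.log10(ratio)
```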
1988-11-01
Both optical systems look out through windows to point the sensor line of sight to a target. The optical layout for the UV camera is as shown in Figure 1; it tracks the UV plume outside the atmosphere, with handoff to the missile in the atmosphere. A camera with high resolution optics was used on the rear platform for the visible observation of the booster plume.
Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Nepomuk Otte, Adam
2009-05-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions that include innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched capacitor arrays for the digitization.
Li, Junfeng; Wan, Xiaoxia
2018-01-15
To enrich the contents of digital archives and to guide the copying and restoration of colored relics, non-invasive methods for extraction of painting boundaries and identification of pigment composition are proposed in this study based on visible spectral images of colored relics. The superpixel concept is applied for the first time to the oversegmentation of visible spectral images and implemented on the visible spectral images of colored relics to extract their painting boundaries. Since different pigments are characterized by their own spectra and the same kind of pigment has a similar geometric profile in its spectrum, an automatic identification method is established by comparing the proximity between the geometric profiles of the unknown spectrum from each superpixel and the known spectra from a deliberately prepared database. The methods are validated using the visible spectral images of the ancient wall paintings in the Mogao Grottoes. The visible spectral images are captured by a multispectral imaging system consisting of two broadband filters and a RGB camera with high spatial resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
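One standard way to compare geometric spectral profiles independently of overall brightness is the spectral angle; the sketch below uses it as a stand-in proximity measure, since the abstract does not name the paper's exact metric.

```python
# Sketch: match the mean spectrum of a superpixel against a database of
# reference pigment spectra using the spectral angle as the proximity.
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra; a small angle means a similar geometric
    profile, regardless of illumination level."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify_pigment(superpixel_spectrum, database):
    """database: {pigment_name: reference_spectrum}. Returns the best match."""
    return min(database, key=lambda name:
               spectral_angle(superpixel_spectrum, database[name]))
```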
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
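A minimal sketch of the one-vs-all PLS model building stage is shown below, using scikit-learn's PLSRegression. The feature vectors stand in for the paper's preprocessed thermal and visible signatures, and the number of PLS components is an illustrative assumption.

```python
# Sketch: one-vs-all PLS models, one per gallery subject; a probe is
# assigned to the subject whose model responds most strongly.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_one_vs_all(features, labels, n_components=10):
    """features: (n_samples, n_features); labels: (n_samples,).
    Response is +1 for the subject's own samples, -1 for everyone else."""
    models = {}
    for subject in np.unique(labels):
        y = np.where(labels == subject, 1.0, -1.0)
        model = PLSRegression(n_components=n_components)
        model.fit(features, y)
        models[subject] = model
    return models

def identify(models, probe_feature):
    """Score the probe against every subject model and take the maximum."""
    scores = {s: float(m.predict(probe_feature.reshape(1, -1)).ravel()[0])
              for s, m in models.items()}
    return max(scores, key=scores.get)
```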
Nonlinear Optical Properties of Semiconducting Polymers.
1990-01-01
We plan to carry out a full spectroscopy (IR through visible) of third harmonic generation in both cis- and trans-polyacetylene. In the fast transient photoconductivity area, we will attempt to move into the sub-picosecond regime.
A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres
NASA Technical Reports Server (NTRS)
Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo
2015-01-01
A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified in comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors are less than 2% for simulated TOA reflectances in the solar channels, and the brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.
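The CKD step is what buys most of the speed: a line-by-line integral over wavenumber is replaced by a short quadrature over sorted absorption coefficients. A toy version is sketched below; the k-points and weights are placeholders, not VIIRS band coefficients.

```python
# Sketch of the correlated k-distribution idea: band transmissivity as a
# short weighted sum over representative absorption coefficients,
# T = sum_i w_i * exp(-k_i * u), with sum(w_i) = 1.
import numpy as np

def ckd_transmissivity(absorber_amount, k_points, weights):
    k = np.asarray(k_points, float)
    w = np.asarray(weights, float)
    return float(np.sum(w * np.exp(-k * absorber_amount)))

# Illustrative 4-point quadrature for one band (placeholder values):
T = ckd_transmissivity(absorber_amount=2.0,
                       k_points=[1e-3, 1e-2, 1e-1, 1.0],
                       weights=[0.4, 0.3, 0.2, 0.1])
```

A handful of quadrature points per band replaces the thousands of spectral lines a line-by-line model must evaluate, which is consistent with the three-orders-of-magnitude speedup reported above.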
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
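To illustrate the image-source method that the visibility algorithms accelerate, the sketch below enumerates first-order specular reflections in an axis-aligned rectangular room, where each wall contributes one mirror image of the source. It is a minimal sketch: in general scenes, FastV/AD-Frustum-style visibility culling is what prunes the combinatorial set of candidate image sources.

```python
# Sketch: first-order image sources in an axis-aligned "shoebox" room and
# the propagation delay of each reflected path at a receiver.
import numpy as np

def first_order_image_sources(source, room_min, room_max):
    """Reflect the source across each of the 6 walls of the box."""
    s = np.asarray(source, float)
    lo, hi = np.asarray(room_min, float), np.asarray(room_max, float)
    images = []
    for axis in range(3):
        for wall in (lo[axis], hi[axis]):
            img = s.copy()
            img[axis] = 2.0 * wall - s[axis]   # mirror across the wall plane
            images.append(img)
    return images

def arrival_delays(images, receiver, c=343.0):
    """Propagation delay (seconds) of each reflected path, speed of sound c."""
    r = np.asarray(receiver, float)
    return [float(np.linalg.norm(img - r)) / c for img in images]
```

Higher reflection orders mirror the image sources again, so the candidate count grows exponentially; conservative visibility from a point (FastV) or from a region is what keeps only the image sources that can actually reach the receiver.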
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granstedt, E. M., E-mail: egranstedt@trialphaenergy.com; Petrov, P.; Knapp, K.
2016-11-15
The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.
Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device
NASA Astrophysics Data System (ADS)
Granstedt, E. M.; Petrov, P.; Knapp, K.; Cordero, M.; Patel, V.
2016-11-01
The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.
Multiparametric Experiments and Multiparametric Setups for Metering Explosive Eruptions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Scarlato, P.; Del Bello, E.
2016-12-01
Explosive eruptions are multifaceted processes best studied by integrating a variety of observational perspectives. This need marries well with the continuous stream of new means that technological progress provides to volcanologists to parameterize these eruptions. For decades, new technologies have been tested and integrated approaches have been attempted during so-called multiparametric experiments, i.e., short field campaigns with many different instruments (and scientists) targeting natural laboratory volcanoes. Recently, portable multiparametric setups have been developed, including a few highly complementary instruments to be rapidly deployed at any erupting volcano. Multiparametric experiments and setups share most of their challenges, like technical issues, site logistics, and data processing and interpretation. Our FAMoUS (FAst MUltiparametric Setup) setup pivots around coupled, high-speed imaging (visible and thermal) and acoustic (infrasonic to audible) recording, plus occasional seismic recording and sample collection. FAMoUS provided new insights on pyroclast ejection and settling and jet noise dynamics at volcanoes worldwide. In recent years we conducted a series of BAcIO (Broadband ACquisition and Imaging Operation) experiments at Stromboli (Italy). These hosted state-of-the-art and prototypal eruption-metering technologies, including: multiple high-speed high-definition cameras for 3-D imaging; combined visible-infrared-ultraviolet imaging; in-situ and remote gas measurements; UAV aerial surveys; Doppler radar; and microphone arrays. This combined approach provides new understanding of the fundamental controls of Strombolian-style activity, and allows for crucial cross-validation of instruments and techniques. Several documentary expeditions participated in the BAcIO, attesting to its tremendous potential for public outreach. Finally, sharing field work promotes interdisciplinary discussions and cooperation like nothing else in the world.
Limiter Observations during W7-X First Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurden, Glen Anthony; Biedermann, C.; Effenberg, F.
During the first operational phase (referred to as OP1.1) of the new Wendelstein 7-X (W7-X) stellarator, five poloidal graphite limiters were mounted on the inboard side of the vacuum vessel, one in each of the five toroidal modules which form the W7-X vacuum vessel. Each limiter consisted of nine specially shaped graphite tiles, designed to conform to the last closed field line geometry in the bean-shaped section of the standard OP1.1 magnetic field configuration (Sunn Pedersen et al 2015 Nucl. Fusion 55 126001). Here, we observed the limiters with multiple infrared and visible camera systems, as well as filtered photomultipliers. Power loads are calculated from infrared (IR) temperature measurements using THEODOR, and heating patterns (dual stripes) compare well with field line mapping and EMC3-EIRENE predictions. While the poloidal symmetry of the heat loads was excellent, the toroidal heating pattern showed up to a factor of 2× variation, with peak heat loads on Limiter 1. The total power intercepted by the limiters was up to ~60% of the input ECRH heating power. Calorimetry using bulk tile heating (measured via post-shot IR thermography) on Limiter 3 showed a difference between short high power discharges, and longer lower power ones, with regards to the fraction of energy deposited on the limiters. Finally, fast heating transients, with frequency >1 kHz were detected, and their visibility was enhanced by the presence of surface coatings which developed on the limiters by the end of the campaign.
Eclipse Science Results from the Airborne Infrared Spectrometer (AIR-Spec)
NASA Astrophysics Data System (ADS)
Samra, J.; Cheimets, P.; DeLuca, E.; Golub, L.; Judge, P. G.; Lussier, L.; Madsen, C. A.; Marquez, V.; Tomczyk, S.; Vira, A.
2017-12-01
We present the first science results from the commissioning flight of the Airborne Infrared Spectrometer (AIR-Spec), an innovative solar spectrometer that observed the 2017 solar eclipse from the NSF/NCAR High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER). During the eclipse, AIR-Spec imaged five magnetically sensitive coronal emission lines between 1.4 and 4 microns to determine whether they may be useful probes of coronal magnetism. The instrument measured emission line intensity, FWHM, and Doppler shift from an altitude of over 14 km, above local weather and most of the absorbing water vapor. Instrumentation includes an image stabilization system, feed telescope, grating spectrometer, infrared camera, and visible slit-jaw imager. Results from the 2017 eclipse are presented in the context of the mission's science goals. AIR-Spec will identify line strengths as a function of position in the solar corona and search for the high frequency waves that are candidates for heating and acceleration of the solar wind. The instrument will also identify large scale flows in the corona, particularly in polar coronal holes. Three of the five lines are expected to be strong in coronal hole plasmas because they are excited in part by scattered photospheric light. Line profile analysis will probe the origins of the fast and slow solar wind. Finally, the AIR-Spec measurements will complement ground based eclipse observations to provide detailed plasma diagnostics throughout the corona. AIR-Spec measured infrared emission of ions observed in the visible from the ground, giving insight into plasma heating and acceleration at radial distances inaccessible to existing or planned spectrometers.
Limiter Observations during W7-X First Plasmas
Wurden, Glen Anthony; Biedermann, C.; Effenberg, F.; ...
2017-04-03
During the first operational phase (referred to as OP1.1) of the new Wendelstein 7-X (W7-X) stellarator, five poloidal graphite limiters were mounted on the inboard side of the vacuum vessel, one in each of the five toroidal modules which form the W7-X vacuum vessel. Each limiter consisted of nine specially shaped graphite tiles, designed to conform to the last closed field line geometry in the bean-shaped section of the standard OP1.1 magnetic field configuration (Sunn Pedersen et al 2015 Nucl. Fusion 55 126001). Here, we observed the limiters with multiple infrared and visible camera systems, as well as filtered photomultipliers. Power loads are calculated from infrared (IR) temperature measurements using THEODOR, and heating patterns (dual stripes) compare well with field line mapping and EMC3-EIRENE predictions. While the poloidal symmetry of the heat loads was excellent, the toroidal heating pattern showed up to a factor of 2× variation, with peak heat loads on Limiter 1. The total power intercepted by the limiters was up to ~60% of the input ECRH heating power. Calorimetry using bulk tile heating (measured via post-shot IR thermography) on Limiter 3 showed a difference between short high power discharges, and longer lower power ones, with regards to the fraction of energy deposited on the limiters. Finally, fast heating transients, with frequency >1 kHz were detected, and their visibility was enhanced by the presence of surface coatings which developed on the limiters by the end of the campaign.
Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery
NASA Astrophysics Data System (ADS)
Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.
2017-12-01
Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection based solar insolation forecasting, with the requisite spatial resolution and latency needed to predict high ramp rate events, obtained from a bottom-up perspective is strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large scale fabrication and long term deployment has been developed during the project prototyping phase.
Southern Italy, Instrument Pointing Subsystem
1985-08-06
51F-32-024 (29 July - 6 August 1985) --- Italy's "boot heel," surrounded by waters of the Ionian Sea/Golfo di Taranto and the Adriatic Sea, is very clearly visible in this scene made with a handheld 70mm camera. Spacelab 2's versatile instrument pointing system (IPS) protrudes from the cargo bay.
ISS, Soyuz, and Endeavour undocking seen from the SM during Expedition Four
2001-12-15
ISS004-E-5024 (15 December 2001) --- A Soyuz vehicle, docked to the International Space Station (ISS), is photographed by a crewmember on the station. A portion of the Space Shuttle Endeavour is visible in the background. The image was taken with a digital still camera.
Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...
Onufrienko with fresh fruit in the Zvezda SM, Expedition Four
2002-01-16
ISS004-E-6334 (January 2002) --- Cosmonaut Yury I. Onufrienko, Expedition Four mission commander representing Rosaviakosmos, is photographed in the Zvezda Service Module on the International Space Station (ISS). Apples and oranges are visible floating freely in front of Onufrienko. The image was taken with a digital still camera.
Earth Observations taken by the Expedition 10 crew
2005-01-23
ISS010-E-14618 (23 January 2005) --- Egypt's Lake Nasser, centered roughly at 22.64 degrees north latitude and 32.45 degrees east longitude, was captured with an electronic still camera by the Expedition 10 crew onboard the International Space Station. Sunglint on the lake makes it more easily visible.
2009-03-27
View from the balcony of the Russian Mission Control Center in Korolev, Russia moments before the Soyuz TMA-14 docks to the International Space Station on Saturday, March 28, 2009. A view of the International Space Station from Soyuz onboard cameras is visible in the upper right display. Photo Credit: (NASA/Bill Ingalls)
2009-03-27
View from the balcony of the Russian Mission Control Center in Korolev, Russia moments before the Soyuz TMA-14 docks to the International Space Station on Saturday, March 28, 2009. A view of the International Space Station from Soyuz onboard cameras is visible in the upper display. Photo Credit: (NASA/Bill Ingalls)
Production application of injection-molded diffractive elements
NASA Astrophysics Data System (ADS)
Clark, Peter P.; Chao, Yvonne Y.; Hines, Kevin P.
1995-12-01
We demonstrate that transmission kinoforms for visible light applications can be injection molded in acrylic in production volumes. A camera is described that employs molded Fresnel lenses to change the convergence of a projection ranging system. Kinoform surfaces are used in the projection system to achromatize the Fresnel lenses.
Usachev is visible in the open ODS hatch
2001-08-12
STS105-E-5094 (12 August 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, can be seen through the recently opened airlock hatch of Space Shuttle Discovery as he welcomes the STS-105 and Expedition Three crews. This image was taken with a digital still camera.
Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences
NASA Astrophysics Data System (ADS)
Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.
2018-05-01
An automatic spatio-temporal flow velocity measurement approach, using an uncooled thermal camera, is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are first implemented to remove vignetting effects in the thermal imagery and to obtain the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the regions of interest for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences and local hot points are identified as tracking features. Afterwards, accurate dense tracking outputs are obtained using the pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show a great potential of automatic flow velocity measurement in small rivers using imagery from a thermal camera.
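The tracking core (feature detection plus pyramidal Lucas-Kanade) can be sketched with OpenCV as below; the metres-per-pixel scale and the frame rate are assumed constants standing in for the paper's geometric orientation step.

```python
# Sketch: detect hot surface features in one thermal frame, follow them with
# pyramidal Lucas-Kanade into the next frame, and convert pixel displacement
# to surface velocity. Assumes 8-bit single-channel frames.
import cv2
import numpy as np

M_PER_PIXEL = 0.01   # assumed ground sample distance after orientation
FPS = 25.0           # assumed frame rate

def surface_velocities(frame0, frame1):
    """Returns the speed (m/s) of each successfully tracked feature."""
    pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None,
                                               winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    disp = (pts1[ok] - pts0[ok]).reshape(-1, 2)              # pixels/frame
    return np.linalg.norm(disp, axis=1) * M_PER_PIXEL * FPS  # m/s
```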
NASA Astrophysics Data System (ADS)
Ou, Yangwei; Zhang, Hongbo; Li, Bin
2018-04-01
The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, an optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. A LIDAR (Light Detection And Ranging) instrument or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limitation on the field of view (FOV) of the cameras, the visibility of the spacecraft and the installation of the cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate the position and velocity. The results show that the navigation accuracy can be enhanced by using more deputies and that the installation of the cameras significantly affects the navigation performance.
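As a sketch of the measurement model such an EKF would linearize, the function below maps a relative position vector (expressed in the inertial frame via the assumed high-accuracy attitude) to the azimuth, elevation, and range observables; the filter evaluates this h(x) at the predicted state and updates with the camera and LIDAR/radar measurements.

```python
# Sketch: LOS-angle and range measurement model for formation-based
# orbit determination. Conventions (azimuth in the x-y plane, elevation
# from it) are assumptions for illustration.
import numpy as np

def los_measurement(rel_pos):
    """rel_pos: deputy-minus-chief position [x, y, z] in the inertial frame.
    Returns (azimuth, elevation, range)."""
    x, y, z = rel_pos
    rho = np.linalg.norm(rel_pos)
    azimuth = np.arctan2(y, x)          # camera LOS angle in the x-y plane
    elevation = np.arcsin(z / rho)      # camera LOS angle out of the plane
    return azimuth, elevation, rho      # range from LIDAR or radar
```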
2016-03-07
Peering deep into the early Universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colourful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble’s Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields programme. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora’s Box. While one of Hubble’s cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near to the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep Universe in incredible detail, this parallel field — when compared to other deep fields — will help astronomers understand how similar the Universe looks in different directions
Using remote underwater video to estimate freshwater fish species richness.
Ebner, B C; Morgan, D L
2013-05-01
Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained from using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVSs and UBRUVSs provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multi-spectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
Rogers, B.T. Jr.; Davis, W.C.
1957-12-17
This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.
Solid state television camera (CCD-buried channel)
NASA Technical Reports Server (NTRS)
1976-01-01
The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.
Solid state television camera (CCD-buried channel), revision 1
NASA Technical Reports Server (NTRS)
1977-01-01
An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.
Experiments with synchronized sCMOS cameras
NASA Astrophysics Data System (ADS)
Steele, Iain A.; Jermak, Helen; Copperwheat, Chris M.; Smith, Robert J.; Poshyachinda, Saran; Soonthorntham, Boonrucksar
2016-07-01
Scientific-CMOS (sCMOS) cameras can combine low noise with high readout speeds and do not suffer the charge multiplication noise that effectively reduces the quantum efficiency of electron multiplying CCDs by a factor of 2. As such they have strong potential in fast photometry and polarimetry instrumentation. In this paper we describe the results of laboratory experiments using a pair of commercial off-the-shelf sCMOS cameras based around a 4-transistor-per-pixel architecture. In particular, using both stable and pulsed light sources, we evaluate the timing precision that may be obtained when the camera readouts are synchronized either in software or electronically. We find that software synchronization can introduce an error of 200 msec. With electronic synchronization any error is below the limit (50 msec) of our simple measurement technique.
Solid state, CCD-buried channel, television camera study and design
NASA Technical Reports Server (NTRS)
Hoagland, K. A.; Balopole, H.
1976-01-01
An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.
Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.
Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F
1980-01-01
Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
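The convolution/frequency-filtering reconstruction described above can be sketched as follows; a plain ramp filter stands in for the authors' resolution- and noise-optimized filter, and the attenuation correction for single photon events is omitted.

```python
# Sketch: filtered back-projection from multiple gamma-camera views.
# Each projection is filtered in the Fourier domain, then smeared back
# across the image grid along its viewing angle.
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """sinogram: (n_angles, n_bins) projections. Returns an
    (n_bins, n_bins) reconstructed slice."""
    n_angles, n_bins = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_bins))          # swap in a custom filter here
    filtered = np.real(np.fft.ifft(
        np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    image = np.zeros((n_bins, n_bins))
    centre = (n_bins - 1) / 2.0
    ys, xs = np.mgrid[0:n_bins, 0:n_bins] - centre
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each image pixel for this view.
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre
        image += np.interp(t.ravel(), np.arange(n_bins),
                           proj).reshape(n_bins, n_bins)
    # Approximate scaling; the exact constant depends on filter normalization.
    return image * np.pi / n_angles
```

The filter is the tunable part of the pipeline: the paper's filter trades resolution against noise suppression, whereas the plain ramp above maximizes resolution at the cost of amplifying high-frequency noise.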
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Philip Michael; Ahn, Joonwook; Bell, R. E.
High-harmonic fast wave (HHFW) heating and current drive is being developed in NSTX to provide bulk electron heating and q(0) control during non-inductively sustained H-mode plasmas fuelled by deuterium neutral-beam injection (NBI). In addition, it is used to assist the plasma current ramp-up. A major modification to increase the RF power limit was made in 2009; the original end-grounded, single end-powered current straps of the 12-element array were replaced with center-grounded, double end-powered straps. Greater than 3 MW have been coupled into NBI-driven, ELMy H-mode plasmas with this upgraded antenna. Improved core HHFW heating, particularly at longer wavelengths and during low-density start-up and plasma current ramp-up, has been obtained by lowering the edge density with lithium wall conditioning, thereby moving the critical density for fast-wave propagation away from the vessel wall [1]. Significant core electron heating of NBI-fuelled H-modes has been observed for the first time over a range of launched wavelengths, and H-modes can be accessed by HHFW alone. Visible and IR camera images of the antenna and divertor indicate that fast wave interactions can deposit considerable RF energy on the outboard divertor plate, especially at longer wavelengths that begin to propagate closer to the vessel walls. Edge power loss can also arise from HHFW-generated parametric decay instabilities; edge ion heating is observed that is wavelength dependent. During plasmas where HHFW is combined with NBI, there is a significant enhancement in neutron rate, and fast-ion D-alpha (FIDA) emission measurements clearly show broadening of the fast-ion profile in the plasma core. Large edge localized modes (ELMs) have been observed immediately following the termination of RF power, whether the power turn-off is programmed or due to antenna arcing. Causality has not been established but new experiments are planned and will be reported. Fast digitization of the reflected power signal indicates a much faster rise time for arcs than for ELMs. Based on this observation, an ELM/arc discrimination system is being implemented to maintain RF power during ELMs even when the reflection coefficient becomes large. This work is supported by US DOE contracts DE-AC05-00OR22725 and DE-AC02-09CH11466. References [1] C. K. Phillips, et al, Nuclear Fusion 10, 075015 (2009)
Land-based infrared imagery for marine mammal detection
NASA Astrophysics Data System (ADS)
Graber, Joseph; Thomson, Jim; Polagye, Brian; Jessup, Andrew
2011-09-01
A land-based infrared (IR) camera is used to detect endangered Southern Resident killer whales in Puget Sound, Washington, USA. The observations are motivated by a proposed tidal energy pilot project, which will be required to monitor for environmental effects. Potential monitoring methods also include visual observation, passive acoustics, and active acoustics. The effectiveness of observations in the infrared spectrum is compared to observations in the visible spectrum to assess the viability of infrared imagery for cetacean detection and classification. Imagery was obtained at Lime Kiln Park, Washington from 7/6/10-7/9/10 using a FLIR Thermovision A40M infrared camera (7.5-14μm, 37°HFOV, 320x240 pixels) under ideal atmospheric conditions (clear skies, calm seas, and wind speed 0-4 m/s). Whales were detected during both day (9 detections) and night (75 detections) at distances ranging from 42 to 162 m. The temperature contrast between dorsal fins and the sea surface ranged from 0.5 to 4.6 °C. Differences in emissivity from sea surface to dorsal fin are shown to aid detection at high incidence angles (near grazing). A comparison to theory is presented, and observed deviations from theory are investigated. A guide for infrared camera selection based on site geometry and desired target size is presented, with specific considerations regarding marine mammal detection. Atmospheric conditions required to use visible and infrared cameras for marine mammal detection are established and compared with 2008 meteorological data for the proposed tidal energy site. Using conservative assumptions, infrared observations are predicted to provide a 74% increase in hours of possible detection, compared with visual observations.
Novel fast catadioptric objective with wide field of view
NASA Astrophysics Data System (ADS)
Muñoz, Fernando; Infante Herrero, José M.; Benítez, Pablo; Miñano, Juan C.; Lin, Wang; Vilaplana, Juan; Biot, Guillermo; de la Fuente, Marta
2010-08-01
Using the Simultaneous Multiple Surface method in 2D (SMS2D), we present a fast catadioptric objective with a wide field of view (125°×96°) designed for a microbolometer detector with 640×480 pixels and 25 μm pixel pitch. Keywords: infrared lens design, thermal imaging, Schwarzschild configuration, SMS2D, wide field of view, driving cameras, panoramic systems.
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, including its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and insensitive to changes in ground targets, atmosphere, and environment on orbit or on the ground, and depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging degradation contributed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case where the IMTF is not used. This opens a door to overcoming the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
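As an illustration of the restoration step described above, the following sketch applies a constrained least-squares filter in the frequency domain, given an image and a sampled MTF. It is a minimal stand-in, assuming a real, zero-phase transfer function and a Laplacian smoothness constraint; the function name and the regularization weight are illustrative, not from the paper:

```python
import numpy as np

def cls_deblur(image, mtf, gamma=0.01):
    """Constrained least-squares restoration in the frequency domain.

    image : 2-D grayscale array degraded by the camera.
    mtf   : 2-D array of the same shape, the camera's MTF sampled on the
            FFT frequency grid (treated as a real, zero-phase OTF here).
    gamma : regularization weight trading sharpening against noise gain.
    """
    # Laplacian kernel, rolled so it is centered at the origin, acts as
    # the smoothness-constraint operator C(u, v).
    lap = np.zeros_like(image, dtype=float)
    lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    C = np.fft.fft2(np.roll(lap, (-1, -1), axis=(0, 1)))

    H = mtf.astype(complex)
    G = np.fft.fft2(image)
    # CLS filter: F_hat = H* G / (|H|^2 + gamma |C|^2)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(F_hat))
```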
Nonlinear Optical Properties of Semiconducting Polymers
1990-10-26
...harmonic generation in both cis- and trans-polyacetylene. In the fast transient photoconductivity area, we will attempt to move into the sub-picosecond... addition, we plan to carry out a full spectroscopy (IR through visible) of third harmonic generation in both cis- and trans-polyacetylene. In the fast
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na...
Fast and Sensitive Solution-Processed Visible-Blind Perovskite UV Photodetectors.
Adinolfi, Valerio; Ouellette, Olivier; Saidaminov, Makhsud I; Walters, Grant; Abdelhady, Ahmed L; Bakr, Osman M; Sargent, Edward H
2016-09-01
The first visible-blind UV photodetector based on MAPbCl3 integrated on a substrate exhibits excellent performance, with responsivities reaching 18 A W⁻¹ below 400 nm and imaging-compatible response times of 1 ms. This is achieved by using substrate-integrated single crystals, thus overcoming the severe limitations affecting thin films and offering a new application of efficient, solution-processed, visible-transparent perovskite optoelectronics.
Earth taken by Galileo after completing its first Earth Gravity Assist
NASA Technical Reports Server (NTRS)
1990-01-01
This near-infrared photograph of Earth was taken by the Galileo spacecraft at 6:07 am Pacific Standard Time (PST), 12-11-90, at a range of about 1.32 million miles. The camera used light with a wavelength of 1 micron, which easily penetrates atmospheric hazes and enhances the brightness of land surfaces. South America is prominent near the center; at the top, the East Coast of the United States, including Florida, is visible. The West Coast of Africa is visible on the horizon at right. Photo provided by the Jet Propulsion Laboratory (JPL) with alternate number P-37328, 12-19-90.
2005-01-17
This Cassini image shows predominantly the impact-scarred leading hemisphere of Saturn's icy moon Rhea (1,528 kilometers, or 949 miles across). The image was taken in visible light with the Cassini spacecraft narrow angle camera on Dec. 12, 2004, at a distance of 2 million kilometers (1.2 million miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 30 degrees. The image scale is about 12 kilometers (7.5 miles) per pixel. The image has been magnified by a factor of two and contrast enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06564
A multi-channel coronal spectrophotometer.
NASA Technical Reports Server (NTRS)
Landman, D. A.; Orrall, F. Q.; Zane, R.
1973-01-01
We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.
Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi
2017-02-01
Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable the determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10 s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening.
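For readers unfamiliar with the discriminant step, the sketch below fits a logistic regression to synthetic stand-in vital-sign data (Python with scikit-learn). The feature values are illustrative only and do not reproduce the study's data or coefficients; only the class sizes (22 controls, 16 patients) follow the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: rows are [skin temperature (deg C),
# heart rate (bpm), respiration rate (breaths/min)]; labels are
# 1 = influenza-like illness, 0 = control.
rng = np.random.default_rng(0)
controls = np.column_stack([rng.normal(36.5, 0.3, 22),
                            rng.normal(70, 8, 22),
                            rng.normal(14, 2, 22)])
patients = np.column_stack([rng.normal(37.8, 0.5, 16),
                            rng.normal(90, 10, 16),
                            rng.normal(20, 3, 16)])
X = np.vstack([controls, patients])
y = np.array([0] * 22 + [1] * 16)

clf = LogisticRegression().fit(X, y)
# Probability of infection for a new subject measured by the IRT system.
print(clf.predict_proba([[37.6, 95, 22]])[0, 1])
```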
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low-Dynamic-Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast), and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
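A minimal sketch of the general idea follows, assuming a simple Gaussian multi-band split and an ad-hoc gain law; the authors' actual filter choices and gain function are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigmas=(1, 2, 4, 8), alpha=1.5, eps=1e-6):
    """Toy multi-band enhancement: gain is a decreasing function of local
    detail energy, so strong edges are boosted less than faint detail,
    which limits ringing ("halo") overshoot. A production version would
    also suppress gain in near-zero (noise-dominated) bands."""
    base = image.astype(float)
    out = np.zeros_like(base)
    for s in sigmas:
        blurred = gaussian_filter(base, s)
        detail = base - blurred                    # band-pass detail layer
        energy = gaussian_filter(detail ** 2, s)   # local detail energy
        gain = 1.0 + alpha / (1.0 + energy / (energy.mean() + eps))
        out += gain * detail
        base = blurred
    return out + base                              # add back low-pass residual
```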
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background detail provided by the visible light image and the hidden-target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information carried by each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed in this paper to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones using six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result in much less time and performs well in both subjective evaluation and objective indicators.
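As a rough, spatial-domain stand-in for the pixel-information idea (the NSCT decomposition is omitted for brevity), one can weight each pixel by its local variance and blend the two registered images:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse(visible, infrared, win=7, eps=1e-6):
    """Blend two registered images, weighting each pixel by its local
    variance as a crude proxy for the information it carries."""
    def local_var(img):
        mean = uniform_filter(img, win)
        return np.clip(uniform_filter(img ** 2, win) - mean ** 2, 0, None)

    v, r = visible.astype(float), infrared.astype(float)
    wv, wr = local_var(v), local_var(r)
    return (wv * v + wr * r) / (wv + wr + eps)
```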
NASA Astrophysics Data System (ADS)
Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.
2007-06-01
The “Pi of the Sky” experiment is designed to search for prompt optical emission from GRB sources. 32 CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search of optical flashes. The prototype with 2 cameras, operated at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has given limits for a few GRBs.
A combined vision-inertial fusion approach for 6-DoF object pose estimation
NASA Astrophysics Data System (ADS)
Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.
2015-02-01
The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so as to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.
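The complementary character of the two sensors can be illustrated with a single-axis blend (Python). This generic complementary filter is not the paper's estimator, which fuses full 6-DoF vision poses with acceleration data; it only shows why drift-free-but-slow and fast-but-drifting sources combine well:

```python
import numpy as np

def complementary_fuse(p_vision, p_inertial, tau=1.0, dt=0.01):
    """Blend a drift-free but slow/noisy vision track with a fast but
    drifting inertially-integrated track along one axis. tau sets the
    crossover: over short horizons the inertial increments dominate,
    over long horizons the vision estimate pulls the result back."""
    alpha = tau / (tau + dt)
    fused = np.empty_like(p_vision, dtype=float)
    fused[0] = p_vision[0]
    for k in range(1, len(p_vision)):
        inertial_step = p_inertial[k] - p_inertial[k - 1]
        fused[k] = alpha * (fused[k - 1] + inertial_step) + (1 - alpha) * p_vision[k]
    return fused
```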
NASA Astrophysics Data System (ADS)
Hsu, S. C.; Moser, A. L.; Merritt, E. C.; Adams, C. S.
2015-11-01
Over the past 4 years on the Plasma Liner Experiment (PLX) at LANL, we have studied obliquely and head-on-merging supersonic plasma jets of an argon/impurity or hydrogen/impurity mixture. The jets are formed/launched by pulsed-power-driven railguns. In successive experimental campaigns, we characterized the (a) evolution of plasma parameters of a single plasma jet as it propagated up to ~ 1 m away from the railgun nozzle, (b) density profiles and 2D morphology of the stagnation layer and oblique shocks that formed between obliquely merging jets, and (c) collisionless interpenetration transitioning to collisional stagnation between head-on-merging jets. Key plasma diagnostics included a fast-framing CCD camera, an 8-chord visible interferometer, a survey spectrometer, and a photodiode array. This talk summarizes the primary results mentioned above, and highlights analyses of inferred post-shock temperatures based on observations of density gradients that we attribute to shock-layer thickness. We also briefly describe more recent PLX experiments on Rayleigh-Taylor-instability evolution with magnetic and viscous effects, and potential future collisionless shock experiments enabled by low-impurity, higher-velocity plasma jets formed by contoured-gap coaxial guns. Supported by DOE Fusion Energy Sciences and LANL LDRD.
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination, and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under an ARPA contract by the Space Computer Corporation to demonstrate its use in space.
The Explosive Counterparts of Gravitational Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Astronomy collaborations like the Dark Energy Survey, which Fermilab leads, can track down the visible sources of gravitational waves caused by binary neutron stars. This animation takes you through the collision of two neutron stars, and shows you the explosion of light and energy seen by the Dark Energy Camera on August 17, 2017.
3. ONTARIO MINE. ADIT ENTRANCE WITH TIN ROOF. TIP TOP ...
3. ONTARIO MINE. ADIT ENTRANCE WITH TIN ROOF. TIP TOP IS LOCATED IN LINE WITH 'Y' BRANCH AND THE TAILING PILE FOR TIP TOP IS VISIBLE JUST TO RIGHT OF IT. CAMERA POINTED SOUTH-SOUTHEAST. - Florida Mountain Mining Sites, Ontario Mine, Northwest side of Florida Mountain, Silver City, Owyhee County, ID
Nondestructive defect detection in laser optical coatings
NASA Astrophysics Data System (ADS)
Marrs, C. D.; Porteus, J. O.; Palmer, J. R.
1985-03-01
Defects responsible for laser damage in visible-wavelength mirrors are observed at nondamaging intensities using a new video microscope system. Studies suggest that a defect scattering phenomenon combined with lag characteristics of video cameras makes this possible. Properties of the video-imaged light are described for multilayer dielectric coatings and diamond-turned metals.
1967-08-01
The Apollo Telescope Mount (ATM), designed and developed by the Marshall Space Flight Center, served as the primary scientific instrument unit aboard the Skylab. The ATM contained eight complex astronomical instruments designed to observe the Sun over a wide spectrum from visible light to x-rays. This photo depicts a mockup of the ATM contamination monitor camera and photometer.
Helms with laptop in Destiny laboratory module
2001-03-30
ISS002-E-5478 (30 March 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, works at a laptop computer in the U.S. Laboratory / Destiny module of the International Space Station (ISS). The Space Station Remote Manipulator System (SSRMS) control panel is visible to Helms' right. This image was recorded with a digital still camera.
2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING ...
2. CHANNEL DIMENSIONS AND ALIGNMENT RESEARCH INSTRUMENTATION. HYDRAULIC ENGINEER PILOTING VIDEO-CONTROLLED BOAT MODEL FROM CONTROL TRAILER. NOTE VIEW FROM BOAT-MOUNTED VIDEO CAMERA SHOWN ON MONITOR, AND MODEL WATERWAY VISIBLE THROUGH WINDOW AT LEFT. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
4. LOWER NOTTINGHAM MINE. DETAIL OF OBJECTS ASSOCIATED WITH CABIN ...
4. LOWER NOTTINGHAM MINE. DETAIL OF OBJECTS ASSOCIATED WITH CABIN 'B'; PIPE, WOOD, STOVE MATERIALS, AND COLLAPSED ROOT CELLAR IN CENTRAL AREA. VERTICAL, DARK PIPE IS VISIBLE IN CENTER/UPPER THIRD. CAMERA POINTED EAST. - Florida Mountain Mining Sites, Lower Nottingham Mine, Western slope of Florida Mountain, Silver City, Owyhee County, ID
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
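A minimal stand-in for such a classifier (Python with PyTorch and a recent torchvision) is a ResNet-18 backbone with a two-class head; the paper's exact architecture, input size, and training procedure are not reproduced here:

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep residual network with its final layer replaced by an
# open/closed (two-class) head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

eye_batch = torch.randn(8, 3, 224, 224)  # dummy eye-region crops
logits = model(eye_batch)
eye_state = logits.argmax(dim=1)         # e.g., 0 = closed, 1 = open
```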
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if drones operate in areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
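The paper's custom marker and tracker are not detailed in this abstract, so the sketch below shows only the generic first step of marker-based guidance: locating a colored marker's pixel centroid with OpenCV. The HSV range and the color-blob marker design are assumptions for illustration:

```python
import cv2
import numpy as np

def find_marker_center(frame_bgr, lower_hsv=(40, 80, 80), upper_hsv=(80, 255, 255)):
    """Return the pixel centroid (x, y) of a colored landing marker in
    one frame, or None if no marker is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```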
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a non-invasive, real-time surgical optical navigation system with positioning capability for open surgical procedures. The design was based on the principle of near-infrared fluorescence molecular imaging, using in vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software technology. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a spectral camera with two-CCD optical sensor technology, and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When the near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor margins that the surgeon cannot find with the naked eye intra-operatively. Our system will effectively guide the surgeon in removing tumor tissue, significantly improving the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.
MARS PATHFINDER CAMERA TEST IN SAEF-2
NASA Technical Reports Server (NTRS)
1996-01-01
In the Spacecraft Assembly and Encapsulation Facility-2 (SAEF-2), workers from the Jet Propulsion Laboratory (JPL) are conducting a systems test of the imager for the Mars Pathfinder. The imager (the white and metallic cylindrical element close to the hand of the worker at left) is a specially designed camera featuring a stereo-imaging system with color capability provided by a set of selectable filters. It is mounted atop an extendable mast on the Pathfinder lander. Visible to the far left is the small rover which will be deployed from the lander to explore the Martian surface. Transmitting back to Earth images of the trail left by the rover will be one of the mission objectives for the imager. To the left of the worker standing near the imager is the mast for the low-gain antenna; the round high-gain antenna is to the right. Visible in the background is the cruise stage that will carry the Pathfinder on a direct trajectory to Mars. The Mars Pathfinder is one of two Mars-bound spacecraft slated for launch aboard Delta II expendable launch vehicles this year.
NASA Astrophysics Data System (ADS)
Chickadel, C. C.; Lindsay, R. W.; Clark, D.
2014-12-01
An uncooled thermal camera (microbolometer) and RGB camera were mounted in the tail section of a US Coast Guard HC-130 to observe sea ice, open water, and cloud tops through the open rear cargo doors during routine Arctic Domain Awareness (ADA) flights. Recent flights were conducted over the Beaufort Sea in June, July, and August of 2014, with flights planned for September and October. Thermal and visible images were collected at low altitude (100m) during times when the cargo doors were open and recorded high resolution information on ice floes, melt ponds, and surface temperature variability associated with the marginal ice zone (MIZ). These observations of sea ice conditions and surface water temperatures will be used to characterize floe size development and the temperature and albedo of ice ponds and leads. This information will allow for a detailed characterization of sea ice that can be used in process studies and for model evaluation, calibration of satellite remote sensing products, and initialization of sea ice prediction schemes.
Candidate cave entrances on Mars
Cushing, Glen E.
2012-01-01
This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.
2016-11-21
Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings comprised of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508
Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor
Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods. PMID:29143764
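A skeletal version of the detection front end can be sketched with OpenCV (Python): contrast-limited equalization to soften shadow boundaries, then straight-segment extraction, with HoughLinesP standing in for the paper's line segment detector; the fuzzy inference stage that scores illumination and shadow conditions is omitted:

```python
import cv2
import numpy as np

def lane_candidates(frame_bgr):
    """Return straight line segments (x1, y1, x2, y2) as lane-marking
    candidates from one road image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.createCLAHE(clipLimit=2.0).apply(gray)  # soften shadow contrast
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]
```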
Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie
2011-01-01
The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
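The thresholding logic can be sketched in a few lines (Python); the threshold values below are placeholders, not those of the EPIC algorithm:

```python
import numpy as np

def epic_style_cloud_mask(r_uv, r_vis, r_uv_thresh=0.25, r_vis_thresh=0.3,
                          ratio_lo=0.8, ratio_hi=1.2):
    """Flag a pixel as cloudy when reflected UV and visible signals are
    both high and their ratio is near one (clouds are spectrally flat,
    while most surfaces are not). All thresholds are illustrative."""
    ratio = r_uv / np.maximum(r_vis, 1e-6)
    return ((r_uv > r_uv_thresh) & (r_vis > r_vis_thresh)
            & (ratio > ratio_lo) & (ratio < ratio_hi))
```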
NASA Astrophysics Data System (ADS)
Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.
2017-12-01
Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques (e.g., eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (1 fallow, 4 cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.
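The "greenness and redness values" in such studies are commonly the green and red chromatic coordinates of the image; a minimal computation, assuming that convention, is:

```python
import numpy as np

def chromatic_coords(rgb):
    """Per-image greenness (gcc) and redness (rcc) indices from a camera
    frame of shape (H, W, 3), as used in phenology-camera studies."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    total = r + g + b + 1e-9
    gcc = float(np.mean(g / total))
    rcc = float(np.mean(r / total))
    return gcc, rcc
```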
Trade-off between TMA and RC configurations for JANUS camera
NASA Astrophysics Data System (ADS)
Greggio, D.; Magrin, D.; Munari, M.; Paolinetti, R.; Turella, A.; Zusi, M.; Cremonese, G.; Debei, S.; Della Corte, V.; Friso, E.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Mugnuolo, R.; Olivieri, A.; Palumbo, P.; Ragazzoni, R.; Schmitz, N.
2016-07-01
JANUS (Jovis Amorum Ac Natorum Undique Scrutator) is a high-resolution visible camera designed for the ESA space mission JUICE (Jupiter Icy moons Explorer). The main scientific goal of JANUS is to observe the surface of the Jupiter satellites Ganymede and Europa in order to characterize their physical and geological properties. During the design phases, we have proposed two possible optical configurations: a Three Mirror Anastigmat (TMA) and a Ritchey-Chrétien (RC) both matching the performance requirements. Here we describe the two optical solutions and compare their performance both in terms of achieved optical quality, sensitivity to misalignment and stray light performances.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
X-ray ‘ghost images’ could cut radiation doses
NASA Astrophysics Data System (ADS)
Chen, Sophia
2018-03-01
On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
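Computational ghost imaging is easy to demonstrate numerically: correlate the fluctuations of known random illumination patterns with the fluctuations of the single-pixel ("bucket") signal. The toy simulation below is a visible-light analogue, not a model of the x-ray system described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_px, n_patterns = 32, 5000

# Hidden object (a transmission mask) never imaged directly.
obj = np.zeros((n_px, n_px))
obj[8:24, 14:18] = 1.0

# Random illumination patterns and the bucket (total) signal behind the object.
patterns = rng.random((n_patterns, n_px, n_px))
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Ghost image: correlation of pattern fluctuations with bucket fluctuations.
ghost = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0),
                     axes=(0, 0)) / n_patterns
# `ghost` now shows the hidden rectangle, recovered without a pixelated camera.
```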
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable graphic array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision involves considerable effort unless extensive camera stabilization is used. But stabilization likewise entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can reliably be determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
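The homologous-point step can be sketched with OpenCV (Python); ORB is used here in place of the SURF operator named in the abstract (SURF sits in the non-free contrib module), and the bundle adjustment itself is not shown:

```python
import cv2

def homologous_points(img_a, img_b, n_features=2000):
    """Find corresponding points between two pre-corrected, overlapping
    line-image blocks (grayscale arrays), sorted by match quality."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]
```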
Differential effects of film on preschool children's behaviour dependent on editing pace.
Kostyrka-Allchorne, Katarzyna; Cooper, Nicholas R; Gossmann, Anna Maria; Barber, Katy J; Simpson, Andrew
2017-05-01
Evidence on how the pace of television and film editing affects children's behaviour and attention is inconclusive. We examined whether a fast-paced film affected how preschool-aged children interacted with toys. The study comprised 70 children (36 girls) aged two to four-and-a-half years who attended preschools in Essex, United Kingdom. The children were paired up and tested with either a fast- or a slow-paced film of a narrator reading a children's story. The fast-paced version had 102 camera cuts and 16 still images, and the slow-paced version had 22 camera cuts and four still images. Each dyad took part in two video-recorded free-play sessions, before and after they watched one of the specially edited four-minute films. The number of toys the children played with before and after the film sessions was recorded. Before they watched the films, the children's behaviour did not differ between the groups. However, after watching the film, the children in the fast-paced group shifted their attention between toys more frequently than the children who watched the slow-paced film. Even a brief exposure to differently paced films had an immediate effect on how the children interacted with their toys.
Europe's space camera unmasks a cosmic gamma-ray machine
NASA Astrophysics Data System (ADS)
1996-11-01
The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52.
Evading the glare of an adjacent star
The Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately, an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce brighter flashes when the electrons hit a phosphor screen. Since Hubble's launch in 1990, the Faint Object Camera has examined many different kinds of cosmic objects, from the moons of Jupiter to remote galaxies and quasars. When the space telescope's optics were corrected at the end of 1993, the Faint Object Camera immediately celebrated the event with the discovery of primeval helium in intergalactic gas. In their search for Pulsar 1055-52, the astronomers chose a near-ultraviolet filter to sharpen the Faint Object Camera's vision and reduce the adjacent star's huge advantage in intensity. In May 1996, the Hubble Space Telescope operators aimed at the spot that radio astronomers had indicated as the source of the radio pulsations of Pulsar 1055-52. The neutron star appeared precisely in the centre of the field of view, and it was clearly separated from the glare of the adjacent star. At magnitude 24.9, Pulsar 1055-52 was comfortably within the power of the Faint Object Camera, which can see stars 20 times fainter still. "The Faint Object Camera is the instrument of choice for looking for neutron stars," says Giovanni Bignami, speaking on behalf of the Italian team. "Whenever it points to a judiciously selected neutron star it detects the corresponding visible or ultraviolet light. The Faint Object Camera has now identified three neutron stars in that way, including Pulsar 1055-52, and it has examined a few that were first detected by other instruments."
Mysteries of the neutron stars
The importance of the new result can be gauged by the tally of only eight neutron stars seen so far at optical wavelengths, compared with about 760 known from their radio pulsations, and about 21 seen emitting X-rays. Since the first pulsar was detected by radio astronomers in Cambridge, England, nearly 30 years ago, theorists have come to recognize neutron stars as fantastic objects. They are veritable cosmic laboratories in which Nature reveals the behaviour of matter under extreme stress, just one step short of a black hole. A neutron star is created by the force of a supernova explosion in a large star, which crushes the star's core to an unimaginable density. A mass greater than the Sun's is squeezed into a ball no wider than a city. The gravity and magnetic fields are billions of times stronger than the Earth's. The neutron star revolves rapidly, which causes it to wink like a cosmic lighthouse as it swivels its magnetic poles towards and away from the Earth. Pulsar 1055-52 spins at five revolutions per second. At its formation in a supernova explosion, a neutron star is endowed with two main forms of energy. One is heat, at temperatures of millions of degrees, which the neutron star radiates mainly as X-rays, with only a small proportion emerging as visible light. The other power supply for the neutron star comes from its high rate of spin and a gradual slowing of the rotation. By a variety of processes involving the magnetic field and accelerated particles in the neutron star's vicinity, the spin energy of the neutron star is converted into radiation at many different wavelengths, from radio waves to gamma-rays. The exceptional gamma-ray intensity of Pulsar 1055-52 was first appreciated in observations by NASA's Compton Gamma Ray Observatory. The team in Milan recently used the Hubble Space Telescope to find the distance of the peculiar neutron star Geminga, which is not detectable by radio pulses but is a strong source of gamma-rays (see ESA Information Note 04-96, 28 March 1996). Pulsar 1055-52 is even more powerful in that respect. About 50 per cent of its radiant energy is gamma-rays, compared with 15 per cent from Geminga and 0.1 per cent from the famous Crab Pulsar, the first neutron star seen by visible light. Making the gamma-rays requires the acceleration of electrons through billions of volts. The magnetic environment of Pulsar 1055-52 fashions a natural gamma-ray machine of amazing power. The orientation of the neutron star's magnetic field with respect to the Earth may contribute to its brightness in gamma-rays. Geminga, Pulsar 1055-52 and another object, Pulsar 0656+14, make a trio that the Milanese astronomers call the Three Musketeers. All have been observed with the Faint Object Camera. They are isolated, elderly neutron stars, some hundreds of thousands of years old, contrasting with the 942-year-old Crab Pulsar, which is still surrounded by dispersing debris of a supernova seen by Chinese astronomers in the 11th Century. The mysteries of the neutron stars will keep astronomers busy for years to come, and the Faint Object Camera in the Hubble Space Telescope will remain the best instrument for spotting their faint visible light. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA). The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc.
(AURA) for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Maryland. Note to editors: An image is available of (i) PSR 1055-52 seen by ESA's Faint Object Camera in the Hubble Space Telescope, and (ii) the same region of the sky seen by the European Southern Observatory's New Technology Telescope, with the position of PSR 1055-52 indicated. The image is available on the World Wide Web at http://ecf.hq.eso.org/stecf-pubrel.html http://www.estec.esa.nl/spdwww/h2000/html/snlmain.htm
MISR Images Forest Fires and Hurricane
NASA Technical Reports Server (NTRS)
2000-01-01
These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer(MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.
In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length. When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov