Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
To fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of follow-on satellites, we evaluated the dynamic range by calculating statistics (maximum, minimum, mean, and standard deviation) of four images acquired simultaneously by the Gaofen-2 dual cameras over the Beijing area. These four statistics were then computed for each longitudinal overlap of PMS1 and PMS2 to evaluate the dynamic range consistency within each camera, and for each latitudinal overlap to evaluate the consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; however, the consistency within a single camera is better than that between the two cameras.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important photoelectric information acquisition technology: it provides higher dynamic range and more image detail, and better reflects the real environment's light and color information. Current methods that synthesize a high dynamic range image from differently exposed image sequences cannot adapt to dynamic scenes; they fail to handle moving targets, which produces ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, image sequences with different exposures are captured with a camera array, and a derivative optical flow method based on color gradient estimates the deviation between images so they can be aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the inter-image deviation, and is applied to generate a high dynamic range image. Experiments show that the proposed method effectively obtains high dynamic range images of dynamic scenes and achieves good results.
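The deviation-weighted fusion step can be sketched as follows; this is a minimal illustration of the idea, assuming a linear sensor response and precomputed per-pixel alignment deviations (the paper's actual weighting uses the inverse camera response function recovered from the data):

```python
import numpy as np

def fuse_hdr(images, exposures, deviations, sigma=0.1):
    """Merge multi-exposure images into an HDR radiance map.

    A minimal sketch of deviation-weighted fusion: each pixel's weight
    combines a hat function (trusting mid-range intensities) with a
    penalty on inter-image deviation, so misaligned (moving) pixels
    contribute less and ghosting is suppressed. `images` are normalized
    to [0, 1]; `deviations` holds per-pixel alignment error per image.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, dt, dev in zip(images, exposures, deviations):
        hat = 1.0 - np.abs(2.0 * img - 1.0)           # favor well-exposed pixels
        w = hat * np.exp(-(dev ** 2) / (sigma ** 2))  # penalize misalignment
        # a linear response is assumed here; an inverse CRF would go here
        num += w * img / dt
        den += w
    return num / np.maximum(den, 1e-8)
```

With zero deviation and a linear sensor, both exposures of the same scene point yield the same radiance estimate, so the fusion returns it exactly.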
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications
NASA Astrophysics Data System (ADS)
Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.
2005-08-01
A scientific camera system having high dynamic range designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.
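The dB figures quoted in these abstracts follow the standard 20·log10 ratio convention for sensor dynamic range (e.g. full-well capacity over noise floor); a quick sketch, where the noise floor of 1 LSB is an illustrative assumption:

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    """Dynamic range in decibels, using the 20*log10 ratio convention
    common for image sensors (e.g. full-well capacity / read noise)."""
    return 20.0 * math.log10(max_signal / noise_floor)

# A 16-bit ADC spans 65536 levels: about 96 dB of coding range,
# which is why a 16-bit converter comfortably covers an 82 dB sensor.
print(round(dynamic_range_db(65536, 1), 1))  # → 96.3
```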
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs, under various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: XGA-format resolution, highlight dynamic range of 1 EV, twilight dynamic range of 3.3 EV, lens distortion of 30%, and shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective security cameras, and help camera manufacturers improve facial identification capability.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
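A Debevec-style merge of the kind named here can be sketched in a few lines of software; this minimal version assumes an already-recovered response curve `g_inv` (a lookup table mapping 8-bit values to log exposure) and a simple hat weighting, not the hardware pipeline described in the paper:

```python
import numpy as np

def debevec_merge(images, exposure_times, g_inv):
    """Sketch of Debevec-Malik style HDR recovery: for each pixel,
    average the estimates ln E = g(Z) - ln(dt) over the exposure
    stack, weighted by a hat function that trusts mid-range values.
    `g_inv` is any monotone length-256 lookup table of g(Z)."""
    z = np.stack([img.astype(np.intp) for img in images])
    w = np.minimum(z, 255 - z).astype(np.float64) + 1e-3   # hat weights
    ln_e = g_inv[z] - np.log(np.asarray(exposure_times))[:, None, None]
    return np.exp((w * ln_e).sum(axis=0) / w.sum(axis=0))  # radiance map
```

For a perfectly linear camera, `g_inv = ln(Z)` and the two exposure estimates of a pixel's radiance nearly coincide, so the weighted average recovers the scene value.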
ColorChecker at the beach: dangers of sunburn and glare
NASA Astrophysics Data System (ADS)
McCann, John
2014-01-01
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
High Dynamic Range Imaging Using Multiple Exposures
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei
2017-06-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method; by comparison, it outperforms other methods in terms of imaging quality.
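As a point of reference for the display step, the classic Reinhard global tone-mapping operator can be sketched as follows (illustrative only; the paper's operator is local and additionally uses local features and histogram statistics):

```python
import numpy as np

def reinhard_global_tm(radiance, key=0.18):
    """Classic Reinhard global tone mapping, a simple stand-in for the
    paper's local operator: scale the radiance map by its log-average
    luminance, then compress with L / (1 + L)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(radiance + eps)))  # log-average luminance
    scaled = key * radiance / log_avg
    return scaled / (1.0 + scaled)  # display values in [0, 1)
```

The `L / (1 + L)` curve maps any positive radiance into the displayable range while keeping the output monotone in the input.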
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor's dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectroradiometer.
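At application time, the two-module pipeline reduces to a 3×3 matrix multiply plus an absolute scale; a minimal sketch, where the matrix `M` and luminance factor `k` are illustrative placeholders for the values the paper derives from its target-based characterization:

```python
import numpy as np

def camera_to_xyz(rgb, M, k):
    """Tele-colorimeter sketch: apply a 3x3 colorimetric
    characterization matrix M to linear camera RGB, then scale by an
    absolute-luminance factor k (cd/m^2 per unit Y) obtained from the
    luminance-estimation module. M and k are assumed known from
    calibration; any exposure normalization is assumed done upstream."""
    xyz = rgb @ M.T   # device RGB -> relative XYZ
    return k * xyz    # relative -> absolute XYZ in cd/m^2
```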
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits including increasingly high pixel counts and shrinking pixel sizes, nevertheless, they are also being hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. Recently invented is the Coded Access Optical Sensor (CAOS) Camera platform that works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Using for example the Texas Instruments (TI) Digital Micromirror Device (DMD) to engineer the CAOS camera platform, ushered in is a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
Nonlinear dynamic range transformation in visual communication channels.
Alter-Gartenberg, R
1996-01-01
The article evaluates nonlinear dynamic range transformation in the context of the end-to-end continuous-input/discrete-processing/continuous-display imaging process. Dynamic range transformation is required when (i) the wide dynamic range encountered in nature must be compressed into the relatively narrow dynamic range of the display, particularly for spatially varying irradiance (e.g., shadow); (ii) coarse quantization must be expanded to the wider dynamic range of the display; or (iii) a nonlinear tone scale transformation must compensate for the correction in the camera amplifier.
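Case (i), compressing a wide scene range into a narrow display range, is commonly realized with a power-law tone scale; a minimal sketch, where the choice of gamma = 2.2 is an illustrative assumption rather than a value from the article:

```python
import numpy as np

def gamma_compress(irradiance, gamma=2.2):
    """One common nonlinear dynamic range transformation: compress
    normalized scene irradiance in [0, 1] into display values with a
    power-law (gamma) tone scale, brightening shadows relative to a
    linear mapping."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / gamma)
```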
Vann, C.
1998-03-24
The Laser Pulse Sampler (LPS) measures temporal pulse shape without the problems of a streak camera. Unlike the streak camera, the laser pulse directly illuminates a camera in the LPS, i.e., no additional equipment or energy conversions are required. The LPS has several advantages over streak cameras. The dynamic range of the LPS is limited only by the range of its camera, which for a cooled camera can be as high as 16 bits, i.e., 65,536. The LPS costs less because there are fewer components, and those components can be mass produced. The LPS is easier to calibrate and maintain because there is only one energy conversion, i.e., photons to electrons, in the camera. 5 figs.
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Digital cameras have recently been improving rapidly. However, a captured image still differs from the scene perceived by the naked eye: photographs of wide-dynamic-range scenes contain blown-out highlights and crushed blacks, problems that hardly occur in human vision. These artifacts arise from the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS device, and the human visual system; the dynamic range of the captured image is narrower than that of the perceived scene. To solve this problem, we propose an automatic method for deciding an effective exposure range based on the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we erase pseudo-edges using a process that blends exposure values. The result is a pseudo wide dynamic range image obtained automatically.
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The powerful property of a high dynamic range (HDR) image acquisition is detailed by mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters will be discussed including the pixel design for the global shutter readout. The second part will give an outline on the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring enabling new and more flexible solutions compared to existing safety guards.
Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inanici, Mehlika; Galvin, Jim
2004-12-30
The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple-exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
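The pixel-to-luminance mapping that this kind of HDR luminance measurement relies on can be sketched as follows, using the Radiance-convention luminous-efficacy factor of 179 lm/W; the per-camera `calibration` factor is an assumption standing in for the laboratory calibration against a luminance meter:

```python
def hdr_luminance(r, g, b, calibration=1.0):
    """Luminance (cd/m^2) from linear HDR pixel values using the
    Radiance/Photosphere convention: photometric weighting of the RGB
    channels times the 179 lm/W luminous-efficacy constant, optionally
    scaled by a per-camera calibration factor."""
    return calibration * 179.0 * (0.2127 * r + 0.7151 * g + 0.0722 * b)
```

A neutral pixel with unit linear values maps to 179 cd/m² before the per-camera correction is applied.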
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses at the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the scene luminance more accurately; this compensates for the limitation of stitching approaches that achieve realism only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D
2016-11-01
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Even minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing and a tumbling oblique camera was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique dataset ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
Generation of high-dynamic range image from digital photo
NASA Astrophysics Data System (ADS)
Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han
2016-10-01
A number of modern applications, such as medical imaging, remote sensing satellite imaging, and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, obtaining an HDRI from an ordinary digital image requires camera calibration. This article proposes a camera calibration method that uses the clear sky as the standard light source, taking sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article describes the basic algorithms for recovering real luminance values from an ordinary digital image and the corresponding programmed implementation of those algorithms. Examples of HDRIs reconstructed from ordinary images illustrate the article.
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.
MacPhee, A. G.; Dymoke-Bradshaw, A. K. L.; Hares, J. D.; ...
2016-08-08
Here, we report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle-in-cell simulations were performed to identify the regions in the streak camera that contribute most to space charge blurring. Our simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Performance of Laser Megajoule’s x-ray streak camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.
2016-11-15
A prototype of a picosecond x-ray streak camera has been developed and tested by the Commissariat à l'Énergie Atomique et aux Énergies Alternatives to provide plasma-diagnostic support for the Laser Megajoule. We report on the measured performance of this streak camera, which almost fulfills the requirements: 50-μm spatial resolution over a 15-mm field in the photocathode plane, 17-ps temporal resolution on a 2-ns timebase, a detection threshold lower than 625 nJ/cm² in the 0.05-15 keV spectral range, and a dynamic range greater than 100.
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are making their way into the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
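As a small illustration of how full-well capacity and the noise floor set a sensor's dynamic range, a relation implied but not numerically given above, with example values that are typical rather than taken from the paper:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Sensor dynamic range in dB: ratio of full-well capacity to noise floor."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

def dynamic_range_stops(full_well_e, read_noise_e):
    """Same ratio expressed in photographic stops (powers of two)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative numbers, not from the paper: 20,000 e- full well, 5 e- read noise.
print(round(dynamic_range_db(20000, 5), 1))    # 72.0 dB
print(round(dynamic_range_stops(20000, 5), 1)) # 12.0 stops
```

Shrinking pixels typically lowers full-well capacity, which is why pixel count alone is a poor proxy for image quality.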
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic Range (HDR) scenes, whose contrast spans four or more orders of magnitude, on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast), and noise, resulting in a good balance between the visibility of details and the non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
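A minimal 1-D sketch of the enhancement idea, with illustrative parameters: the paper's actual multi-band filters and gain law are not specified here, so a single box-filter band and a simple decreasing gain function stand in for them. Weak details get the largest boost while strong edges are amplified less, which is one way to limit halo artifacts.

```python
# 1-D sketch: split the signal into a low-pass base and a high-frequency
# detail band, then amplify details with a gain that is a non-linear,
# decreasing function of local detail energy. Parameters are illustrative.

def box_blur(x, r=2):
    """Simple moving-average low-pass filter (the 'base' band)."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - r): i + r + 1]
        out.append(sum(window) / len(window))
    return out

def enhance(x, g_max=3.0, e0=10.0):
    """Boost details; gain falls from g_max (weak detail) toward 1 (strong)."""
    base = box_blur(x)
    detail = [xi - bi for xi, bi in zip(x, base)]
    out = []
    for bi, di in zip(base, detail):
        gain = 1.0 + (g_max - 1.0) / (1.0 + abs(di) / e0)
        out.append(bi + gain * di)
    return out

signal = [10, 10, 10, 12, 40, 40, 40, 41, 40]
print(enhance(signal))
```

A real implementation would apply this per frequency band of a 2-D pyramid decomposition; the gain shape is the essential part.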
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotating mirror camera for the combination of frame count, speed, resolution, and dynamic range. Rotating mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' (ISIS) by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotating mirror cameras, the ISIS architecture has the potential to approach their performance.
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a high dynamic range CMOS camera operating in rolling-shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper adopts CMOS + ADC + FPGA + Camera Link as the processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing-drive module, an image acquisition module, and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. The ISim simulator of ISE 14.6 is used for signal simulation. The imaging experiments show that the system delivers a 1280 × 1024 pixel resolution, a frame rate of 25 fps, and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the requirements of the specification.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
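The weighted averaging of radiance maps can be sketched per pixel as follows; the hat-shaped weight and the data are illustrative, not the paper's exact choices. Each frame contributes its estimated radiance DN/t, weighted by how well-exposed the pixel is, so clipped values carry no weight.

```python
# Minimal per-pixel sketch of fusing aligned exposures by weighted averaging.
# The hat weight trusts mid-range 8-bit values most; exposures are in seconds.

def weight(dn):
    """Hat weight: zero at 0 and 255, maximal near mid-gray."""
    return dn if dn <= 127 else 255 - dn

def fuse_pixel(dns, exposures):
    """Fuse one pixel from several exposures into a relative radiance value."""
    num = sum(weight(dn) * (dn / t) for dn, t in zip(dns, exposures))
    den = sum(weight(dn) for dn in dns)
    return num / den if den else 0.0

# Same scene point seen in a short and a long exposure (clipped in the long one).
print(round(fuse_pixel([64, 255], [1.0, 4.0]), 1))  # 64.0
```

The saturated sample (255) receives zero weight, so only the valid short-exposure reading determines the fused radiance.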
Forbes, Ruaridh; Makhija, Varun; Veyrinas, Kévin; Stolow, Albert; Lee, Jason W L; Burt, Michael; Brouard, Mark; Vallance, Claire; Wilkinson, Iain; Lausten, Rune; Hockett, Paul
2017-07-07
The Pixel-Imaging Mass Spectrometry (PImMS) camera allows for 3D charged particle imaging measurements, in which the particle time-of-flight is recorded along with (x, y) position. Coupling the PImMS camera to an ultrafast pump-probe velocity-map imaging spectroscopy apparatus therefore provides a route to time-resolved multi-mass ion imaging, with both high count rates and large dynamic range, thus allowing for rapid measurements of complex photofragmentation dynamics. Furthermore, the use of vacuum ultraviolet wavelengths for the probe pulse allows for an enhanced observation window for the study of excited state molecular dynamics in small polyatomic molecules having relatively high ionization potentials. Herein, preliminary time-resolved multi-mass imaging results from C2F3I photolysis are presented. The experiments utilized femtosecond VUV and UV (160.8 nm and 267 nm) pump and probe laser pulses in order to demonstrate and explore this new time-resolved experimental ion imaging configuration. The data indicate the depth and power of this measurement modality, with a range of photofragments readily observed, and many indications of complex underlying wavepacket dynamics on the excited state(s) prepared.
Prototype of microbolometer thermal infrared camera for forest fire detection from space
NASA Astrophysics Data System (ADS)
Guerin, Francois; Dantes, Didier; Bouzou, Nathalie; Chorier, Philippe; Bouchardy, Anne-Marie; Rollin, Joël.
2017-11-01
The contribution of the thermal infrared (TIR) camera to the Earth observation FUEGO mission is to discriminate clouds and smoke, to reject false alarms, and to monitor forest fires. Consequently, the camera needs a large dynamic range of detectable radiances. A small volume, low mass, and low power are required by the small FUEGO payload. These specifications can be attractive for other similar missions.
High dynamic spectroscopy using a digital micromirror device and periodic shadowing.
Kristensson, Elias; Ehn, Andreas; Berrocal, Edouard
2017-01-09
We present an optical solution called DMD-PS to boost the dynamic range of 2D imaging spectroscopic measurements up to 22 bits by incorporating a digital micromirror device (DMD) prior to detection, in combination with the periodic shadowing (PS) approach. In contrast to high dynamic range (HDR) imaging, where the dynamic range is increased by recording several images at different exposure times, the current approach has the potential of improving the dynamic range from a single exposure and without saturation of the CCD sensor. In the procedure, the spectrum is imaged onto the DMD, which selectively reduces the reflection from the intense spectral lines, allowing the signal from the weaker lines to be increased by a factor of 28 via longer exposure times, higher camera gains, or increased laser power. This manipulation of the spectrum can either be based on a priori knowledge of the spectrum or on a calibration measurement that first senses the intensity distribution. The resulting benefits in detection sensitivity come, however, at the cost of strong generation of interfering stray light. To solve this issue, the periodic shadowing technique, which is based on spatial light modulation, is also employed. In this proof-of-concept article we describe the full methodology of DMD-PS and demonstrate, using the calibration-based concept, an improvement in dynamic range by a factor of ~100 over conventional imaging spectroscopy. The dynamic range of the presented approach will directly benefit from future technological development of DMDs and camera sensors.
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, the paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting additional rows of image sensors, thereby taking more than one picture with different exposures. A new HDR image is then achieved by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increases by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
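A quick check of the reported figure, under the assumption (not stated in the abstract) that the dynamic-range extension equals the ratio of the two effective exposures: a 26.02 dB increase corresponds to an exposure ratio of exactly 20.

```python
import math

# Back-of-the-envelope: DR gain in dB from the ratio between the two
# effective exposures that are fused. An exposure ratio of 20 gives the
# paper's reported 26.02 dB.

def dr_increase_db(exposure_ratio):
    return 20.0 * math.log10(exposure_ratio)

print(round(dr_increase_db(20), 2))  # 26.02
```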
Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...
2017-08-17
Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white light source based around the use of self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude. We attribute this to the dynamic response of the phosphor being lower.
Use of the Polarized Radiance Distribution Camera System in the RADYO Program
2011-01-28
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosing camera performance, in arriving at a preflight imaging strategy, and in revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing-site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Pulsed spatial phase-shifting digital shearography based on a micropolarizer camera
NASA Astrophysics Data System (ADS)
Aranchuk, Vyacheslav; Lal, Amit K.; Hess, Cecil F.; Trolinger, James Davis; Scott, Eddie
2018-02-01
We developed a pulsed digital shearography system that utilizes the spatial phase-shifting technique. The system employs a commercial micropolarizer camera and a double pulse laser, which allows for instantaneous phase measurements. The system can measure dynamic deformation of objects as large as 1 m at a 2-m distance during the time between two laser pulses that range from 30 μs to 30 ms. The ability of the system to measure dynamic deformation was demonstrated by obtaining phase wrapped and unwrapped shearograms of a vibrating object.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Performance measurement of commercial electronic still picture cameras
NASA Astrophysics Data System (ADS)
Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te
1998-06-01
Commercial electronic still-picture cameras need a low-cost, systematic method for evaluating performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed-pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal from a PC camera using a Sony interlaced CCD array as the image sensor are then presented.
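The fixed-pattern-noise figure can be illustrated with a toy PSNR computation on a nominally flat test frame; the data and the exact normalization are illustrative, not taken from the paper.

```python
import math

# Toy PSNR of a nominally flat 8-bit frame against its own mean level:
# residual pixel-to-pixel variation is treated as fixed-pattern noise.

def psnr_flat_field(pixels, peak=255.0):
    mean = sum(pixels) / len(pixels)
    mse = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

flat = [128, 130, 127, 129, 128, 131, 126, 129]  # illustrative flat-field patch
print(round(psnr_flat_field(flat), 1))  # 44.6
```

A perfectly uniform frame gives infinite PSNR; larger pixel-to-pixel spread lowers the figure.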
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.
2016-11-15
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle-in-cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system
NASA Astrophysics Data System (ADS)
Vítek, S.
2017-07-01
The purpose of this experiment was to measure light pollution in Prague, the capital of the Czech Republic. The measuring instrument is a calibrated consumer-level digital single-lens reflex camera with an IR-cut filter; the paper therefore reports results of measuring and monitoring light pollution in the 390-700 nm wavelength range, which most affects visual-range astronomy. Combining frames of different exposure times taken with the digital camera coupled with a fish-eye lens allows the creation of high dynamic range images containing meaningful values, so such a system can provide absolute values of the sky brightness.
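Combining frames of different exposure times into one value per pixel can be sketched as follows; the clipping thresholds, the fallback rule, and the data are illustrative, not the paper's calibration procedure.

```python
# Per-pixel sketch: keep each frame where the pixel is neither saturated nor
# underexposed, and average the exposure-normalized values DN / t.

def merge_exposures(dns, times, lo=10, hi=245):
    """Relative radiance from several exposures of the same pixel."""
    usable = [(dn, t) for dn, t in zip(dns, times) if lo <= dn <= hi]
    if not usable:  # all frames clipped: fall back to the shortest exposure
        t = min(times)
        usable = [(dns[times.index(t)], t)]
    return sum(dn / t for dn, t in usable) / len(usable)

# Bright sky pixel: saturated at 1 s, valid at 1/100 s.
print(merge_exposures([255, 120], [1.0, 0.01]))  # 12000.0
```

Absolute sky brightness then follows by scaling these relative values with the camera's radiometric calibration constant.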
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig
2015-01-01
Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
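The "simple algorithm" aggregating volunteer answers is not specified in detail above; a plurality-vote sketch with an agreement measure might look like this (species names are illustrative).

```python
from collections import Counter

# Plurality-vote consensus: take the most common species among volunteer
# classifications of one image, plus the fraction of volunteers who agreed.

def consensus(classifications):
    counts = Counter(classifications)
    species, votes = counts.most_common(1)[0]
    return species, votes / len(classifications)

answers = ["wildebeest", "wildebeest", "buffalo", "wildebeest", "zebra"]
print(consensus(answers))  # ('wildebeest', 0.6)
```

The agreement fraction gives downstream users a per-image confidence measure alongside the consensus label.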
Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.
Song, Kai-Tai; Tai, Jen-Chao
2006-10-01
Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
Infrared Camera Diagnostic for Heat Flux Measurements on NSTX
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. Mastrovito; R. Maingi; H.W. Kugel
2003-03-25
An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel, operating in the 7-13 µm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 degrees C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported.
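The classic 1-D conduction step can be sketched with a Cook-Felderman-style discretisation for a semi-infinite solid; this is an assumed form (the paper's exact model is not given), and the graphite material constants are illustrative round numbers.

```python
import math

# Hedged sketch: surface heat flux from a uniformly sampled surface-
# temperature history T(t), assuming 1-D conduction into a semi-infinite
# solid (Cook-Felderman discretisation). Graphite constants are illustrative.

def heat_flux(temps, dt, k=100.0, rho=1700.0, cp=710.0):
    """Heat flux (W/m^2) at the last sample of the temperature trace."""
    coeff = 2.0 * math.sqrt(k * rho * cp / math.pi)
    n = len(temps) - 1
    q = 0.0
    for i in range(1, n + 1):
        q += (temps[i] - temps[i - 1]) / (
            math.sqrt((n - i) * dt) + math.sqrt((n - i + 1) * dt))
    return coeff * q

# Linearly rising surface temperature over 1 s, sampled at the 30 Hz camera rate.
trace = [20.0 + 5.0 * i / 30 for i in range(31)]
print(round(heat_flux(trace, 1.0 / 30.0)))
```

A constant temperature trace yields zero flux, and a rising trace yields a positive flux, as expected physically.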
Uncooled radiometric camera performance
NASA Astrophysics Data System (ADS)
Meyer, Bill; Hoelter, T.
1998-07-01
Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions for applications currently using traditional photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have made possible highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended-range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
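The stated accuracy, "4 degrees Celsius or 4% of the measured value, whichever is greater", reduces to a simple tolerance function: the fixed term dominates below 100 degrees Celsius and the percentage term above it.

```python
# The ExplorIR accuracy spec as a tolerance function: max of the fixed 4 C
# term and 4% of the measured scene temperature.

def measurement_tolerance(scene_temp_c):
    """Allowed measurement error in degrees Celsius at a given scene temperature."""
    return max(4.0, 0.04 * abs(scene_temp_c))

print(measurement_tolerance(50))   # 4.0
print(measurement_tolerance(300))  # 12.0
```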
Displacement and deformation measurement for large structures by camera network
NASA Astrophysics Data System (ADS)
Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu
2014-03-01
A displacement and deformation measurement method for large structures based on a series-parallel camera network is presented. Taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel camera network is designed, and the displacement and deformation measurement method using this network is studied. The movement range of the crane body is small, while that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body, and the deformation of the arm are measured. Compared with a pure series or parallel camera network, the designed series-parallel network can measure not only the movement and displacement of a large structure but also the relative movement and deformation of selected parts of the structure with a relatively simple optical measurement system.
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake will result in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit the application. At present, the widely used registration based on the Scale-Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without the ghost effect, we propose an efficient low dynamic range (LDR) image capturing approach and a registration method based on Oriented FAST and Rotated BRIEF (ORB) and histogram equalization, which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without the ghost effect by registering and fusing four multi-exposure images.
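The histogram-equalization step used to suppress illumination differences before ORB matching can be sketched in pure Python on a flat list of 8-bit pixels; the paper presumably operates on full images, so this is a minimal illustration of the transform itself.

```python
# Minimal grayscale histogram equalization: map pixel values through the
# normalized cumulative histogram so a low-contrast patch spans the full range.

def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to spread
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 16]  # low-contrast, under-exposed patch
print(equalize(dark))  # [0, 0, 85, 85, 170, 170, 255, 255]
```

After equalization, differently exposed frames of the same scene have comparable intensity distributions, which makes binary descriptors such as ORB far more repeatable across the exposure set.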
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation for changing from CCD to CMOS sensor technology in the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging; it was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, the first digital aerial mapping camera to use a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a 391-megapixel CMOS sensor is used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces.
The ensuing increase in visibility in surveillance video and intelligence imagery will enhance the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which means low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we select the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures: the descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
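The RANSAC match-rejection step can be sketched with a toy numpy example (an illustration of the principle only: real pipelines estimate a full homography from ORB matches, not a plain translation, and this function name is hypothetical).

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation mapping src -> dst point matches while
    rejecting outliers, RANSAC-style. A toy sketch of the outlier-rejection
    idea described above. src, dst: (N, 2) arrays of matched coordinates."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                   # matches consistent with t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on the consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Feeding in matches with a known shift plus a few gross mismatches recovers the shift and flags the mismatches as outliers.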
Large format geiger-mode avalanche photodiode LADAR camera
NASA Astrophysics Data System (ADS)
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison
2013-05-01
Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWaP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10-mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in2/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey large areas more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate improves by a factor of 10. With a programmable bin size from 0.3 ps to 0.5 ns and a 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of the Spectrolab 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short-pixel protection, are implemented in this camera.
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.
2011-01-01
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
Application of low-noise CID imagers in scientific instrumentation cameras
NASA Astrophysics Data System (ADS)
Carbone, Joseph; Hutton, J.; Arnold, Frank S.; Zarnowski, Jeffrey J.; Vangorden, Steven; Pilon, Michael J.; Wadsworth, Mark V.
1991-07-01
CIDTEC has developed a PC-based instrumentation camera incorporating a preamplifier-per-row CID imager and a microprocessor/LCA camera controller. The camera takes advantage of CID X-Y addressability to randomly read individual pixels and potentially overlapping pixel subsets in true nondestructive (NDRO) as well as destructive readout modes. Using an oxynitride-fabricated CID and the NDRO readout technique, pixel full well and noise levels of approximately 1×10^6 and 40 electrons, respectively, were measured. Data taken from test structures indicate that noise levels (which appear to be 1/f limited) can be reduced by a factor of two by eliminating the nitride under the preamplifier gate. Due to its software programmability, versatile readout capabilities, wide dynamic range, and extended UV/IR capability, this camera appears to be ideally suited for use in spectroscopy and other scientific applications.
Adaptive Optics For Imaging Bright Objects Next To Dim Ones
NASA Technical Reports Server (NTRS)
Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien
1996-01-01
Adaptive optics used in imaging optical systems, according to proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). Designed to alter wavefronts to correct for effects of scattering of light from small bumps on imaging optics. Original intended application of concept in advanced camera installed on Hubble Space Telescope for imaging of such phenomena as large planets near stars other than Sun. Also applicable to other high-quality telescopes and cameras.
Sellers and Fossum on the end of the OBSS during EVA1 on STS-121 / Expedition 13 joint operations
2006-07-08
STS121-323-011 (8 July 2006) --- Astronauts Piers J. Sellers and Michael E. Fossum, STS-121 mission specialists, work in tandem on Space Shuttle Discovery's Remote Manipulator System/Orbiter Boom Sensor System (RMS/OBSS) during the mission's first scheduled session of extravehicular activity (EVA). Also visible on the OBSS are the Laser Dynamic Range Imager (LDRI), Intensified Television Camera (ITVC) and Laser Camera System (LCS).
Formulation of image quality prediction criteria for the Viking lander camera
NASA Technical Reports Server (NTRS)
Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.
1973-01-01
Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with the surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable camera dynamic range setting is also considered.
Electron-tracking Compton gamma-ray camera for small animal and phantom imaging
NASA Astrophysics Data System (ADS)
Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru
2010-11-01
We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). We therefore propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
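The role the CRF plays in HDR reconstruction can be sketched as follows, assuming for illustration a simple power-law response (real CRF estimation, e.g. Debevec-Malik, solves for the curve from the data itself; the function name and gamma value here are assumptions):

```python
import numpy as np

def merge_radiance(images, exposures, gamma=2.2):
    """Recover a relative radiance map from differently exposed frames by
    inverting an assumed power-law CRF, dividing by exposure time, and
    averaging with a hat weight that trusts mid-tone pixels most.
    images: list of arrays in [0, 1]; exposures: exposure times in seconds."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposures):
        irr = img.astype(np.float64) ** gamma / t  # inverse CRF, per unit time
        w = 1.0 - np.abs(2.0 * img - 1.0)          # hat weight: mid-tones dominate
        num += w * irr
        den += w
    return num / np.maximum(den, 1e-12)
```

On synthetic unclipped data generated with the same response curve, the merge recovers the original radiance exactly, which is the consistency check such algorithms rely on.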
Miniaturized GPS/MEMS IMU integrated board
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2012-01-01
This invention documents the efforts on the research and development of a miniaturized GPS/MEMS IMU integrated navigation system. A miniaturized GPS/MEMS IMU integrated navigation system is presented; Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure the range and range rate which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne platform, owing to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection fidelity and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO.
The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
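The apparent image motion mentioned above has a standard first-order estimate: ground distance covered during the exposure, expressed in pixels via the ground sample distance. A minimal sketch under simplifying assumptions (nadir view, no angular rates; the function name and example numbers are illustrative, not values from the study):

```python
def apparent_image_motion_px(ground_speed_m_s, shutter_s, gsd_m):
    """First-order apparent image motion (AIM) blur for a forward-moving
    platform, in pixels: distance travelled during the exposure divided by
    the ground sample distance (GSD)."""
    return ground_speed_m_s * shutter_s / gsd_m
```

For example, a platform at 50 m/s with a 1/1000 s shutter and a 5 cm GSD smears the image by one pixel, a common rule-of-thumb ceiling for blur.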
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections be accurate. The current state of the field does not account well for the benefits and costs of different correction algorithms, and ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration, and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting a web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. These experiments show the importance of the dynamic range of the camera and of the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated.
The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras, to a certain extent, for static experiments, and there is much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.
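The TDI principle can be sketched with a toy numpy model (an idealized illustration, not the authors' hardware): charge is shifted through the sensor stages in step with the image motion, so each output line accumulates `stages` exposures of the same scene line, multiplying the effective exposure without motion blur.

```python
import numpy as np

def tdi_readout(scene_lines, stages):
    """Toy TDI simulation with perfect synchronization and no noise.
    scene_lines: (N, W) array, one line of the moving scene per clock tick.
    Each charge packet rides along with its scene line through `stages`
    sensor rows, so valid output lines are `stages` times brighter than a
    single-line exposure."""
    N, W = scene_lines.shape
    register = np.zeros((stages, W))       # charge held in each TDI stage
    out = np.zeros((N, W))
    for k in range(N):
        # at clock k, stage s sits over scene line k - s; integrate it
        for s in range(min(stages, k + 1)):
            register[s] += scene_lines[k - s]
        out[k] = register[-1]              # read out the last stage
        register[1:] = register[:-1].copy()  # shift charge with the motion
        register[0] = 0.0
    return out
```

After the pipeline fills (the first `stages - 1` outputs are partial), each output line equals `stages` times the corresponding scene line, which is the signal-to-noise benefit TDI trades against synchronization accuracy.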
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at any instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure at a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion focuses mainly on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the efforts of researchers to improve the measurement capabilities of interferometry-based techniques to meet the requirements of industrial applications. PMID:24963503
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
Evaluation of a gamma camera system for the RITS-6 accelerator using the self-magnetic pinch diode
NASA Astrophysics Data System (ADS)
Webb, Timothy J.; Kiefer, Mark L.; Gignac, Raymond; Baker, Stuart A.
2015-08-01
The self-magnetic pinch (SMP) diode is an intense radiographic source fielded on the Radiographic Integrated Test Stand (RITS-6) accelerator at Sandia National Laboratories in Albuquerque, NM. The accelerator is an inductive voltage adder (IVA) that can operate from 2-10 MV with currents up to 160 kA (at 7 MV). The SMP diode consists of an annular cathode separated from a flat anode, holding the bremsstrahlung conversion target, by a vacuum gap. Until recently the primary imaging diagnostic utilized image plates (storage phosphors), which have generally low DQE at these photon energies, along with other problems. The benefits of using image plates include a high dynamic range, good spatial resolution, and ease of use. A scintillator-based X-ray imaging system, or "gamma camera," has been fielded in front of RITS and the SMP diode and has provided vastly superior images in terms of signal-to-noise, with similar resolution and acceptable dynamic range.
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments, and there is much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, and the limitations of the system, are explained in the paper.
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, the use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.
Pham, Quang Duc; Hayasaki, Yoshio
2015-01-01
We demonstrate an optical frequency comb profilometer with a single-pixel camera that measures the position and profile of an object's surface over a range extending far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm; the axial dynamic range was therefore increased to 0.87×10^5. Experiments and computer simulations showed that the improvement derives from the higher modulation contrast of digital micromirror devices. The frame rate was also increased to 20 Hz.
Soft x-ray streak camera for laser fusion applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stradling, G.L.
This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction to laser fusion and laser fusion diagnostics is presented, and the need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained, and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
NASA Astrophysics Data System (ADS)
Sun, Jiwen; Wei, Ling; Fu, Danying
2002-01-01
The camera features high resolution and a wide swath. To ensure that its high optical precision survives the rigorous dynamic loads of launch, the structure must have high rigidity, so a careful study of the dynamic behavior of the camera structure was performed. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamic analysis of the camera was accomplished with the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analysis of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, to confirm the most reasonable general model; 2) modal analysis of the camera for several cases, yielding the natural frequencies and mode shapes and confirming the rationality of the structural design; 3) static analysis of the camera under self-gravity and overloads, giving the corresponding deformation and stress distributions; 4) response calculation for sinusoidal vibration, giving the response curves and the maximum acceleration responses at the corresponding frequencies. The modeling and analysis technique is accurate and efficient. The dynamic design and engineering optimization of the critical structure of the camera are discussed as fundamental technology for the design of forthcoming space optical instruments.
Soft X-ray streak camera for laser fusion applications
NASA Astrophysics Data System (ADS)
Stradling, G. L.
1981-04-01
The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development are reviewed, along with laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained, and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as earlier phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras: features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid in presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements apply to the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space: some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the three-times-greater data volume per image; the same has applied to the three-times-greater memory requirement. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases image analysis on color images can in fact be easier and faster than on a similar gray-level image because of the greater information content per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 μs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera is described, with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddu, S; Sun, B; Grantham, K
2016-06-15
Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton beam pulses (PBPs) at ~504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane, with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal-to-noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ~5000 images to extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton beam range (PBR), the width of the Spread-Out Bragg Peak (MOD), and the distal…
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry; there are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia; the purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors, and the present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source whose brightness varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
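Once the response curve (measured output versus known input brightness) has been tabulated, recovering brightness from a measured signal is a curve inversion. A minimal sketch with hypothetical calibration points (not the authors' software):

```python
import numpy as np

# Hypothetical calibration table: known source brightness (set by the
# rotating variable neutral-density filter) vs. integrated frame signal,
# which rolls off nonlinearly as the sensor approaches saturation.
brightness = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
signal = np.array([0.0, 0.25, 0.45, 0.72, 0.88, 0.96, 1.0])

def to_brightness(meas):
    """Invert the monotone response curve: map a measured signal back
    to input brightness by interpolating between calibration points."""
    return np.interp(meas, signal, brightness)
```

Because both arrays are monotone, the inversion is unambiguous; a real pipeline would apply it frame by frame to the integrated signal of each video frame.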
Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teruya, A. T.; Palmer, N. E.; Schneider, M. B.
2013-09-01
The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large-format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.
Digital amateur observations of Venus at 0.9μm
NASA Astrophysics Data System (ADS)
Kardasis, E.
2017-09-01
Venus's atmosphere is extremely dynamic, though it is very difficult to observe any features on it in the visible and even in the near-IR range. Digital observations with planetary cameras in recent years routinely produce high-quality images, especially in the near-infrared (0.7-1 μm), since IR wavelengths are less influenced by Earth's atmosphere and Venus's atmosphere is partially transparent in this spectral region. Continuous observations over a few hours may track dark atmospheric features on the dayside and determine their motion. In this work we present such observations and some dark-feature motion measurements at 0.9 μm. Ground-based observations at this wavelength are rare and are complementary to in situ observations by JAXA's Akatsuki orbiter, which also studies the atmospheric dynamics of Venus in this band with the IR1 camera.
HDR imaging and color constancy: two sides of the same coin?
NASA Astrophysics Data System (ADS)
McCann, John J.
2011-01-01
At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similar principal roles in both HDR imaging and color constancy?
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high dynamic range (HDR) scene using a low dynamic range camera. A weighted-sum-based image fusion (IF) algorithm is proposed to express an HDR scene with a single high-quality image. The method has three parts. First, two image features, gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, with the source image used as the guidance image; this reduces noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which provides accurate weight maps for IF. Compared with traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
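The fusion pipeline described above can be sketched for grayscale exposures as follows. This is a minimal illustration using the standard box-filter guided filter; the paper's exact feature weights, filter radius, and parameters are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Classic box-filter guided filter: smooth the weight map p while
    preserving the edges of the guidance image I."""
    mean_I = uniform_filter(I, 2 * r + 1)
    mean_p = uniform_filter(p, 2 * r + 1)
    corr_Ip = uniform_filter(I * p, 2 * r + 1)
    corr_II = uniform_filter(I * I, 2 * r + 1)
    var_I = corr_II - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, 2 * r + 1) * I + uniform_filter(b, 2 * r + 1)

def fuse(images, sigma=0.2):
    """Fuse exposures: gradient and well-exposedness weights, refined
    with each source image as its own guidance image."""
    weights = []
    for img in images:
        gy, gx = np.gradient(img)
        gradient = np.abs(gx) + np.abs(gy)
        exposedness = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
        w = guided_filter(img, gradient * exposedness + 1e-6)
        weights.append(np.clip(w, 1e-6, None))
    weights = np.array(weights)
    weights /= weights.sum(axis=0)  # per-pixel normalization
    return (weights * np.array(images)).sum(axis=0)

# Two synthetic exposures of the same ramp scene: one dark, one bright.
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
under, over = np.clip(scene * 0.5, 0, 1), np.clip(scene * 1.5, 0, 1)
fused = fuse([under, over])
```

Because the normalized weights form a per-pixel convex combination, the fused result stays within the range of the source images, with no segmentation or response-curve calibration needed.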
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot positions and moves to a target red ball through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and experimented with. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments, with the Hough circle method used to identify the ball and the HSV color space model used to identify red color. Our results showed that the dual camera ranging method reduced the variance of error in ball tracking from 0.68 to 0.20.
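The abstract does not reproduce the ranging geometry; for parallel-axis cameras, dual camera ranging reduces to the standard triangulation depth = f·B/d, sketched here with hypothetical focal length, baseline, and disparity values:

```python
def stereo_range(focal_px, baseline_m, x_left_px, x_right_px):
    """Standard parallel-axis stereo triangulation: depth = f * B / d,
    where d is the horizontal disparity of the target between the two
    camera images (in pixels)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity

# Hypothetical setup: 700 px focal length, 10 cm baseline, 35 px disparity.
depth_m = stereo_range(700, 0.10, 400, 365)  # 2.0 m
```

The ball center found by the Hough circle detection in each image supplies the two x-coordinates; depth error shrinks as the baseline or focal length grows, which is the usual motivation for a dual-camera setup over single-camera ranging.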
Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Frascati, Joe; Driggers, Ronald
2018-04-01
Panoramic imaging is inherently wide field of view, while high-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial-off-the-shelf (COTS) low F-number lens. This approach results in low component cost and straightforward integration with pre-calibrated, commercially available cameras and lenses.
High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking
NASA Astrophysics Data System (ADS)
Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.
2016-12-01
The NASA Ames Sunphotometer-Satellite Group, the DOE PNNL Atmospheric Sciences and Global Change Division, and NASA Goddard's AERONET (AErosol RObotic NETwork) team recently collaborated on the development of a new airborne sunphotometry instrument that provides information on gases and aerosols extending far beyond what can be derived from discrete-channel direct-beam measurements, while preserving or enhancing many of the desirable AATS features (e.g., compactness, versatility, automation, reliability). The enhanced instrument combines the sun-tracking ability of the current 14-channel NASA Ames AATS-14 with the sky-scanning ability of the ground-based AERONET Sun/sky photometers, while extending both AATS-14 and AERONET capabilities by providing full spectral information from the UV (350 nm) to the SWIR (1,700 nm). Strengths of this measurement approach include many more wavelengths (isolated from gas absorption features) that may be used to characterize aerosols, and detailed (oversampled) measurements of the absorption features of specific gas constituents. The Sky Scanning Sun Tracking Airborne Radiometer (3STAR) replicates the radiometer functionality of the AATS-14 instrument but incorporates modern COTS technologies for all instrument subsystems. The 19-channel radiometer bundle design is borrowed from a commercial water-column radiance instrument manufactured by Biospherical Instruments of San Diego, California (Morrow and Hooker) and developed using NASA funds under the Small Business Innovative Research (SBIR) program. The 3STAR design also incorporates the latest in robotic motor technology, embodied in rotary actuators from Oriental Motor Corp. with better than 15 arc seconds of positioning accuracy. The control system was designed, tested, and simulated using a hybrid-dynamical modeling methodology.
The design also replaces the classic quadrant detector tracking sensor with a wide dynamic range camera that provides a high precision solar position tracking signal as well as an image of the sky in the 45° field of view around the solar axis, which can be of great assistance in flagging data for cloud effects or other factors that might impact data quality.
NASA Astrophysics Data System (ADS)
Brauchle, Joerg; Berger, Ralf; Hein, Daniel; Bucher, Tilman
2017-04-01
The DLR Institute of Optical Sensor Systems has developed the MACS-Himalaya, a custom-built Modular Aerial Camera System specifically designed for the extreme geometric (steep slopes) and radiometric (high contrast) conditions of high mountain areas. It has an overall field of view of 116° across-track, consisting of a nadir and two oblique-looking RGB camera heads and a fourth nadir-looking near-infrared camera. This design provides the capability to fly along narrow valleys and simultaneously cover ground and steep valley-flank topography with similar ground resolution. To compensate for extreme contrasts between fresh snow and dark shadows at high altitudes, a High Dynamic Range (HDR) mode was implemented, which typically takes a sequence of 3 images with graded integration times, each covering 12 bit radiometric depth, resulting in a total dynamic range of 15-16 bit. This enables dense image matching and interpretation for sunlit snow and glaciers as well as for dark shaded rock faces in the same scene. Small and lightweight industrial-grade camera heads are used and operated at a rate of 3.3 frames per second with 3-step HDR, which is sufficient to achieve a longitudinal overlap of approximately 90% per exposure time at 1,000 m above ground at a velocity of 180 km/h. Direct georeferencing and multitemporal monitoring without the need for ground control points is possible thanks to a high-end GPS/INS system, a stable calibrated inner geometry of the camera heads, and a fully photogrammetric workflow at DLR. In 2014 a survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried in a wingpod by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at altitudes up to 9,200 m.
Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced in regions and outcrops normally inaccessible to aerial imagery. These data are used in the fields of natural hazards, geomorphology and glaciology (see Thompson et al., CR4.3). In the presentation the camera system is introduced and examples and applications from the Nepal campaign are given.
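Merging a 3-step bracket of linear 12-bit frames into a single high-dynamic-range radiance map can be sketched as a weighted average (a minimal Debevec-style illustration with hypothetical values, assuming a linear sensor response; not DLR's actual processing chain):

```python
import numpy as np

def merge_hdr(frames, exposure_times, sat=4095):
    """Merge linear 12-bit frames taken with graded integration times
    into one radiance map: average frame/exposure estimates while
    down-weighting near-saturated and near-dark pixels (hat weight)."""
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for f, t in zip(frames, exposure_times):
        f = f.astype(np.float64)
        w = 1.0 - np.abs(2.0 * f / sat - 1.0)  # peaks at mid-scale
        num += w * f / t
        den += w
    return num / np.maximum(den, 1e-9)

# Hypothetical 3-step bracket: one radiance seen at 1x, 4x, 16x exposure.
radiance = 100.0
times = [1.0, 4.0, 16.0]
frames = [np.full((4, 4), min(radiance * t, 4095.0)) for t in times]
hdr = merge_hdr(frames, times)
```

Each frame contributes most where it is well exposed, which is how three 12-bit exposures can jointly cover a 15-16 bit range.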
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here, based on a glow discharge detector (GDD) and a silicon photodiode or simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when read out by measuring its electrical current. The GDD is very inexpensive and is advantageous for its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation; this is easily solved, and economically feasible, using a GDD array, which enables information on distance and magnitude to be acquired from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation or measuring the time of flight (TOF).
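The distance measurement mentioned at the end can follow either scheme; the standard textbook relations are sketched below with hypothetical sweep parameters (not the authors' system):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Range from an FMCW beat frequency: the round-trip delay equals
    beat / slope, where slope = bandwidth / sweep time; range is half
    the delay times c."""
    slope = sweep_bw_hz / sweep_time_s
    return C * beat_hz / (2.0 * slope)

def tof_range(round_trip_s):
    """Direct time-of-flight range: half the round-trip time times c."""
    return C * round_trip_s / 2.0

# Hypothetical sweep: 1 GHz bandwidth over 1 ms; a 10 kHz beat -> ~1.5 m.
r = fmcw_range(10e3, 1e9, 1e-3)
```

Per-pixel readout of either quantity across the GDD array is what would allow the simultaneous distance-and-magnitude acquisition the abstract describes.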
NASA Astrophysics Data System (ADS)
Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James
2017-01-01
A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
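The linearity of raw imagery with exposure can be checked with a simple least-squares fit, sketched here with hypothetical measurements (not the paper's data):

```python
import numpy as np

def linearity_check(exposures, mean_dn):
    """Least-squares line fit of mean dark-corrected DN versus exposure
    time; returns (gain, offset, r_squared) as a linearity metric."""
    A = np.vstack([exposures, np.ones_like(exposures)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, mean_dn, rcond=None)
    pred = gain * exposures + offset
    ss_res = np.sum((mean_dn - pred) ** 2)
    ss_tot = np.sum((mean_dn - mean_dn.mean()) ** 2)
    return gain, offset, 1.0 - ss_res / ss_tot

# Hypothetical raw-mode readings: mean DN at five exposure times (ms).
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
dn = np.array([64.2, 127.9, 256.4, 511.8, 1023.6])
gain, offset, r2 = linearity_check(t, dn)
```

An r² very close to 1 over the usable exposure range is the behavior the paper reports for the IMX219 raw output, and is what makes the sensor usable for radiometric work.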
Standoff aircraft IR characterization with ABB dual-band hyper spectral imager
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc
2012-09-01
Remote sensing infrared characterization of rapidly evolving events generally involves the combination of a spectro-radiometer and infrared camera(s) as separate instruments. Time synchronization, spatial co-registration, consistent radiometric calibration, and managing several systems are important challenges to overcome; they complicate the processing of target infrared characterization data and increase the sources of error affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 × 256 pixel infrared cameras and an infrared spectro-radiometer in a single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR and is designed to acquire the spectral signatures of rapidly evolving events. The design is modular: the spectrometer has two output ports configured with two simultaneously operated cameras to either widen the spectral coverage or increase the dynamic range of the measured amplitudes, and various telescope options are available for the input port. Recent platform developments and field-trial measurement performance will be presented for a system configuration dedicated to the characterization of airborne targets.
High Dynamic Imaging for Photometry and Graphic Arts Evaluation
NASA Astrophysics Data System (ADS)
T. S., Sudheer Kumar; Kurian, Ciji Pearl; Shama, Kumara; K. R., Shailesh
2018-05-01
High Dynamic Range Imaging (HDRI) techniques for luminance measurement have been gaining importance in recent years. This paper presents the application of an HDRI system for obtaining the photometric characteristics of lighting fixtures as well as for assessing the quality of lighting in the colour viewing booth of a printing press. The process of quality control of prints in a printing press is known as graphic arts evaluation, and the light booth plays a major role in it. In this work, a Nikon D5100 camera was used to obtain the photometric characteristics of a narrow-beam spotlight; the results of this experiment are in agreement with the photometric characteristics obtained from a standard industry-grade goniophotometer. Similarly, a Canon 60D camera was used to assess the quality of the spatial luminance distribution of light in the colour viewing booth. This work demonstrates the usefulness of HDRI technology for photometric measurements and for characterizing the luminance distributions of illuminated interiors.
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high-contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low-cost, high-resolution, high-sensitivity camera with applications in search and rescue, identification friend or foe (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual-band multispectral imaging or high dynamic range imaging, increasing its flexibility in different operational settings.
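The homodyne detection underlying lock-in imaging can be sketched per pixel as follows; the modulation frequency, sample rate, and signal levels here are hypothetical, not the system's actual parameters:

```python
import math

def lock_in(samples, ref_freq_hz, sample_rate_hz):
    """Digital lock-in (homodyne) detection: correlate the sample stream
    with in-phase and quadrature references at the modulation frequency,
    recovering the modulated amplitude despite a strong static background."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, s in enumerate(samples):
        phase = 2.0 * math.pi * ref_freq_hz * k / sample_rate_hz
        i_sum += s * math.cos(phase)
        q_sum += s * math.sin(phase)
    # The factor 2/n recovers the amplitude of the modulated component.
    return 2.0 * math.hypot(i_sum, q_sum) / n

# Hypothetical pixel stream: a 0.05-amplitude modulation buried under a
# constant background 200 times larger.
fs, f0 = 1000.0, 50.0
sig = [10.0 + 0.05 * math.sin(2 * math.pi * f0 * k / fs) for k in range(1000)]
amp = lock_in(sig, f0, fs)
```

Averaging over an integer number of modulation cycles cancels the unmodulated background exactly, which is why lock-in detection yields high contrast in adverse lighting.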
NASA Astrophysics Data System (ADS)
Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.
2012-08-01
10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development: the signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras required rethinking in many places, though, and new data products were introduced. In recent years, some activities took place that should lead to a better understanding of the cameras and the data produced by them. Several projects, such as those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) and EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards will be presented. These include the standard for digital cameras, the standard for orthorectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and jointly develop a suite of implementation specifications for photogrammetry and remote sensing.
Dynamic granularity of imaging systems
Geissel, Matthias; Smith, Ian C.; Shores, Jonathon E.; ...
2015-11-04
Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range, and consequently is better described by a function or distribution. We introduce the "dynamic granularity" G_dyn as a standardized, objective relation between a detector's spatial resolution (granularity) and dynamic range for complex imaging systems in a given environment, rather than the widely found characterization of detectors such as cameras or films by themselves. We found that this relation can partly be explained through consideration of the signal's photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damage, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system's performance, including all contributing noise sources, and to qualify the influence of alternative components within an imaging system. Our article explains the construction criteria used to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia's Z-Backlighter facility.
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for a stored image by typing any of its descriptors. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
Characterization of Vegetation using the UC Davis Remote Sensing Testbed
NASA Astrophysics Data System (ADS)
Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.
2006-12-01
Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight, and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides the rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. The cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as imaging applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies of the impact of bird colonies on tree canopies, and precision agriculture.
The threshold of vapor channel formation in water induced by pulsed CO2 laser
NASA Astrophysics Data System (ADS)
Guo, Wenqing; Zhang, Xianzeng; Zhan, Zhenlin; Xie, Shusen
2012-12-01
Water plays an important role in laser ablation. There are two main interpretations of laser-water interaction: the hydrokinetic effect and the vapor phenomenon. Both explanations are reasonable in some respects, but neither fully explains the mechanism of laser-water interaction. In this study, the dynamic process of vapor channel formation induced by a pulsed CO2 laser in a static water layer was monitored by a high-speed camera. The wavelength of the pulsed CO2 laser is 10.64 μm, and the pulse repetition rate is 60 Hz. The laser power ranged from 1 to 7 W in steps of 0.5 W. The frame rate of the high-speed camera used in the experiment was 80025 fps. Based on the high-speed camera pictures, the dynamic process of vapor channel formation was examined, and the threshold of vapor channel formation, the pulsation period, and the volume, maximum depth, and corresponding width of the vapor channel were determined. The results showed that the threshold of vapor channel formation was about 2.5 W. Moreover, the pulsation period and the maximum depth and corresponding width of the vapor channel increased with increasing laser power.
High speed Infrared imaging method for observation of the fast varying temperature phenomena
NASA Astrophysics Data System (ADS)
Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong
With recent improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR camera imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material that was heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of HIFU transducers were tested, and the predicted temperature increase values were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night-vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed this technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
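As a rough illustration of the HDR stage of such a pipeline, the sketch below fuses a multi-exposure stack into a radiance estimate using a Debevec-style hat weighting. It is a minimal, hypothetical example assuming a linear sensor response and a single demosaiced spectral band; it is not the authors' implementation.

```python
import numpy as np

def fuse_hdr(stack, times):
    """Fuse a multi-exposure stack (K, H, W) of one demosaiced band into a
    relative radiance map. Assumes a linear sensor response in [0, 1];
    a hat weight suppresses pixels near the noise floor or saturation."""
    z = np.asarray(stack, float)
    w = 1.0 - np.abs(2.0 * z - 1.0)          # hat weight, peaks at mid-range
    w = np.maximum(w, 1e-6)                  # avoid divide-by-zero at extremes
    t = np.asarray(times, float)[:, None, None]
    return (w * (z / t)).sum(axis=0) / w.sum(axis=0)
```

Each exposure contributes its own radiance estimate z/t; the weighted average trusts well-exposed pixels most, which is how the stack's combined dynamic range exceeds any single exposure's.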
High-Speed Schlieren Movies of Decelerators at Supersonic Speeds
NASA Technical Reports Server (NTRS)
1960-01-01
Tests were conducted on several types of porous parachutes, a paraglider, and a simulated retrorocket. In trials with porous parachutes, Mach numbers ranged from 1.8 to 3.0, porosity from 20 to 80 percent, and camera speeds from 1680 to 3000 frames per second (fps). Trials of reefed parachutes were conducted at Mach number 2.0 and reefing of 12-33 percent at camera speeds of 600 fps. A flexible parachute with an inflatable ring in the periphery of the canopy was tested at a Reynolds number of 750,000 per foot, Mach number 2.85, porosity of 28 percent, and a camera speed of 3600 fps. A vortex-ring parachute was tested at Mach number 2.2 and a camera speed of 3000 fps. The paraglider, with a sweepback of 45 degrees at an angle of attack of 45 degrees, was tested at Mach number 2.65, a drag coefficient of 0.200, and a lift coefficient of 0.278 at a camera speed of 600 fps. A cold air jet exhausting upstream from the center of a bluff body was used to simulate a retrorocket. The free-stream Mach number was 2.0, the free-stream dynamic pressure was 620 lb/sq ft, the jet-exit static pressure ratio was 10.9, and the camera speed was 600 fps.
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first and second positions of the corresponding features using the first and second camera images. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
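A minimal sketch of the idea behind range-enhanced visual odometry: back-project matched features to 3-D using the range data, solve for the rigid motion between the two point sets (Kabsch algorithm), and derive a VO velocity from the translation. The function names and the pinhole intrinsics are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def backproject(uv, depth, fx, fy, cx, cy):
    """Back-project pixel coordinates (N, 2) with per-feature range to 3-D
    points in the camera frame (assumed pinhole model)."""
    x = (uv[:, 0] - cx) * depth / fx
    y = (uv[:, 1] - cy) * depth / fy
    return np.column_stack([x, y, depth])

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P.T + t
    (Kabsch algorithm via SVD of the cross-covariance)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def vo_velocity(uv1, uv2, depth1, depth2, intrinsics, dt):
    """Change in position between two frames and the implied VO velocity."""
    P = backproject(uv1, depth1, *intrinsics)
    Q = backproject(uv2, depth2, *intrinsics)
    R, t = rigid_transform(P, Q)
    return t, np.linalg.norm(t) / dt
```

The range data resolves the scale ambiguity that a monocular camera alone cannot: with metric depths, the recovered translation (and hence velocity) is in real units.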
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub™ synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light-emitting diode (LED) multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. An auxiliary light in a higher camera-sensitivity band is introduced to increase the A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. Experiments demonstrated that both the gray-scale resolution and the information accuracy of the images acquired by the proposed method were significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
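The least-squares extraction step can be sketched as follows: if each captured frame is modeled as a known modulation value times the active-light image plus a constant auxiliary-light image, the two component images can be recovered per pixel by linear least squares. This is a simplified, hypothetical model of the paper's method.

```python
import numpy as np

def extract_components(frames, s):
    """Per-pixel least-squares separation of a frame stack.
    frames: (K, H, W) stack captured under an active light modulated by the
    known waveform s (length K) plus an auxiliary light of constant intensity.
    Model: frames[k] = s[k] * active + 1.0 * auxiliary.
    Returns (active image, auxiliary image)."""
    K = len(s)
    X = np.column_stack([np.asarray(s, float), np.ones(K)])  # design matrix
    Y = frames.reshape(K, -1)                                # (K, H*W)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)             # (2, H*W)
    H, W = frames.shape[1:]
    return coef[0].reshape(H, W), coef[1].reshape(H, W)
```

Because the modulation waveform s is known, every pixel is an overdetermined two-unknown linear system, so the separation is exact in the noise-free case and optimal in the least-squares sense otherwise.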
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
Despite well-documented advantages, the promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640 x 512-pixel uncooled InGaAs system with high sensitivity, low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum-likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be found using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its Hessian-vector products. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
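For the separable case (threshold T = 1, no optical blur), the ML estimate even has a closed form per conventional pixel, which the following sketch implements. This simplification is an assumption for illustration; it is not the paper's full filter-bank algorithm.

```python
import numpy as np

def mle_intensity(binary):
    """ML estimate of light intensity per conventional pixel from an
    oversampled stack of binary (threshold T = 1) measurements.
    binary: (M, H, W) array of 0/1 detections, M binary sub-pixels per
    output pixel. Each sub-pixel fires with probability 1 - exp(-lam/M)
    under a Poisson photon count with pixel mean lam, so maximizing the
    likelihood gives the closed form lam = -M * log(1 - k/M), k = #ones."""
    M = binary.shape[0]
    k = binary.sum(axis=0).astype(float)
    k = np.minimum(k, M - 1)          # all-ones would give an infinite estimate
    return -M * np.log1p(-k / M)
```

The logarithmic mapping from the fraction of fired sub-pixels to intensity is exactly the "logarithmic sensor response curve" that gives the design its dynamic-range advantage: the count k saturates only as the intensity grows without bound.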
Measuring high-resolution sky luminance distributions with a CCD camera.
Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther
2013-03-10
We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge-coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. The system is therefore capable of measuring sky luminance with high spatial and temporal resolution: more than a million pixels, sampled every 20 s.
Vision Based SLAM in Dynamic Scenes
2012-12-20
the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and...all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose esti
Noor, M Omair; Krull, Ulrich J
2014-10-21
Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. 
This work provides an important framework for the integration of QD-FRET methods with digital imaging for a ratiometric transduction of nucleic acid hybridization on a paper-based platform.
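The ratiometric readout described above can be sketched as a simple red/green channel-intensity ratio over a region of interest; the function and variable names here are illustrative, not the authors' code.

```python
import numpy as np

def ratiometric_signal(rgb, roi=None):
    """Ratiometric FRET readout from a digital-camera image: mean of the red
    channel (acceptor PL, e.g. Cy3) over the mean of the green channel
    (donor PL, e.g. gQD) inside an optional boolean ROI mask.
    rgb: (H, W, 3) array; roi: (H, W) bool mask, or None for the whole frame."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    if roi is not None:
        r, g = r[roi], g[roi]
    return r.mean() / g.mean()
```

Taking the ratio of the two channels makes the readout largely insensitive to fluctuations in excitation intensity that scale both the donor and acceptor emission together, which is the practical appeal of a ratiometric transduction with a hand-held UV lamp.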
High frequency modal identification on noisy high-speed camera data
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, a hybrid accelerometer/high-speed camera mode-shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.
NASA Astrophysics Data System (ADS)
Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.
2014-03-01
A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high-resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 FireWire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light-emitting diodes, which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software. PMID:21258475
On the resolution of plenoptic PIV
NASA Astrophysics Data System (ADS)
Deem, Eric A.; Zhang, Yang; Cattafesta, Louis N.; Fahringer, Timothy W.; Thurow, Brian S.
2016-08-01
Plenoptic PIV offers a simple, single camera solution for volumetric velocity measurements of fluid flow. However, due to the novel manner in which the particle images are acquired and processed, few references exist to aid in determining the resolution limits of the measurements. This manuscript provides a framework for determining the spatial resolution of plenoptic PIV based on camera design and experimental parameters. This information can then be used to determine the smallest length scales of flows that are observable by plenoptic PIV, the dynamic range of plenoptic PIV, and the corresponding uncertainty in plenoptic PIV measurements. A simplified plenoptic camera is illustrated to provide the reader with a working knowledge of the method in which the light field is recorded. Then, operational considerations are addressed. This includes a derivation of the depth resolution in terms of the design parameters of the camera. Simulated volume reconstructions are presented to validate the derived limits. It is found that, while determining the lateral resolution is relatively straightforward, many factors affect the resolution along the optical axis. These factors are addressed and suggestions are proposed for improving performance.
Panorama parking assistant system with improved particle swarm optimization method
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong
2013-10-01
A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
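As a sketch of the optimization component, the following is a generic particle swarm optimizer of the kind used to tune calibration parameters within allowed ranges. It is a plain PSO, not the paper's improved (IPSO) variant, and in practice the objective would be a distortion-correction/mosaicking error computed from the single reference image; here it is any user-supplied function.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(p) over a box. bounds: (D, 2) array of [low, high]
    per parameter; clipping keeps each parameter within its allowed range,
    mirroring the constrained dynamic range of calibration parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))     # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()                     # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

PSO needs only objective evaluations (no gradients), which suits calibration objectives built from image warping and mosaicking residuals.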
Inferred UV Fluence Focal-Spot Profiles from Soft X-Ray Pinhole Camera Measurements on OMEGA
NASA Astrophysics Data System (ADS)
Theobald, W.; Sorce, C.; Epstein, R.; Keck, R. L.; Kellogg, C.; Kessler, T. J.; Kwiatkowski, J.; Marshall, F. J.; Seka, W.; Shvydky, A.; Stoeckl, C.
2017-10-01
The drive uniformity of OMEGA cryogenic implosions is affected by UV beam-fluence variations on target, which require careful monitoring at full laser power. This is routinely performed with multiple pinhole cameras equipped with charge-injection devices (CIDs) that record the x-ray emission in the 3- to 7-keV photon energy range from an Au-coated target. The technique relies on knowledge of the relation between x-ray fluence Fx and UV fluence FUV, Fx ∝ FUV^γ, with a measured γ = 3.42 for the CID-based diagnostic and a 1-ns laser pulse. It is demonstrated here that using a back-thinned charge-coupled-device camera with softer filtration for x-rays with photon energies <2 keV and a well-calibrated pinhole provides a lower γ ≈ 2 and a larger dynamic range in the measured UV fluence. Inferred UV fluence profiles were measured for 100-ps and 1-ns laser pulses and were compared to directly measured profiles from a UV equivalent-target-plane diagnostic. Good agreement between both techniques is reported for selected beams. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
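The power-law relation Fx ∝ FUV^γ can be inverted to infer relative UV fluence from a measured x-ray fluence image. The sketch below, with an assumed reference normalization, also illustrates why a lower γ yields a larger measured UV dynamic range for a given x-ray detector range.

```python
import numpy as np

def uv_fluence_from_xray(Fx, gamma, Fx_ref=1.0, Fuv_ref=1.0):
    """Invert the power law Fx/Fx_ref = (FUV/FUV_ref)**gamma to infer the
    relative UV fluence from a measured x-ray fluence image or value.
    gamma ~ 3.42 for the CID-based diagnostic, ~2 for the CCD-based one."""
    return Fuv_ref * (np.asarray(Fx, float) / Fx_ref) ** (1.0 / gamma)
```

For an x-ray detector spanning a factor D in fluence, the inferred UV span is D**(1/γ), so the CCD diagnostic's lower γ stretches the same detector range over a wider range of UV fluences.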
NASA Astrophysics Data System (ADS)
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWRI, the starting and ending frequencies of a linear frequency-modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI yields a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
Registration of Large Motion Blurred Images
2016-05-09
in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types...blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS
Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C
2014-01-01
Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist. 
This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics. PMID:25255325
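The doubled detection probability reported above follows from treating the two gears as independent detectors sampling the same site simultaneously. A minimal sketch, assuming independence and hypothetical per-deployment detection probabilities (the study's actual estimates are not reproduced here):

```python
def combined_detection(p_trap, p_camera):
    """Probability that at least one of two independent sampling gears
    detects a species that is present at the site."""
    return 1.0 - (1.0 - p_trap) * (1.0 - p_camera)

# Hypothetical per-deployment detection probabilities for illustration.
p_trap, p_camera = 0.35, 0.55
p_both = combined_detection(p_trap, p_camera)  # 0.7075, roughly double p_trap
```

Because the two deployments happen at the same time, the occupancy state cannot change between "replicates", which is what makes the independence assumption plausible here.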
NASA Astrophysics Data System (ADS)
Rizzo, G.; Batignani, G.; Benkechkache, M. A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.; Mendicino, R.; Morsani, F.; Paladino, A.; Pancheri, L.; Paoloni, E.; Ratti, L.; Re, V.; Traversi, G.; Vacchi, C.; Verzellesi, G.; Xu, H.
2016-07-01
The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvement in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies such as active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10-bit analog-to-digital conversion at up to 5 MHz, has been realized in a 110 μm pitch with a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European XFEL, and in continuous mode at the high frame rates anticipated for future FEL facilities.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that attempts to use a model of human visual attention for dynamic selection of the camera view in a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and a manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong
2015-04-14
Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multi-degree-of-freedom vibrations caused by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor of 200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA-mode using up to 4096-bit Walsh-design CAOS pixel codes with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, unspoiled bright light spectrally diverse targets.
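The CDMA-mode acquisition can be illustrated with orthogonal Walsh codes: each CAOS pixel is time-modulated by its own code, a single point detector records the coded sum, and correlating the detector signal with each code recovers that pixel's irradiance. A simplified numerical sketch under stated assumptions (a 4-pixel scene and Sylvester-constructed codes; the real camera uses DMD time modulation and far longer codes):

```python
import numpy as np

def walsh_matrix(n):
    """Sylvester construction of an n x n Hadamard/Walsh code matrix
    (n must be a power of 2); rows are mutually orthogonal +/-1 codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

scene = np.array([5.0, 1.0, 3.0, 2.0])   # hypothetical pixel irradiances
H = walsh_matrix(4)
# In each time slot the point detector sums all pixels weighted by one code chip.
detector_signal = H @ scene
# Correlation decode: H is orthogonal, so H.T @ H = n * I.
recovered = (H.T @ detector_signal) / 4
```

The orthogonality of the codes is what lets many pixels share one detector without crosstalk, which is the essence of the CDMA mode.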
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; thus it would be valuable to measure the dynamic 3D-deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D-movement of the markers was recorded by the cameras and afterwards the 3D-deformation of the pelvis specimen was computed. The accuracy of the 3D-movement of the markers was verified against a step-function 3D-displacement curve generated by a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ± 0.036 mm, and ± 0.022 mm if tracked by 6 cameras. The detectable 3D-movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. Therefore the limiting factor of the setup was the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations.
The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in a reliable biomechanical modelling and simulation as well as in a successful design of complex implants. PMID:21762533
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
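The figure of merit defined above, the performance gap of the weakest parameter, can be sketched directly. The parameter values below are hypothetical placeholders, not the paper's measured ones; each parameter is expressed so that larger is better:

```python
import math

def figure_of_merit(camera, eye):
    """Gap, in orders of magnitude, between eye and camera for each
    parameter; the figure of merit is the gap of the weakest parameter."""
    gaps = {k: math.log10(eye[k] / camera[k]) for k in eye}
    worst = max(gaps, key=gaps.get)
    return worst, gaps[worst]

# Hypothetical values (larger = better) for two of the paper's parameters.
eye    = {"dynamic_range": 1.0e6, "dark_limit": 1.0e4}
camera = {"dynamic_range": 1.0e4, "dark_limit": 1.0e1}
param, gap = figure_of_merit(camera, eye)   # dark_limit is weakest: 3 orders
```

Benchmarking on the weakest parameter captures the paper's point that a camera is only as eye-like as its most limiting property.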
NASA Astrophysics Data System (ADS)
Mikhalev, Aleksandr; Podlesny, Stepan; Stoeva, Penka
2016-09-01
To study the dynamics of the upper atmosphere, we consider results of night sky photometry using a color CCD camera, taking into account the night airglow and features of its spectral composition. We use night airglow observations for 2010-2015, which were obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) by a camera with a KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg·cm⁻²·s⁻¹. In addition, we determine seasonal variations in the night sky luminosities in the R, G, B channels of the color camera. In these channels, luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We consider geophysical phenomena and their optical effects in the R, G, B channels of the color camera. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate the possibility of a quantitative relationship between enhanced signals in the R and G channels and increases in the intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.
Research on Lunar Calibration of the GF-4 Satellite
NASA Astrophysics Data System (ADS)
Qi, W.; Tan, W.
2018-04-01
Starting from the lunar observation requirements of the GF-4 satellite, the main indices, such as the resolution, the imaging field, the reflected radiance and the imaging integration time, are analyzed in combination with the imaging features and parameters of this camera. The analysis results show that the lunar observation of the GF-4 satellite has high resolution and a wide field that can image the whole moon, the lunar-reflected radiance at the entrance pupil is within the dynamic range of the camera, and the lunar image quality can be better guaranteed by setting a reasonable integration time. At the same time, the radiative transfer model of the lunar radiometric calibration is traced and the radiometric accuracy is evaluated.
Atmospheric aerosol profiling with a bistatic imaging lidar system.
Barnes, John E; Sharma, N C Parikh; Kaplan, Trevor B
2007-05-20
Atmospheric aerosols have been profiled using a simple, imaging, bistatic lidar system. A vertical laser beam is imaged onto a charge-coupled-device camera from the ground to the zenith with a wide-angle lens (CLidar). The altitudes are derived geometrically from the position of the camera and laser with submeter resolution near the ground. The system requires no overlap correction needed in monostatic lidar systems and needs a much smaller dynamic range. Nighttime measurements of both molecular and aerosol scattering were made at Mauna Loa Observatory. The CLidar aerosol total scatter compares very well with a nephelometer measuring at 10 m above the ground. The results build on earlier work that compared purely molecular scattered light to theory, and detail instrument improvements.
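The geometric altitude retrieval described above can be made concrete: with the camera a known horizontal distance from the vertical beam, a pixel viewing at elevation angle θ sees the beam at height z = D·tan θ. The baseline value below is a hypothetical example, not the instrument's actual geometry:

```python
import math

def beam_altitude_m(baseline_m, elevation_deg):
    """Height at which a camera pixel's line of sight (at the given
    elevation angle) intersects the vertical laser beam."""
    return baseline_m * math.tan(math.radians(elevation_deg))

D = 100.0                              # hypothetical camera-beam baseline (m)
z_mid  = beam_altitude_m(D, 45.0)      # 100 m
z_high = beam_altitude_m(D, 89.0)      # ~5.7 km; resolution degrades near zenith
```

This geometry is why the system needs no overlap correction: every altitude maps to a distinct pixel angle, with the finest resolution near the ground.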
Richardson, Andrew D; Hufkens, Koen; Milliman, Tom; Frolking, Steve
2018-04-09
Phenology is a valuable diagnostic of ecosystem health, and has applications to environmental monitoring and management. Here, we conduct an intercomparison analysis using phenological transition dates derived from near-surface PhenoCam imagery and MODIS satellite remote sensing. We used approximately 600 site-years of data, from 128 camera sites covering a wide range of vegetation types and climate zones. During both "greenness rising" and "greenness falling" transition phases, we found generally good agreement between PhenoCam and MODIS transition dates for agricultural, deciduous forest, and grassland sites, provided that the vegetation in the camera field of view was representative of the broader landscape. The correlation between PhenoCam and MODIS transition dates was poor for evergreen forest sites. We discuss potential reasons (including sub-pixel spatial heterogeneity, flexibility of the transition date extraction method, vegetation index sensitivity in evergreen systems, and PhenoCam geolocation uncertainty) for varying agreement between time series of vegetation indices derived from PhenoCam and MODIS imagery. This analysis increases our confidence in the ability of satellite remote sensing to accurately characterize seasonal dynamics in a range of ecosystems, and provides a basis for interpreting those dynamics in the context of tangible phenological changes occurring on the ground.
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
NASA Astrophysics Data System (ADS)
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
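The simple mean-shift variant evaluated above repeatedly moves a search window to the weighted centroid of the pixels inside it until it settles on a mode. A minimal sketch on a toy likelihood map (a stand-in for a segmented hand region in a range image; the window radius and map are illustrative):

```python
import numpy as np

def mean_shift(weights, start, radius, n_iter=20):
    """Move a circular window to the weighted centroid of the samples it
    covers; iterating converges to a local mode of the weight map."""
    ys, xs = np.indices(weights.shape)
    c = np.array(start, dtype=float)
    for _ in range(n_iter):
        mask = (ys - c[0]) ** 2 + (xs - c[1]) ** 2 <= radius ** 2
        w = weights * mask
        if w.sum() == 0:
            break
        c_new = np.array([(ys * w).sum(), (xs * w).sum()]) / w.sum()
        if np.allclose(c_new, c):
            break
        c = c_new
    return c

# Toy hand-likelihood map: a single bright blob centred at (24.5, 32.5).
w = np.zeros((40, 40))
w[22:28, 30:36] = 1.0
center = mean_shift(w, start=(10, 10), radius=25)
```

The same update runs per frame in tracking, seeded with the previous frame's result, which is why very fast transverse motion (target leaving the window between frames) degrades accuracy.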
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
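The background-elimination step described above amounts to frame differencing: subtract the pre-illumination image so that only the laser spot survives, then measure its offset from a fixed reference point. A minimal sketch on synthetic frames (the spot position, threshold and reference point are illustrative, not values from the patent):

```python
import numpy as np

def laser_spot(frame_before, frame_with_laser, threshold=50):
    """Subtract the pre-illumination frame to remove common background,
    then return the centroid (row, col) of the remaining bright pixels."""
    diff = frame_with_laser.astype(int) - frame_before.astype(int)
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic frames: identical background, laser spot added around (12, 20).
rng = np.random.default_rng(0)
bg = rng.integers(0, 40, size=(32, 32))
lit = bg.copy()
lit[11:14, 19:22] += 200
spot = laser_spot(bg, lit)
# The spot-to-reference disparity (pixels) feeds the ranging analysis.
ref = (16.0, 16.0)
disparity = (spot[0] - ref[0], spot[1] - ref[1])
```

Differencing rather than plain thresholding is what makes the method robust to bright background clutter, since only pixels that changed when the laser fired survive.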
NASA Astrophysics Data System (ADS)
Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej
2017-04-01
The phenomenon of splash, which is one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One efficient method for observing and analyzing this phenomenon is the use of high-speed cameras, which can record particles at 2000 frames per second or higher. Analysis of the splash phenomenon with high-speed cameras and specialized software can reveal, among other things, the number of splashed particles, their speeds, trajectories, and the distances over which they were transferred. The paper presents an attempt to evaluate the efficiency of detection of splashed particles using a set of 3 cameras (Vision Research MIRO 310) and the Dantec Dynamics Studio software with a 3D module (Volumetric PTV). In order to assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell on a sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a low range (up to 18 mm) splashed at larger angles, and particles with a high range (up to 118 mm) splashed at smaller angles. The detection efficiency of the software in estimating the number of splashed particles was 45-65% for particles with a large range. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on the adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.
ePix: a class of architectures for second generation LCLS cameras
Dragone, A.; Caragiulo, P.; Markovic, B.; ...
2014-03-31
ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules meeting different requirements can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra low noise applications, and ePix10k, for high dynamic range applications.
Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam
NASA Astrophysics Data System (ADS)
Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.
2005-04-01
When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10⁷ cm⁻² s⁻¹, which enables the observation of sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level from the intensifier. The results obtained should be seen as a starting point toward meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operating principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.
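The caveat about the advertised 16-bit dynamic range can be made concrete: the usable range is set by the ratio of the largest recordable signal to the noise floor, not by the ADC word length. The noise-floor figure below is a hypothetical illustration, not a measured value for the PI-Max:

```python
import math

def effective_bits(full_scale_counts, noise_floor_counts):
    """Usable dynamic range in bits: log2 of full scale over noise floor."""
    return math.log2(full_scale_counts / noise_floor_counts)

adc_bits = 16
noise_floor = 40.0                        # hypothetical intensifier noise, counts
usable = effective_bits(2 ** adc_bits, noise_floor)   # ~10.7 bits, not 16
```

Accumulating many gated frames on-chip before readout, as described above, raises the signal relative to the readout noise and so recovers part of this lost range.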
Cloud cover detection combining high dynamic range sky images and ceilometer measurements
NASA Astrophysics Data System (ADS)
Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.
2017-11-01
This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun condition (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as reference. CPC retrievals are in agreement with those derived from the reference pyranometer in 85% of the cases, and this agreement appears independent of aerosol size and optical depth. The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high aerosol loads. Cloud cover obtained from the ceilometer alone shows results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, it has been observed that under rapid and strong changes in cloud cover, ceilometer retrievals fit the real cloud cover less well.
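The cloud-free symmetry assumption underlying CPC can be sketched with a simple score: compare each pixel of the sky image with its point-reflection through the zenith and flag asymmetry as cloud. This is an illustrative toy, assuming the zenith sits at the image centre; the actual CPC criteria additionally depend on the ceilometer cloud base height:

```python
import numpy as np

def symmetry_score(sky):
    """Mean absolute difference between each pixel and its point-reflection
    through the image centre (assumed zenith); cloud-free skies score low."""
    rotated = sky[::-1, ::-1]            # 180-degree rotation about the centre
    return float(np.abs(sky - rotated).mean())

# Synthetic radiance maps: a radially symmetric "clear" sky versus the same
# sky with an off-centre bright patch standing in for a cloud.
y, x = np.mgrid[-50:51, -50:51]
clear = np.exp(-(x ** 2 + y ** 2) / 2000.0)
cloudy = clear.copy()
cloudy[10:30, 10:30] += 0.5
```

Because aerosols brighten the sky roughly symmetrically about the zenith, a symmetry test like this stays insensitive to aerosol load, which is the advantage claimed over the RBR method.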
Dynamical Modeling of NGC 6397: Simulated HST Imaging
NASA Astrophysics Data System (ADS)
Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Slavin, S. D.; Murphy, B. W.
1994-12-01
The proximity of NGC 6397 (2.2 kpc) provides an ideal opportunity to test current dynamical models for globular clusters with the HST Wide-Field/Planetary Camera (WFPC2). We have used a Monte Carlo algorithm to generate ensembles of simulated Planetary Camera (PC) U-band images of NGC 6397 from evolving, multi-mass Fokker-Planck models. These images, which are based on the post-repair HST-PC point-spread function, are used to develop and test analysis methods for recovering structural information from actual HST imaging. We have considered a range of exposure times up to 2.4×10⁴ s, based on our proposed HST Cycle 5 observations. Our Fokker-Planck models include energy input from dynamically-formed binaries. We have adopted a 20-group mass spectrum extending from 0.16 to 1.4 M_sun. We use theoretical luminosity functions for red giants and main sequence stars. Horizontal branch stars, blue stragglers, white dwarfs, and cataclysmic variables are also included. Simulated images are generated for cluster models at both maximal core collapse and at a post-collapse bounce. We are carrying out stellar photometry on these images using "DAOPHOT-assisted aperture photometry" software that we have developed. We are testing several techniques for analyzing the resulting star counts to determine the underlying cluster structure, including parametric model fits and nonparametric density estimation methods. Our simulated images also allow us to investigate the accuracy and completeness of methods for carrying out stellar photometry in HST Planetary Camera images of dense cluster cores.
Miniaturized unified imaging system using bio-inspired fluidic lens
NASA Astrophysics Data System (ADS)
Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa
2008-08-01
Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems have not been significantly different from conventional cameras. The only established method to achieve focusing is by varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of the optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between the central vision and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design of the world's smallest surgical camera with 3X optical zoom capabilities is also demonstrated using the approach of hybrid lenses.
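The focusing principle above, changing surface curvature rather than lens spacing, follows from the thin-lens lensmaker equation, 1/f = (n-1)(1/R1 - 1/R2). The fluid index and curvature values below are hypothetical, chosen only to show how strongly focal length responds to curvature:

```python
def focal_length_mm(n, r1_mm, r2_mm):
    """Thin-lens lensmaker equation; radii follow the usual sign convention
    (a flat surface has radius -> infinity)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

n_fluid = 1.333                                            # water-like fluid
f_relaxed = focal_length_mm(n_fluid, 20.0, float("inf"))   # ~60 mm
f_bulged  = focal_length_mm(n_fluid, 5.0,  float("inf"))   # ~15 mm: 4x shorter
```

A modest pressure-driven change in surface curvature thus sweeps the focal length over a wide range, which is what gives the fluidic lens its macro-to-microscope tunability.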
NASA Astrophysics Data System (ADS)
Pu, Yang; Alfano, Robert R.
2015-03-01
Near-infrared (NIR) dyes, which absorb and emit light in the range from 700 to 900 nm, have several benefits in biological studies for one- and/or two-photon excitation with deeper penetration of tissues. These molecules undergo vibrational and rotational motion during the relaxation of the excited electronic states. Because of the less-than-ideal anisotropy behavior of NIR dyes, stemming from the fluorophores' elongated structures and short fluorescence lifetimes in the picosecond range, little effort has been made to establish the theory of these dyes in time-resolved polarization dynamics. In this study, the depolarization of the fluorescence due to emission following rotational deactivation in solution is measured with excitation by a linearly polarized femtosecond laser pulse and detection by a streak camera. The theory, experiment and application of ultrafast fluorescence polarization dynamics and anisotropy are illustrated with examples of two of the most important medically relevant dyes. One is the NIR dye Indocyanine Green (ICG), which is compared with Fluorescein, a visible-range dye with a much longer lifetime. A set of first-order linear differential equations was developed to model the fluorescence polarization dynamics of the NIR dye in the picosecond range. Using this model, the important parameters of ultrafast polarization spectroscopy were identified: risetime, initial time, fluorescence lifetime, and rotation times.
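The time-resolved polarization measurement above is commonly modeled with a single-exponential anisotropy decay, r(t) = r0·exp(-t/θ), with the parallel and perpendicular streak-camera signals given by I∥ = I(t)(1+2r)/3 and I⊥ = I(t)(1-r)/3. A sketch with hypothetical lifetime and rotation-time values (not the measured ICG parameters):

```python
import math

def polarized_decays(t_ps, tau_ps=600.0, r0=0.4, theta_ps=100.0):
    """Parallel and perpendicular fluorescence decays for a fluorophore with
    lifetime tau and a single rotational correlation time theta."""
    total = math.exp(-t_ps / tau_ps)                  # total intensity decay
    r = r0 * math.exp(-t_ps / theta_ps)               # anisotropy decay
    return total * (1 + 2 * r) / 3, total * (1 - r) / 3

# The anisotropy is recovered from the two polarized channels:
# r(t) = (I_par - I_perp) / (I_par + 2 * I_perp).
i_par, i_perp = polarized_decays(50.0)
r_recovered = (i_par - i_perp) / (i_par + 2 * i_perp)
```

When the rotation time θ is comparable to or shorter than the lifetime τ, as for picosecond-lifetime NIR dyes, the anisotropy decays within the emission window, which is why time resolution at the streak-camera level is needed.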
Synchro-ballistic recording of detonation phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Critchfield, R.R.; Asay, B.W.; Bdzil, J.B.
1997-09-01
Synchro-ballistic use of rotating-mirror streak cameras allows for detailed recording of high-speed events of known velocity and direction. After an introduction to the synchro-ballistic technique, this paper details two diverse applications of the technique as applied in the field of high-explosives research. In the first series of experiments, detonation-front shape is recorded as the arriving detonation shock wave tilts an obliquely mounted mirror, causing reflected light to be deflected from the imaging lens. These tests were conducted for the purpose of calibrating and confirming the asymptotic Detonation Shock Dynamics (DSD) theory of Bdzil and Stewart. The phase velocities of the events range from ten to thirty millimeters per microsecond. Optical magnification is set for optimal use of the film's spatial dimension, and the phase velocity is adjusted to provide synchronization at the camera's maximum writing speed. Initial calibration of the technique is undertaken using a cylindrical HE geometry over a range of charge diameters and of sufficient length-to-diameter ratio to ensure a stable detonation wave. The final experiment utilizes an arc-shaped explosive charge, resulting in an asymmetric detonation-front record. The second series of experiments consists of photographing a shaped-charge jet having a velocity range of two to nine millimeters per microsecond. To accommodate the range of velocities it is necessary to fire several tests, each synchronized to a different section of the jet. The experimental apparatus consists of a vacuum chamber, to preclude atmospheric ablation of the jet tip, with shocked-argon back lighting to produce a shadowgraph image.
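The synchronization condition mentioned above, tuning magnification against phase velocity, is simple arithmetic: the event's image must sweep the film at exactly the camera's writing speed, v_write = M·v_phase. The numbers below are illustrative, within the velocity ranges quoted in the abstract:

```python
def required_magnification(writing_speed_mm_us, phase_velocity_mm_us):
    """Optical magnification that makes the event's image velocity at the
    film plane equal the streak camera's writing speed."""
    return writing_speed_mm_us / phase_velocity_mm_us

v_write = 20.0        # hypothetical maximum writing speed, mm/us
M_detonation = required_magnification(v_write, 10.0)   # slow HE front: M = 2.0
M_jet_tip    = required_magnification(v_write, 9.0)    # jet tip: M ~ 2.2
```

When the image velocity matches the writing speed the event is effectively frozen on the film, which is why each shot can only be synchronized to one section of a jet whose velocity varies along its length.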
SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems
NASA Astrophysics Data System (ADS)
Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.
2015-02-01
Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a perfect fit for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each.
We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in real traffic scenarios. The achieved long range (up to 45 m), high dynamic range (118 dB), high-speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
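The 432-million-pixel figure can be sanity-checked arithmetically. Assuming the standard 230 mm × 230 mm (9 in × 9 in) aerial film frame (a frame size the abstract does not state explicitly), scanning at an 11 µm spot size gives:

```python
# Assumed: standard 230 mm x 230 mm aerial film frame, 11 micrometre spot size
frame_mm = 230.0
spot_mm = 0.011
pixels_per_side = frame_mm / spot_mm   # ~20,909 pixels along each edge
total_pixels = pixels_per_side ** 2
print(round(total_pixels / 1e6))       # ~437 million, close to the quoted 432
```

The small difference from the quoted 432 million presumably reflects the usable image area being slightly smaller than the nominal frame.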
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid-dynamic experiments. The maturing of tomo PIV allows application in medium- to large-scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity. A high power laser with 800 mJ per pulse is therefore used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously achieved. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomo PIV in wind tunnels. Supported by EU project no. 265695.
Camera traps can be heard and seen by animals.
Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.
Feasibility of using Eastman Kodak type 3400 film for high altitude multispectral photography
NASA Technical Reports Server (NTRS)
Perry, L.
1972-01-01
A photographic test flight of the NASA RB-57F was conducted on March 25, 1972, over Houston and West Texas, to determine the suitability of Eastman Kodak type 3400 film as a replacement for type 2402 film in the Hasselblad cameras. An additional purpose was to test GAF film type 2914, a new black and white film similar to 2402, but with higher maximum gamma and greater dynamic range.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
The first satellite laser echoes recorded on the streak camera
NASA Technical Reports Server (NTRS)
Hamal, Karel; Prochazka, Ivan; Kirchner, Georg; Koidl, F.
1993-01-01
The application of a streak camera with circular sweep to satellite laser ranging is described. The Modular Streak Camera system employing the circular sweep option was integrated into the conventional Satellite Laser System. Experimental satellite tracking and ranging was performed. The first satellite laser echo streak camera records are presented.
The pressure field of imploding lightbulbs
NASA Astrophysics Data System (ADS)
Czechanowski, M.; Ikeda, C.; Duncan, J. H.
2015-03-01
The implosion of A19 incandescent lightbulbs in a high-pressure water environment is studied in a 1.77-m-diameter steel tank. Underwater blast sensors are used to measure the dynamic pressure field near the lightbulbs, and the implosions are photographed with a high-speed movie camera at a frame rate of 24,000 pps. The movie camera and the pressure signal recording system are synchronized to enable correlation of features in the movie frames with those in the pressure records. It is found that the gross dimensions and weight of the bulbs are very similar from one bulb to another, but the ambient water pressure at which a given bulb implodes (called the implosion pressure) varies from 6.29 to 11.98 atmospheres, probably due to inconsistencies in the glass wall thickness and perhaps other detailed characteristics of the bulbs. The dynamic pressures (the local pressure minus the implosion pressure, as measured by the sensors) first drop during the implosion and then reach a strong positive peak at about the time that the bulb reaches minimum volume. The peak dynamic pressure varies from 3.61 to 28.66 atmospheres. In order to explore the physics of the implosion process, the dynamic pressure signals are compared to calculations of the pressure field generated by the collapse of a spherical bubble in a weakly compressible liquid. The wide range of implosion pressures is used in combination with the calculations to explore the effect of the relative liquid compressibility and the bulb itself on the dynamic pressure field.
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the technique and led to several commercially available, ready-to-use cameras. Beyond the popular option of a-posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks of up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where accuracy requirements decrease with distance.
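The quadratic deterioration of depth accuracy with range follows from first-order error propagation in triangulation (Z = f·b/d implies σ_Z ≈ Z²·σ_d/(f·b)), which applies to plenoptic cameras via their small effective baseline. The sketch below uses hypothetical baseline, focal-length and disparity-noise values, not the calibration of the camera studied in the paper:

```python
def depth_error(z_m, baseline_m, focal_px, sigma_disp_px):
    """First-order depth uncertainty for a triangulating range camera:
    Z = f*b/d  =>  sigma_Z ~= Z**2 * sigma_d / (f * b)."""
    return z_m ** 2 * sigma_disp_px / (focal_px * baseline_m)

# Hypothetical numbers: 5 cm effective baseline, 5000 px focal length,
# 0.1 px disparity noise
for z in (10.0, 30.0, 100.0):
    err = depth_error(z, baseline_m=0.05, focal_px=5000.0, sigma_disp_px=0.1)
    print(f"{z:5.0f} m -> {err:6.2f} m ({100 * err / z:.1f} %)")
```

With these assumed values the relative error grows linearly with range (absolute error quadratically), reaching the few-percent level near 100 m, consistent in order of magnitude with the ~3% errors reported above.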
Characterization of SWIR cameras by MRC measurements
NASA Astrophysics Data System (ADS)
Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.
2014-05-01
Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values, from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of lux = lumen/m2. Third, the contrast values of the targets have to be recalibrated for the SWIR range, because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
Camera-tracking gaming control device for evaluation of active wrist flexion and extension.
Shefer Eini, Dalit; Ratzon, Navah Z; Rizzo, Albert A; Yeh, Shih-Ching; Lange, Belinda; Yaffe, Batia; Daich, Alexander; Weiss, Patrice L; Kizony, Rachel
Cross-sectional. Measuring wrist range of motion (ROM) is an essential procedure in hand therapy clinics. To test the reliability and validity of a dynamic ROM assessment, the Camera Wrist Tracker (CWT). Wrist flexion and extension ROM of 15 patients with distal radius fractures and 15 matched controls were assessed with the CWT and with a universal goniometer. One-way model intraclass correlation coefficient analysis indicated high test-retest reliability for extension (ICC = 0.92) and moderate reliability for flexion (ICC = 0.49). The standard error for extension was 2.45° and for flexion was 4.07°. Repeated-measures analysis revealed a significant main effect for group; ROM was greater in the control group (F[1, 28] = 47.35; P < .001). The concurrent validity of the CWT was partially supported. The results indicate that the CWT may provide highly reliable scores for dynamic wrist extension ROM, and moderately reliable scores for flexion, in people recovering from a distal radius fracture. N/A. Copyright © 2016 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
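The one-way intraclass correlation used above for test-retest reliability can be sketched from its ANOVA definition, ICC(1,1) = (MSB − MSW)/(MSB + (k−1)·MSW). This is a generic illustration with made-up ROM data, not the study's dataset:

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for test-retest data.
    scores: list of per-subject lists, each with k repeated measurements."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(s) for s in scores) / (n * k)
    subj_means = [sum(s) / k for s in scores]
    # between-subjects and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 for s, m in zip(scores, subj_means) for x in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative wrist-extension ROM data (degrees), two sessions per subject
rom = [[60, 62], [45, 44], [70, 69], [55, 58], [65, 66]]
print(round(icc_oneway(rom), 3))
```

Identical repeated measurements give MSW = 0 and hence an ICC of exactly 1; larger session-to-session scatter relative to between-subject spread pushes the ICC down, as seen for flexion in the study.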
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
Lambers, Martin; Kolb, Andreas
2017-01-01
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
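The AMCW principle simulated in this work recovers depth from the phase shift of the modulated illumination, conventionally estimated from four phase-stepped correlation samples. A minimal sketch of that standard four-phase computation (not the paper's simulator, and with an idealized noiseless pixel):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_depth(a0, a1, a2, a3, f_mod_hz):
    """Depth from four phase-stepped correlation samples (0/90/180/270 deg)
    of an AMCW ToF pixel: phase = atan2(a3 - a1, a0 - a2),
    depth = c * phase / (4 * pi * f_mod)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

# Synthetic ideal pixel: samples for a 5 m target at 20 MHz modulation
f_mod = 20e6
true_phase = 4.0 * math.pi * f_mod * 5.0 / C
samples = [math.cos(true_phase + k * math.pi / 2.0) for k in range(4)]
print(round(amcw_depth(*samples, f_mod), 3))  # recovers 5.0 m
```

At 20 MHz the unambiguous range is c/(2f) ≈ 7.5 m; targets beyond that wrap around, one of the error sources (besides multipath) a full simulator must model.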
Ultra-low power high-dynamic range color pixel embedding RGB to r-g chromaticity transformation
NASA Astrophysics Data System (ADS)
Lecca, Michela; Gasparini, Leonardo; Gottardi, Massimo
2014-05-01
This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. The conversion is implemented with high dynamic range and no DC power consumption, and the auto-exposure capability of the sensor ensures capture of a high-quality chromatic signal, even in the presence of very bright illuminants or in darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor targeted at ultra-low-power applications for mobile devices, such as human-machine interfaces, gesture recognition, and face detection. The experiments show significant improvements of the proposed pixel with respect to standard cameras in terms of energy saving and accuracy of data acquisition. An application to skin-color-based description is presented.
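The RGB to r-g transformation the pixel embeds is a simple intensity normalization; this sketch shows the arithmetic (the pixel implements it in analog circuitry, not software) and why the result is useful: scaling the illumination leaves the chromaticity coordinates unchanged.

```python
def rg_chromaticity(r, g, b):
    """Normalized r-g chromaticity: intensity-invariant colour coordinates.
    b-chromaticity is redundant since r + g + b_chroma = 1."""
    s = r + g + b
    if s == 0:  # black pixel: chromaticity undefined, return the neutral point
        return (1 / 3, 1 / 3)
    return (r / s, g / s)

# The same surface under two illumination intensities maps to the same point:
print(rg_chromaticity(200, 100, 50))  # bright
print(rg_chromaticity(20, 10, 5))     # 10x darker, same chromaticity
```

This invariance is what makes the r-g space attractive for skin-color description under uncontrolled lighting.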
Close-Range Photogrammetric Measurement of Static Deflections for an Aeroelastic Supercritical Wing
NASA Technical Reports Server (NTRS)
Byrdsong, Thomas A.; Adams, Richard R.; Sandford, Maynard C.
1990-01-01
Close-range photogrammetric measurements were made of the lower wing surface of a full-span, aspect-ratio 10.3 aeroelastic supercritical research wing. The measurements were made during wind tunnel tests for quasi-steady pressure distributions on the wing. The tests were conducted in the NASA Langley Transonic Dynamics Tunnel at Mach numbers up to 0.90 and dynamic pressures up to 300 pounds per square foot. Deflection data were obtained for 57 locations on the wing lower surface using dual non-metric cameras. Representative data are presented as a graphical overview to show variations and trends of spar deflection with test variables. Comparative data are presented for photogrammetric and cathetometric results of measurements of the wing tip deflections. A tabulation of the basic measurements is presented in a supplement to this report.
Camera Traps Can Be Heard and Seen by Animals
Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356
NASA Astrophysics Data System (ADS)
Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.
2010-12-01
Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. 
These installations aim at monitoring variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-03-11
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-01-01
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
Generic Dynamic Environment Perception Using Smart Mobile Devices.
Danescu, Radu; Itu, Razvan; Petrovai, Andra
2016-10-17
The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection for mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system.
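The grid-update step described above can be sketched with a classic log-odds occupancy grid. This is a deliberate simplification of the paper's particle-based grid (scalar log-odds per cell instead of particle populations), with made-up cell coordinates and update weights:

```python
import math

# Log-odds increments and clamping bounds (hypothetical tuning values)
L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0

def update_grid(grid, occupied_cells, free_cells):
    """Fuse one segmented bird's-eye-view frame into the occupancy grid."""
    for cell in occupied_cells:
        grid[cell] = min(L_MAX, grid.get(cell, 0.0) + L_OCC)
    for cell in free_cells:
        grid[cell] = max(L_MIN, grid.get(cell, 0.0) + L_FREE)
    return grid

def occupancy_prob(grid, cell):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid.get(cell, 0.0)))

grid = {}
for _ in range(5):  # five consistent frames reinforce the obstacle cell
    update_grid(grid, occupied_cells=[(12, 7)], free_cells=[(12, 6)])
print(round(occupancy_prob(grid, (12, 7)), 3))  # near 1: confident obstacle
print(round(occupancy_prob(grid, (12, 6)), 3))  # below 0.5: likely free
```

Accumulating evidence over frames is what lets the system reject transient segmentation noise; the paper's particle representation additionally carries per-cell speed, enabling the cuboid tracking described above.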
Fuzzy logic controllers: A knowledge-based system perspective
NASA Technical Reports Server (NTRS)
Bonissone, Piero P.
1993-01-01
Over the last few years we have seen an increasing number of applications of Fuzzy Logic Controllers. These applications range from the development of auto-focus cameras to the control of subway trains, cranes, automobile subsystems (automatic transmissions), domestic appliances, and various consumer electronic products. In summary, we consider a Fuzzy Logic Controller to be a high-level language with its own local semantics, interpreter, and compiler, which enables us to quickly synthesize non-linear controllers for dynamic systems.
SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darne, C; Robertson, D; Alsanea, F
2016-06-15
Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm3) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed-focal-length objective lenses for these cameras was based on their ability to provide large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slave cameras to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
Range Finding with a Plenoptic Camera
2014-03-27
[Table-of-contents fragment: Experimental Results; Simulated Camera Analysis: Varying Lens Diameter; Simulated Camera Analysis: Varying Detector Size; Matching Framework; Simulated Camera Performance with SIFT.]
Giorgetti, Assuero; Burchielli, Silvia; Positano, Vincenzo; Kovalski, Gil; Quaranta, Angela; Genovesi, Dario; Tredici, Manuel; Duce, Valerio; Landini, Luigi; Trivella, Maria Giovanna; Marzullo, Paolo
2015-03-01
Data on the in vivo myocardial kinetics of (123)I-metaiodobenzylguanidine ((123)I-MIBG) are scarce and have always been obtained using planar acquisitions. To clarify the normal kinetics of (123)I-MIBG in vivo over time, we designed an experimental protocol using a 3-dimensional (3D) dynamic approach with a cadmium zinc telluride (CZT) camera. We studied 6 anesthetized pigs (mean body weight, 37 ± 4 kg). Left ventricular myocardial perfusion and sympathetic innervation were assessed using (99m)Tc-tetrofosmin (26 ± 6 MBq), (123)I-MIBG (54 ± 14 MBq), and a CZT camera. A normal perfusion/function match on gated SPECT was the inclusion criterion. A dynamic acquisition in list mode started simultaneously with the bolus injection of (123)I-MIBG, and data were collected every 5 min for the first 20 min and then at acquisition steps of 30, 60, 90, and 120 min. Each step was reconstructed using dedicated software and reframed (60 s/frame). On the reconstructed transaxial slice that best showed the left ventricular cavity, regions of interest were drawn to obtain myocardial and blood pool activities. Myocardial time-activity curves were generated by interpolating data between contiguous acquisition steps, corrected for radiotracer decay and injected dose, and fitted to a bicompartmental model. Time to myocardial maximum signal intensity (MSI), MSI value, radiotracer retention index (RI, myocardial activity/blood pool integral), and washout rate were calculated. The mediastinal signal was measured and fitted to a linear model. The myocardial MSI of (123)I-MIBG was reached within 5.57 ± 4.23 min (range, 2-12 min). The mean MSI was 0.426% ± 0.092%. Myocardial RI decreased over time and reached zero at 176 ± 31 min (range, 140-229 min). The ratio between myocardial and mediastinal signal at 15 and 125 min and extrapolated at 176 min and 4 h was 5.45% ± 0.61%, 4.33% ± 1.23% (not statistically significant vs. 15 min), 3.95% ± 1.46% (P < 0.03 vs. 125 min), and 3.63% ± 1.64% (P < 0.03 vs. 176 min), respectively. Mean global washout rate at 125 min was 15% ± 14% (range, 0%-34%), and extrapolated values at 176 min and 4 h were 18% ± 18% (range, 0.49%-45%) and 25% ± 23% (range, 1.7%-56.2%; not statistically significant vs. 176 min), respectively. 3D dynamic analysis of (123)I-MIBG suggests that myocardial peak uptake is reached more quickly than previously described. Myocardial RI decreases over time and, on average, is null about 3 h after injection. The combination of an early peak and variations in delayed myocardial uptake could result in a wide physiologic range of washout rates. Mediastinal activity appears to be constant over time and significantly lower than previously described in planar studies, resulting in a higher heart-to-mediastinum ratio. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
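As a rough illustration of two of the indices above, the sketch below computes a retention index and a planar-style washout rate from hypothetical decay-corrected counts. The definitions are common simplified forms, not necessarily the exact formulas used in the study:

```python
# Illustrative computation of two reported indices, using hypothetical
# activity values (not the study's raw data).
def retention_index(myocardial_activity, blood_pool_integral):
    """RI = myocardial activity / integral of the blood-pool input curve."""
    return myocardial_activity / blood_pool_integral

def washout_rate(early_counts, late_counts):
    """Percent loss of myocardial signal between early and late frames."""
    return 100.0 * (early_counts - late_counts) / early_counts

# hypothetical decay-corrected myocardial counts at 15 min and 125 min
early, late = 1000.0, 850.0
print(f"washout rate: {washout_rate(early, late):.0f}%")   # 15%
```

With these toy numbers the washout rate matches the order of magnitude reported at 125 min (15% ± 14%), but the agreement is by construction, not a validation.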
Earth-orbiting extreme ultraviolet spectroscopic mission: SPRINT-A/EXCEED
NASA Astrophysics Data System (ADS)
Yoshikawa, I.; Tsuchiya, F.; Yamazaki, A.; Yoshioka, K.; Uemizu, K.; Murakami, G.; Kimura, T.; Kagitani, M.; Terada, N.; Kasaba, Y.; Sakanoi, T.; Ishii, H.; Uji, K.
2012-09-01
The EXCEED (Extreme Ultraviolet Spectroscope for Exospheric Dynamics) mission is an Earth-orbiting extreme ultraviolet (EUV) spectroscopic mission and the first in the SPRINT series being developed by ISAS/JAXA. It will be launched in the summer of 2013. EUV spectroscopy is suitable for observing tenuous gases and plasmas around planets in the solar system (e.g., Mercury, Venus, Mars, Jupiter, and Saturn). An advantage of remote-sensing observation is that it takes a direct picture of the plasma dynamics and distinguishes explicitly between spatial and temporal variability. One of the primary observation targets is the inner magnetosphere of Jupiter, whose plasma dynamics is dominated by planetary rotation. Previous observations have shown a hot electron population of a few percent in the inner magnetosphere, with a temperature 100 times higher than that of the background thermal electrons. Though the hot electrons have a significant impact on the energy balance of the inner magnetosphere, their generation process has not yet been elucidated. In the EUV range, a number of emission lines originate from plasmas distributed in Jupiter's inner magnetosphere. The EXCEED spectrograph is designed to have a wavelength range of 55-145 nm with a minimum spectral resolution of 0.4 nm, enabling the electron temperature and ion composition in the inner magnetosphere to be determined. Another primary objective is to investigate the unresolved problem of atmospheric escape to space. Although there have been some in-situ observations by orbiters, our knowledge is still limited. The EXCEED mission plans to make imaging observations of the plasmas around Venus and Mars to determine the amounts of escaping atmosphere. The instrument's field of view (FOV) is wide enough to image, at one time, everything from the interaction region between the solar wind and planetary plasmas down to the tail region.
This will provide us with information about outward-flowing plasmas, e.g., their composition, rate, and dependence on solar activity. EXCEED has two mission instruments: the EUV spectrograph and a target guide camera that is sensitive to visible light. The EUV spectrograph is designed to have a wavelength range of 55-145 nm with a spectral resolution of 0.4-1.0 nm. The spectrograph slits have a FOV of 400 x 140 arcseconds (maximum). The optics of the instrument consist of a primary mirror with a diameter of 20 cm, a laminar-type grating, and a 5-stage micro-channel plate assembly with a resistive anode encoder. To achieve high efficiencies, the surfaces of the primary mirror and the grating are coated with CVD-SiC. Because of the large primary mirror and high efficiencies, good temporal resolution and complete spatial coverage are expected for Io plasma torus observations. A feasibility study using the spectral diagnosis method shows that EXCEED can determine Io plasma torus parameters, such as the electron density, temperatures, and hot-electron fraction, using an exposure time of 50 minutes. The target guide camera will be used to capture the target and guide the observation area of interest to the slit. Emissions from outside the slit's FOV will be reflected by the front of the slit and guided to the target guide camera. The guide camera's FOV is 240" x 240". The camera will take an image every 3 seconds, and the image is sent to a mission data processor (MDP), which calculates the centroid of the image. During an observation, the bus system controls the attitude to keep the centroid position of the target in the guide camera with an accuracy of ±5 arc-seconds. With the help of the target guide camera, we will take spectral images with a long exposure time of 50 minutes and a good spatial resolution of 20 arc-seconds.
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent need to protect military bases, convoys, and patrols has driven the development of multi-sensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses the technical possibilities and limitations of using uncooled IR cameras in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. An IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false-alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities of detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared. The spatial resolution required for detection, recognition, and identification is calculated. Detection ranges were simulated using a new model for predicting target acquisition performance based on the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model ties range performance to image quality. The scope of the presented analysis is limited to the estimation of detection, recognition, and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is the most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition, and identification ranges were calculated, and the results for devices with selected technical specifications are compared and discussed.
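As a hedged illustration of the kind of range estimate described above, the sketch below applies the classic Johnson cycle criteria (the TTP metric used in the paper is more elaborate) to a hypothetical uncooled microbolometer camera. The target size, focal length, pixel pitch, and cycle counts are textbook-style assumptions, not values from the paper:

```python
# Rough Johnson-criteria range estimate for a thermal camera.
# All sensor parameters are hypothetical placeholders.
def max_range_m(target_size_m, focal_mm, pitch_um, cycles_needed):
    """Range at which 'cycles_needed' bar cycles fit across the target.
    One cycle spans two detector pixels (Nyquist)."""
    cycle_subtense_rad = 2.0 * (pitch_um * 1e-6) / (focal_mm * 1e-3)
    return target_size_m / (cycles_needed * cycle_subtense_rad)

TARGET = 1.8        # standing person, metres (assumed critical dimension)
FOCAL = 50.0        # lens focal length, mm (assumed)
PITCH = 17.0        # microbolometer pixel pitch, micrometres (assumed)

# classic Johnson 50%-probability cycle counts per task
for task, n in [("detection", 1.0), ("recognition", 4.0), ("identification", 6.4)]:
    print(f"{task:15s}: {max_range_m(TARGET, FOCAL, PITCH, n):6.0f} m")
```

The ordering (detection range longest, identification shortest) mirrors the detection/recognition/identification hierarchy discussed in the abstract; absolute numbers depend entirely on the assumed optics.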
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
FOREX-A Fiber Optics Diagnostic System For Study Of Materials At High Temperatures And Pressures
NASA Astrophysics Data System (ADS)
Smith, D. E.; Roeske, F.
1983-03-01
We have successfully fielded a Fiber Optics Radiation EXperiment system (FOREX) designed for measuring material properties at high temperatures and pressures on an underground nuclear test. The system collects light from radiating materials and transmits it through several hundred meters of optical fibers to a recording station consisting of a streak camera with film readout. The use of fiber optics provides a faster time response than can presently be obtained with equalized coaxial cables over comparable distances. Fibers also have significant cost and physical size advantages over coax cables. The streak camera achieves a much higher information density than an equivalent oscilloscope system, and it also serves as the light detector. The result is a wide bandwidth high capacity system that can be fielded at a relatively low cost in manpower, space, and materials. For this experiment, the streak camera had a 120 ns time window with a 1.2 ns time resolution. Dynamic range for the system was about 1000. Beam current statistical limitations were approximately 8% for a 0.3 ns wide data point at one decade above the threshold recording intensity.
A zonal wavefront sensor with multiple detector planes
NASA Astrophysics Data System (ADS)
Pathak, Biswajit; Boruah, Bosanta R.
2018-03-01
A conventional zonal wavefront sensor estimates the wavefront from data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor that comprises multiple detector planes instead of a single one. The proposed sensor is based on an array of custom-designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array, and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam-splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras can offer several advantages in wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that cannot be achieved by the conventional system. It can also provide an enhanced dynamic range and reduced crosstalk. We present results from a proof-of-principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.
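The claimed centroiding advantage can be illustrated with a toy simulation: an intensity-weighted centroid is computed independently on several detector planes, and the independent estimates are averaged. The Gaussian spot model, noise level, and three-plane count below are assumptions for illustration only, not the authors' setup:

```python
import numpy as np

# Toy model: the same focal spot is imaged on several detector planes;
# averaging the independent centroid estimates reduces the noise.
def centroid(img):
    """Intensity-weighted centroid (row, col) of a focal-spot image."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

rng = np.random.default_rng(0)
true_rc = (15.0, 21.0)                    # true spot position, pixels
planes = []
for _ in range(3):                        # three detector planes / cameras
    r, c = np.indices((32, 32))
    spot = np.exp(-((r - true_rc[0])**2 + (c - true_rc[1])**2) / 8.0)
    planes.append(spot + 0.005 * rng.standard_normal(spot.shape))

estimates = np.array([centroid(p) for p in planes])
print("per-plane:", np.round(estimates, 2))
print("averaged :", np.round(estimates.mean(axis=0), 2))
```

For independent noise realisations the standard error of the averaged centroid falls roughly as 1/√N with N planes, which is one plausible reading of the "superior inherent centroid detection accuracy" claim.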
Virtual-stereo fringe reflection technique for specular free-form surface testing
NASA Astrophysics Data System (ADS)
Ma, Suodong; Li, Bo
2016-11-01
Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, and spectral imagers. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has long been a major barrier to their manufacture and application. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy, and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement much more expensive. Furthermore, high-precision synchronization between the cameras is a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It achieves absolute profiles with the help of only a single biprism and one camera, avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
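The reported dependence of depth perception on camera geometry is consistent with the standard parallel-axis stereo relation δZ ≈ Z²·δd / (f·B), where B is the intercamera baseline, f the focal length, and δd the disparity measurement error. The sketch below evaluates it for hypothetical values in the 1-3 m working range; it does not reproduce the report's actual configurations:

```python
# Toy numbers illustrating that stereo depth resolution improves with
# baseline B and focal length f, and degrades as Z^2. All hypothetical.
def depth_resolution_mm(range_mm, baseline_mm, focal_mm, pixel_mm):
    """delta-Z ~ Z^2 * delta-d / (f * B) for a parallel-axis stereo pair."""
    return range_mm**2 * pixel_mm / (focal_mm * baseline_mm)

Z = 2000.0          # working distance 2 m, middle of the 1-3 m range
pix = 0.01          # 10 micrometre disparity quantum (assumed)

for B, f in [(100.0, 8.0), (200.0, 8.0), (200.0, 16.0)]:
    dz = depth_resolution_mm(Z, B, f, pix)
    print(f"B={B:5.0f} mm  f={f:4.1f} mm  ->  dZ = {dz:5.1f} mm")
```

Doubling either the baseline or the focal length halves δZ in this model, matching the qualitative trend stated in the report (the report's distortion analysis is more detailed than this resolution-only sketch).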
Radiometric infrared focal plane array imaging system for thermographic applications
NASA Technical Reports Server (NTRS)
Esposito, B. J.; Mccafferty, N.; Brown, R.; Tower, J. R.; Kosonocky, W. F.
1992-01-01
This document describes research performed under the Radiometric Infrared Focal Plane Array Imaging System for Thermographic Applications contract. This research investigated the feasibility of using platinum silicide (PtSi) Schottky-barrier infrared focal plane arrays (IR FPAs) for NASA Langley's specific radiometric thermal imaging requirements. The initial goal of this design was to develop a high spatial resolution radiometer with an NETD of 1 percent of the temperature reading over the range of 0 to 250 C. The proposed camera design developed during this study and described in this report provides: (1) high spatial resolution (full-TV resolution); (2) high thermal dynamic range (0 to 250 C); (3) the ability to image rapid, large thermal transients utilizing electronic exposure control (commandable dynamic range of 2,500,000:1 with exposure control latency of 33 ms); (4) high uniformity (0.5 percent nonuniformity after correction); and (5) high thermal resolution (0.1 C at 25 C background and 0.5 C at 250 C background).
Station report on the Goddard Space Flight Center (GSFC) 1.2 meter telescope facility
NASA Technical Reports Server (NTRS)
Mcgarry, Jan F.; Zagwodzki, Thomas W.; Abbott, Arnold; Degnan, John J.; Cheek, Jack W.; Chabot, Richard S.; Grolemund, David A.; Fitzgerald, Jim D.
1993-01-01
The 1.2 meter telescope system was built for the Goddard Space Flight Center (GSFC) in 1973-74 by the Kollmorgen Corporation as a highly accurate tracking telescope. The telescope is an azimuth-elevation mounted six mirror Coude system. The facility has been used for a wide range of experimentation including helioseismology, two color refractometry, lunar laser ranging, satellite laser ranging, visual tracking of rocket launches, and most recently satellite and aircraft streak camera work. The telescope is a multi-user facility housed in a two story dome with the telescope located on the second floor above the experimenter's area. Up to six experiments can be accommodated at a given time, with actual use of the telescope being determined by the location of the final Coude mirror. The telescope facility is currently one of the primary test sites for the Crustal Dynamics Network's new UNIX based telescope controller software, and is also the site of the joint Crustal Dynamics Project / Photonics Branch two color research into atmospheric refraction.
High performance gel imaging with a commercial single lens reflex camera
NASA Astrophysics Data System (ADS)
Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.
2011-03-01
A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.
Development of a drive system for a sequential space camera
NASA Technical Reports Server (NTRS)
Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.
1976-01-01
An electronically commutated dc motor is reported for driving the camera claw and magazine, and a stepper motor for driving the shutter, with the two motors synchronized electrically. Subsequent tests on the breadboard confirmed the concept, though further development beyond this study is needed. The breadboard testing also established that the electronically commutated motor can control speed over a wide dynamic range and has a high torque capability for accelerating loads. This performance suggested the possibility of eliminating the clutch from the system, while retaining all of the other mechanical features of the DAC, if the requirement for independent shutter speeds and frame rates can be removed. Therefore, as a final step in the study, the breadboard shutter and shutter drive were returned to the original DAC configuration, while retaining the brushless dc motor drive.
Sub-picosecond streak camera measurements at LLNL: From IR to x-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Shepherd, R; Booth, R
An ultrafast streak camera with sub-picosecond resolution has recently been developed at LLNL. The camera is a versatile instrument with a wide operating wavelength range. A temporal resolution of up to 300 fs can be achieved, with routine operation at 500 fs. The streak camera has been operated over a wide wavelength range, from the IR to x-rays up to 2 keV. In this paper we briefly review the main design features that give the streak camera its unique properties and present several of its scientific applications: (1) streak camera characterization using a Michelson interferometer in the visible range; (2) a temporally resolved study of a transient x-ray laser at 14.7 nm, which enabled us to vary the x-ray laser pulse duration from ~2-6 ps by changing the pump laser parameters; and (3) an example of a time-resolved spectroscopy experiment with the streak camera.
Physical and engineering aspect of carbon beam therapy
NASA Astrophysics Data System (ADS)
Kanai, Tatsuaki; Kanematsu, Nobuyuki; Minohara, Shinichi; Yusa, Ken; Urakabe, Eriko; Mizuno, Hideyuki; Iseki, Yasushi; Kanazawa, Mitsutaka; Kitagawa, Atsushi; Tomitani, Takehiro
2003-08-01
The conformal irradiation system of HIMAC has been upgraded for a clinical trial using a layer-stacking technique. The system was developed to localize the irradiation dose to the target volume more effectively than the present system. With dynamic control of the beam-modifying devices (a pair of wobbler magnets, a multileaf collimator, and a range shifter) during irradiation, more conformal radiotherapy can be achieved. The system, which has to be adequately safe for patient irradiations, was constructed and tested from the viewpoints of safety and of the quality of the dose localization achieved. A secondary beam line has been constructed for the use of radioactive beams in heavy-ion radiotherapy. The spot-scanning method has been adopted for the beam delivery system of the radioactive beam. Dose distributions of the spot beam were measured and analyzed, taking into account the aberration of the beam optics. Distributions of the stopped positron-emitter beam can be observed by PET. A pencil beam of the positron emitter, about 1 mm in size, can also be used to measure ranges of the test beam in patients using a positron camera. The positron camera, consisting of a pair of Anger-type scintillation detectors, has been developed for this verification before treatment. The wash-out effect of the positron emitter was examined using the installed positron camera. In this report, the present status of the HIMAC irradiation system is described in detail.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable to improve safety and reduce pilot workload. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success in solving this problem using an image sequence from a single moving camera. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion with measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
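As a drastically simplified stand-in for the extended Kalman filter described above, the sketch below recursively refines an inverse-range state from a feature's image shift under known lateral camera motion. All numbers are hypothetical, the measurement is noiseless and scalar, and the real filter also predicts feature locations in future images:

```python
# Scalar Kalman sketch: with known lateral camera translation b per
# frame, a feature's image shift is du ~ f * b * rho, where rho = 1/Z
# is inverse range. Each frame refines rho recursively.
f_pix = 800.0            # focal length in pixels (assumed)
b = 0.5                  # lateral camera translation per frame, metres
Z_true = 40.0            # true range to the feature, metres

rho, P = 1.0 / 20.0, 1.0     # initial inverse-range guess and variance
R = 0.25                     # image-shift measurement noise variance (px^2)

for _ in range(20):
    du_meas = f_pix * b * (1.0 / Z_true)     # noiseless shift, for clarity
    H = f_pix * b                            # measurement Jacobian d(du)/d(rho)
    K = P * H / (H * P * H + R)              # Kalman gain
    rho += K * (du_meas - H * rho)           # measurement update
    P *= (1.0 - K * H)                       # covariance update

print(f"estimated range: {1.0 / rho:.2f} m (true {Z_true} m)")
```

Because the measurement is linear in ρ, an ordinary Kalman filter suffices here; the "extended" machinery in the paper handles the nonlinear projection geometry and full camera motion.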
Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise to provide arbitrary refocusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique for recording light fields with an adaptive depth of focus. Between multiple exposures (or multiple recordings of the light field), the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
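Why moving the MLA relative to the sensor shifts the in-focus object distance can be sketched with thin-lens arithmetic for a focused (Keplerian-mode) plenoptic camera. All focal lengths and spacings below are made-up illustrative values, not the authors' design:

```python
# Thin-lens sketch: the microlenses re-image the main lens's
# intermediate image onto the sensor, so changing the MLA-sensor gap b
# changes which object distance produces sharp microimages.
def in_focus_object_mm(b_mm, f_micro=0.5, f_main=50.0, L_main_mla=60.0):
    """Object distance whose microimages are sharp for MLA-sensor gap b."""
    a = 1.0 / (1.0 / f_micro - 1.0 / b_mm)    # MLA -> intermediate image
    v = L_main_mla - a                        # main lens -> intermediate image
    return 1.0 / (1.0 / f_main - 1.0 / v)     # main-lens object conjugate

for b in (0.55, 0.60, 0.65):                  # three MLA-sensor spacings, mm
    print(f"b = {b:.2f} mm -> focused object at {in_focus_object_mm(b):7.1f} mm")
```

Each spacing yields a different sharply imaged object distance, so a sequence of exposures at different spacings stacks several in-focus slices, which is the essence of the adaptive-DOF recording described above.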
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, such as those found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. Data acquisition is performed using two standard smart phone cameras, and the data are later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed calibration method based on a disparity-space parameterisation and the single-cluster PHD filter.
Large Area Field of View for Fast Temporal Resolution Astronomy
NASA Astrophysics Data System (ADS)
Covarrubias, Ricardo A.
2018-01-01
Scientific CMOS (sCMOS) technology is especially relevant for high temporal resolution astronomy, combining high resolution and a large field of view with very fast frame rates, without sacrificing ultra-low-noise performance. Solar astronomy, near-Earth object detection, space debris tracking, transient observations, and wavefront sensing are among the many applications of this technology. Andor Technology is currently developing a next-generation, very large area sCMOS camera with extremely low noise, rapid frame rates, high resolution, and wide dynamic range.
Beam/seam alignment control for electron beam welding
Burkhardt, Jr., James H.; Henry, J. James; Davenport, Clyde M.
1980-01-01
This invention relates to a dynamic beam/seam alignment control system for electron beam welds utilizing video apparatus. The system includes automatic control of workpiece illumination, near infrared illumination of the workpiece to limit the range of illumination and camera sensitivity adjustment, curve fitting of seam position data to obtain an accurate measure of beam/seam alignment, and automatic beam detection and calculation of the threshold beam level from the peak beam level of the preceding video line to locate the beam or seam edges.
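The thresholding scheme described in the claim (a threshold derived from the peak beam level of the preceding video line, used to locate the beam or seam edges) can be sketched as follows. The threshold fraction and the synthetic video lines are illustrative assumptions, not values from the patent:

```python
# Sketch of adaptive per-line thresholding: each video line is
# thresholded at a fixed fraction of the PEAK of the PRECEDING line,
# and the beam/seam edges are taken at the threshold crossings.
def find_edges(line, threshold):
    """Indices of the first and last samples at or above threshold."""
    above = [i for i, v in enumerate(line) if v >= threshold]
    return (above[0], above[-1]) if above else None

def scan(video_lines, fraction=0.5):
    edges = []
    prev_peak = max(video_lines[0])
    for line in video_lines[1:]:
        edges.append(find_edges(line, fraction * prev_peak))
        prev_peak = max(line)          # peak feeds the NEXT line's threshold
    return edges

# two synthetic video lines with a bright beam spot around index 3..5
lines = [
    [0, 1, 2, 8, 10, 9, 2, 1, 0, 0],
    [0, 0, 3, 9, 12, 10, 4, 1, 0, 0],
]
print(scan(lines))   # [(3, 5)]
```

Deriving each line's threshold from the previous line's peak keeps the edge detection robust to slow drifts in overall signal level, which is plausibly the motivation behind the patented scheme.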
New Approach for Environmental Monitoring and Plant Observation Using a Light-Field Camera
NASA Astrophysics Data System (ADS)
Schima, Robert; Mollenhauer, Hannes; Grenzdörffer, Görres; Merbach, Ines; Lausch, Angela; Dietrich, Peter; Bumberger, Jan
2015-04-01
The aim of gaining a better understanding of ecosystems and the processes in nature accentuates the need to observe exactly these processes at higher temporal and spatial resolution. In the field of environmental monitoring, an inexpensive and field-applicable imaging technique to derive three-dimensional information about plants and vegetation would represent a decisive contribution to the understanding of the interactions and dynamics of ecosystems. This is particularly true for the monitoring of plant growth and the frequently mentioned lack of morphological information about plants, e.g. plant height, vegetation canopy, leaf position or leaf arrangement. Therefore, an innovative and inexpensive light-field (plenoptic) camera, the Lytro LF, and a stereo vision system based on two industrial cameras were tested and evaluated as possible measurement tools for the given monitoring purpose. In this instance, the use of a light-field camera offers the promising opportunity of providing three-dimensional information from one single shot, without any additional requirements during the field measurements, which represents a substantial methodological improvement in environmental research and monitoring. Since the Lytro LF was designed as an everyday consumer camera, it supports neither depth or distance estimation nor external triggering by default. Therefore, several technical modifications and a calibration routine had to be worked out during the preliminary study. As a result, the light-field camera proved suitable as a depth and distance measurement tool with a measuring range of approximately one meter. Consequently, this confirms the assumption that a light-field camera holds the potential of being a promising measurement tool for environmental monitoring purposes, especially with regard to the low methodological effort in the field.
Within the framework of the Global Change Experimental Facility Project, founded by the Helmholtz Centre for Environmental Research, and its large-scale field experiments investigating the influence of climate change on different forms of land use, both techniques were installed and evaluated in a long-term experiment on a pilot-scale maize field in late 2014. On this basis, it was possible to track the growth of the plants over time, in good agreement with the reference measurements carried out by hand on a weekly basis. In addition, the experiment showed that the light-field vision approach is applicable to monitoring crop growth under field conditions, although it is limited to close-range applications. Since this work was intended as a proof of concept, further research is recommended, especially with respect to the automation and evaluation of data processing. Altogether, this study offers researchers elementary groundwork for improving the use of the introduced light-field imaging technique for monitoring plant growth dynamics and the three-dimensional modeling of plants under field conditions.
Microfluidic flow spectrometer
NASA Astrophysics Data System (ADS)
Vázquez-Vergara, Pamela; Torres Rojas, Aimee M.; Guevara-Pantoja, Pablo E.; Corvera Poiré, Eugenia; Caballero-Robledo, Gabriel A.
2017-07-01
We present a microfluidic device which allows one to study the dynamics of oscillatory flows for a frequency range between 1 and 300 Hz. The fluid in the microdevice could be Newtonian, viscoelastic, or even a biofluid, since the device is made of PMMA, which makes it biocompatible and free of elastomeric elements. Coupling a piezoelectric to a micropiston allows one to impose periodic movement to the fluid, with zero mean flow and amplitudes of up to 20 μm, within the microchannels in which the dynamics is studied. The use of a fast camera coupled to a microscope allows one to study the dynamics of 1 μm tracer particles and interfaces at an image acquisition rate as fast as 5000 frames per second. The fabrication of the device is easy and cost-effective, since it is based on the use of a micromilling machine. The dynamics of a Newtonian fluid is studied as a proof of principle.
High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces
NASA Astrophysics Data System (ADS)
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong
2012-10-01
This paper presents a novel 3-D scanning technique for high-reflective surfaces based on the phase-shifting fringe projection method. A high dynamic range fringe acquisition (HDRFA) technique is developed to process the fringe images reflected from shiny surfaces, generating a synthetic fringe image by fusing raw fringe patterns acquired with different camera exposure times and projector illumination intensities. A fringe image fusion algorithm avoids saturation and under-illumination by choosing, for each pixel, the raw fringe value with the highest fringe modulation intensity. A method for auto-selection of the HDRFA parameters is developed, which greatly increases measurement automation. Optimizing the HDRFA parameters gives the synthetic fringes a higher signal-to-noise ratio (SNR) under ambient light. Experimental results show that the proposed technique can successfully measure objects with high-reflective surfaces and is insensitive to ambient light.
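The fusion rule above (keep, per pixel, the exposure with the highest fringe modulation) can be sketched for an N-step phase-shifting sequence. This is a simplified sketch: the paper's saturation masking and parameter auto-selection are omitted, and the array shapes are illustrative.

```python
import numpy as np

def fuse_fringes(stacks):
    """Fuse phase-shifted fringe sets taken at different exposures.

    stacks: array (E, N, H, W) -- E exposures, N phase steps.
    Per pixel, keep the exposure with the highest fringe modulation
    (sketch of the fusion rule; saturation masking omitted).
    """
    E, N, H, W = stacks.shape
    delta = 2 * np.pi * np.arange(N) / N            # phase-shift steps
    s = np.tensordot(stacks, np.sin(delta), axes=([1], [0]))
    c = np.tensordot(stacks, np.cos(delta), axes=([1], [0]))
    mod = (2.0 / N) * np.sqrt(s**2 + c**2)          # (E, H, W) modulation
    best = np.argmax(mod, axis=0)                   # winning exposure per pixel
    return np.take_along_axis(
        stacks, best[None, None, :, :], axis=0)[0]  # (N, H, W) fused fringes
```

A well-modulated exposure wins over a flat (saturated or dark) one at every pixel, which is exactly the behavior the paper relies on to build the synthetic fringe image.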
Video enhancement workbench: an operational real-time video image processing system
NASA Astrophysics Data System (ADS)
Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.
1993-01-01
Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
Color sensitivity of the multi-exposure HDR imaging process
NASA Astrophysics Data System (ADS)
Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.
2013-04-01
Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene captured with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance at every pixel. During export, white balance settings are applied and the images are stitched, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
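The irradiance-recovery step can be sketched under the simplifying assumption of a linear camera response (the paper estimates the actual response curves). The hat-shaped weight that discounts under- and over-exposed pixels is a common choice in multi-exposure merging, not necessarily the authors' exact scheme.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge LDR exposures into a radiance map, assuming linear response.

    images: list of float arrays with values in [0, 1].
    exposure_times: matching exposure times in seconds.
    Each pixel's irradiance estimate z/t is averaged with a hat weight
    that discounts under- and over-exposed samples.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for z, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight, peaks at mid-gray
        num += w * z / t
        den += w
    return num / np.maximum(den, 1e-9)    # weighted irradiance per pixel
```

With a nonlinear sensor, `z` would first be pushed through the inverse of the estimated response curve; the weighting and averaging stay the same.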
SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platt, M; Platt, M; Lamba, M
2016-06-15
Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions, and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10-image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
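The averaging procedure behind these measurements can be sketched as follows. The patch size and depth values are hypothetical stand-ins; the study varied both the number of frames averaged and the sensor area.

```python
import numpy as np

def estimate_shift(frames_before, frames_after, half=20):
    """Estimate a table shift from two stacks of TOF range images.

    Each stack has shape (n_frames, H, W), depths in mm.  Averaging a
    central patch across many frames suppresses per-pixel range noise,
    which is why more frames yield a smaller minimum detectable shift.
    """
    def central_mean(stack):
        h, w = stack.shape[1] // 2, stack.shape[2] // 2
        patch = stack[:, h - half:h + half, w - half:w + half]
        return patch.mean()
    return central_mean(frames_after) - central_mean(frames_before)
```

Averaging n frames over an m-pixel patch reduces the standard error of the mean depth by roughly sqrt(n * m), consistent with the reported error shrinking from 0.852 mm (one image) to 0.004 mm (one thousand images).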
NASA Astrophysics Data System (ADS)
Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Kang, Xiaoyu; Wang, Guodong; Zhang, Xianghan; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin
2016-08-01
The aim of this article is to investigate the influence of a tracer injection dose (ID) and camera integration time (IT) on quantifying pharmacokinetics of Cy5.5-GX1 in gastric cancer BGC-823 cell xenografted mice. Based on three factors, including whether or not to inject free GX1, the ID of Cy5.5-GX1, and the camera IT, 32 mice were randomly divided into eight groups and received 60-min dynamic fluorescence imaging. Gurfinkel exponential model (GEXPM) and Lammertsma simplified reference tissue model (SRTM) combined with a singular value decomposition analysis were used to quantitatively analyze the acquired dynamic fluorescent images. The binding potential (Bp) and the sum of the pharmacokinetic rate constants (SKRC) of Cy5.5-GX1 were determined by the SRTM and EXPM, respectively. In the tumor region, the SKRC value exhibited an obvious trend with change in the tracer ID, but the Bp value was not sensitive to it. Both the Bp and SKRC values were independent of the camera IT. In addition, the ratio of the tumor-to-muscle region was correlated with the camera IT but was independent of the tracer ID. Dynamic fluorescence imaging in conjunction with a kinetic analysis may provide more quantitative information than static fluorescence imaging, especially for a priori information on the optimal ID of targeted probes for individual therapy.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to run applications such as real-time optical flow estimation.
Penrose high-dynamic-range imaging
NASA Astrophysics Data System (ADS)
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2016-05-01
High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are presented for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
Human tracking over camera networks: a review
NASA Astrophysics Data System (ADS)
Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang
2017-12-01
In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.
NASA Astrophysics Data System (ADS)
Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin
2018-05-01
Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.
NASA Astrophysics Data System (ADS)
Sanchez-Lavega, A.; Hueso, R.; Perez-Hoyos, S.; Iñurrigarro, P.; Mendikoa, I.; Rojas, J. F.
2016-12-01
We present the results of a long-term campaign of imaging Jupiter's cloud morphology and zonal winds between September 2015 and August 2016 in the 0.38-1.7 μm spectral range. We use the PlanetCam lucky-imaging camera at the 2.2 m telescope at Calar Alto Observatory in Spain and, for the optical range, the contribution of a network of observers to the Planetary Virtual Observatory Laboratory database (PVOL-IOPW at http://pvol.ehu.eus). We have complemented the study with Hubble Space Telescope WFC3 camera images taken in the 0.275-0.89 μm spectral range during the OPAL program on 9 February 2016. The PlanetCam images have been calibrated in radiance using spectrophotometric standard stars, providing absolute reflectivity across the disk in a large series of broadband and narrowband filters sensitive to the altitude distribution and size of aerosols above the ammonia cloud level, and to the spectral dependence of the chromophore coloring agents. The evolution of cloud morphology has been studied at a horizontal resolution ranging from 150 to 1000 km. Zonal wind profiles have been retrieved throughout the observing period by tracking cloud motions spanning the latitude range from -80° to +77°. Combining all these results, we characterized the 3D dynamical state and the cloud and haze distribution in Jupiter's atmosphere in the altitude range between 10 mbar and 1.5 bar before and during Juno's initial exploration.
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided using range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time for the range camera in order to precisely determine the depth information.
Yurduseven, Okan; Marks, Daniel L; Fromenteze, Thomas; Smith, David R
2018-03-05
We present a reconfigurable, dynamic beam steering holographic metasurface aperture to synthesize a microwave camera at K-band frequencies. The aperture consists of a 1D printed microstrip transmission line with the front surface patterned into an array of slot-shaped subwavelength metamaterial elements (or meta-elements) dynamically tuned between "ON" and "OFF" states using PIN diodes. The proposed aperture synthesizes a desired radiation pattern by converting the waveguide-mode to a free space radiation by means of a binary modulation scheme. This is achieved in a holographic manner; by interacting the waveguide-mode (reference-wave) with the metasurface layer (hologram layer). It is shown by means of full-wave simulations that using the developed metasurface aperture, the radiated wavefronts can be engineered in an all-electronic manner without the need for complex phase-shifting circuits or mechanical scanning apparatus. Using the dynamic beam steering capability of the developed antenna, we synthesize a Mills-Cross composite aperture, forming a single-frequency all-electronic microwave camera.
Nuclear medicine imaging system
Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George
1986-01-07
A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter-wave imaging, infrared imaging and visible-light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the latter being the most important data for autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on diode-pumped solid-state lasers that are passively Q-switched at 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths below 900 nm. The receiver is the image intensifier tube's microchannel plate coupled into a high-sensitivity charge-coupled-device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame rate can be changed according to the guidance requirement, with a modifiable range gate. The instantaneous field of view of the system is 2 × 2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field. Two different kinds of buildings were chosen as targets, located at ranges from 200 m up to 1000 m. To simulate the dynamic change of range between missile and target, the seeker system was placed on a truck running along a road at a specified speed. The test results show good image quality and good performance of the seeker system.
CATE 2016 Indonesia: Camera, Software, and User Interface
NASA Astrophysics Data System (ADS)
Kovac, S. A.; Jensen, L.; Hare, H. S.; Mitchell, A. M.; McKay, M. A.; Bosh, R.; Watson, Z.; Penn, M.
2016-12-01
The Citizen Continental-America Telescopic Eclipse (Citizen CATE) Experiment will use a fleet of 60 identical telescopes across the United States to image the inner solar corona during the 2017 total solar eclipse. As a proof of concept, five sites were hosted along the path of totality during the 2016 total solar eclipse in Indonesia. Tanjung Pandan, Belitung, Indonesia was the first site to experience totality. This site had the best seeing conditions and focus, resulting in the highest quality images, and proved that the equipment to be used is capable of recording high-quality images of the solar corona. Because 60 sites will be funded, each setup needs to be cost-effective. This requires an inexpensive camera, which consequently has a small dynamic range. To compensate for the corona's intensity drop-off factor of 1,000, images are taken at seven frames per second, at exposures of 0.4 ms, 1.3 ms, 4.0 ms, 13 ms, 40 ms, 130 ms, and 400 ms. Using MATLAB software, we are able to capture a high dynamic range with an Arduino that controls the 2448 × 2048 CMOS camera. A major component of this project is to train average citizens to use the software, meaning it needs to be as user-friendly as possible. The CATE team is currently working with MathWorks to create a graphical user interface (GUI) that will make data collection run smoothly. This interface will include tabs for alignment, focus, calibration data, drift data, GPS, totality, and a quick-look function. This work was made possible through the National Solar Observatory Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation (NSF). The NSO Training for 2017 Citizen CATE Experiment, funded by NASA (NASA NNX16AB92A), also provided support for this project. The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the NSF.
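The stated exposure ladder can be checked with a few lines. The near-sqrt(10) step size between adjacent exposures is an observation computed from the listed values, not a statement from the text.

```python
# Exposure ladder from the text (milliseconds).
exposures_ms = [0.4, 1.3, 4.0, 13.0, 40.0, 130.0, 400.0]

# Total exposure span: a factor of 1000, matching the corona's
# stated ~1000x intensity drop-off from the inner to the outer corona.
span = max(exposures_ms) / min(exposures_ms)

# Adjacent exposures step by roughly sqrt(10) ~ 3.2x, so neighbouring
# frames overlap comfortably within a single frame's dynamic range.
ratios = [b / a for a, b in zip(exposures_ms, exposures_ms[1:])]
```

This kind of geometric ladder is the standard way to extend a small per-frame dynamic range across a scene with a large radiance gradient.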
NASA Astrophysics Data System (ADS)
Garnello, A.; Dye, D. G.; Bogle, R.; Hough, M.; Raab, N.; Dominguez, S.; Rich, V. I.; Crill, P. M.; Saleska, S. R.
2016-12-01
Global climate models predict a 50%-85% decrease in permafrost area in northern regions by 2100 due to increased temperature and precipitation variability, potentially releasing large stores of carbon as greenhouse gases (GHG) through microbial activity. Linking belowground biogeochemical processes with observable aboveground plant dynamics would greatly increase the ability to track and model GHG emissions from permafrost thaw, but current research has yet to satisfactorily develop this link. We hypothesized that seasonal patterns in peatland biogeochemistry manifest themselves as observable plant phenology due to the tight coupling resulting from plant-microbial interactions. We tested this by using an automated, tower-based camera to acquire daily composite (red, green, blue) and near-infrared (NIR) images of a thawing permafrost peatland site near Abisko, Sweden. The images encompassed a range of exposures which were merged into high-dynamic-range images, a novel application to remote sensing of plant phenology. The 2016 growing-season camera images are accompanied by mid-to-late-season CH4 and CO2 fluxes measured from soil collars, and by early-, mid- and late-season peat core samples of the composition of microbial communities and key metabolic genes, and of the organic matter and trace gas composition of peat porewater. Additionally, nearby automated gas flux chambers measured sub-hourly fluxes of CO2 and CH4 from the peat, which will also be incorporated into the analysis of relationships between seasonal camera-derived vegetation indices and gas fluxes from habitats with different vegetation types. While remote sensing is a proven method for observing plant phenology, this technology has yet to be combined with soil biogeochemical and microbial community data in regions of permafrost thaw.
Establishing a high resolution phenology monitoring system linked to soil biogeochemical processes in subarctic peatlands will advance the understanding of how observable patterns in plant phenology can be used to monitor permafrost thaw and ecosystem carbon cycling.
Close-range photogrammetry with video cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
Generic Dynamic Environment Perception Using Smart Mobile Devices
Danescu, Radu; Itu, Razvan; Petrovai, Andra
2016-01-01
The driving environment is complex and dynamic, and the attention of the driver is continuously challenged, therefore computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection for mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system. PMID:27763501
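The grid update step can be sketched with a plain log-odds occupancy grid, used here as a simplified stand-in for the paper's particle-based grid (which additionally carries speed estimates per cell). The update constants are illustrative, not values from the paper.

```python
import numpy as np

def update_grid(log_odds, obstacle_mask, l_occ=0.85, l_free=-0.4):
    """One measurement update of a log-odds occupancy grid.

    obstacle_mask: boolean grid from segmenting the bird's-eye view
    image; cells under the mask gain occupancy evidence, the rest
    lose it.  l_occ / l_free are illustrative update constants.
    """
    log_odds = log_odds + np.where(obstacle_mask, l_occ, l_free)
    return np.clip(log_odds, -5.0, 5.0)   # keep beliefs bounded

def occupancy(log_odds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Repeated observations of the same cell drive its probability toward 1 while free space decays toward 0, which is the accumulation behavior the tracked grid relies on before grouping cells into cuboid obstacles.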
Coherent infrared imaging camera (CIRIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.
1995-07-01
New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP), coupled with Monolithic Microwave Integrated Circuit (MMIC) technology, are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras, which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
Onboard TDI stage estimation and calibration using SNR analysis
NASA Astrophysics Data System (ADS)
Haghshenas, Javad
2017-09-01
The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is performed based on the noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array without sacrificing spatial resolution. However, for satellite imaging, in order to exploit the advantages that the TDI mode of operation offers, attention must be given to the parameters that affect the image quality of TDI sensors, such as jitter, vibration and noise. A predefined number of TDI stages may not properly satisfy the image quality requirement of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be able to set the number of TDI stages for every shot based on the affecting parameters. This paper deals with optimally estimating and setting the stages based on trade-offs among MTF, noise and SNR. On-board SNR estimation is simulated using atmosphere analysis based on the MODTRAN algorithm in PcModWin software. Based on the noise models, we propose a formulation that estimates the number of TDI stages such that the system SNR requirement is satisfied; the MTF requirement must be satisfied in the same manner. A proper combination of both parameters guarantees full use of the dynamic range along with high SNR and image quality.
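The stage-selection trade-off can be sketched with a simplified TDI noise model (the signal, dark-current and read-noise figures below are assumed values, and the model deliberately ignores the jitter and MTF effects the paper also accounts for):

```python
import math

def tdi_snr(n_stages, signal_e, dark_e, read_noise_e):
    """Simplified SNR of an N-stage TDI CCD: signal and dark charge
    accumulate over the stages; read noise is added once at readout."""
    sig = n_stages * signal_e
    noise = math.sqrt(sig + n_stages * dark_e + read_noise_e ** 2)
    return sig / noise

def stages_for_snr(snr_req, signal_e, dark_e, read_noise_e,
                   allowed=(4, 8, 16, 32, 64, 128)):
    """Smallest selectable stage count meeting the SNR requirement,
    or None if even the maximum stage count falls short."""
    for n in allowed:
        if tdi_snr(n, signal_e, dark_e, read_noise_e) >= snr_req:
            return n
    return None

# e.g. 50 e-/stage signal estimated on board, 5 e-/stage dark, 30 e- read noise
print(stages_for_snr(snr_req=40.0, signal_e=50.0, dark_e=5.0, read_noise_e=30.0))
```

In this toy model the on-board estimator would pick the smallest stage count that meets the SNR target, leaving headroom in the dynamic range for brighter scenes.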
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
Protection performance evaluation regarding imaging sensors hardened against laser dazzling
NASA Astrophysics Data System (ADS)
Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd
2015-05-01
Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist today, but none of them satisfies the operational requirements without constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches, based on triangle orientation discrimination on the one hand and structural similarity on the other. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns, superimposed with dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
Optical digital microscopy for cyto- and hematological studies in vitro
NASA Astrophysics Data System (ADS)
Ganilova, Yu. A.; Dolmashkin, A. A.; Doubrovski, V. A.; Yanina, I. Yu.; Tuchin, V. V.
2013-08-01
The dependence of the spatial resolution and field of view of an optical microscope equipped with a CCD camera on the objective magnification has been experimentally investigated. Measurement of these characteristics has shown that a spatial resolution of 20-25 px/μm at a field of view of about 110 μm is quite realistic; this resolution is acceptable for a detailed study of the processes occurring in cells. It is proposed to expand the dynamic range of a digital camera by measuring and approximating its light characteristics, with subsequent plotting of the corresponding calibration curve. The biological objects of study were human adipose tissue cells, as well as erythrocytes and their immune complexes in human blood; both objects have been investigated in vitro. Application of optical digital microscopy to solving specific problems of cytology and hematology can be useful both in biomedical studies and in experiments with objects of nonbiological origin.
Optical measurement of high-temperature melt flow rate.
Bizjan, Benjamin; Širok, Brane; Chen, Jinpeng
2018-05-20
This paper presents an optical method and system for contactless measurement of the mass flow rate of melts by digital cameras. The proposed method is based on reconstruction of the melt stream geometry and flow velocity calculation by cross correlation, and is very cost-effective due to its modest hardware requirements. Using a laboratory test rig with a small inductive melting pot and reference mass flow rate measurement by weighing, the proposed method was demonstrated to have an excellent dynamic response (on the order of 0.1 s) while producing deviations from the reference of about 5% in the steady-state flow regime. Similar results were obtained in an industrial stone wool production line for two repeated measurements. Our method was tested over a wide range of melt flow rates (0.05-1.2 kg/s) and did not require very fast cameras (120 frames per second would be sufficient for most industrial applications).
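The cross-correlation velocity step can be sketched in one dimension (the `best_shift` helper, the pixel size and the frame rate below are assumptions for illustration; the paper works on 2D images of the melt stream):

```python
def best_shift(profile_a, profile_b, max_shift):
    """Integer displacement d (in pixels) that best aligns profile_b with
    profile_a, i.e. maximizes sum_i a[i] * b[i + d] over the overlap."""
    n = len(profile_a)
    best_d, best_score = 0, float("-inf")
    for d in range(-max_shift, max_shift + 1):
        score = sum(profile_a[i] * profile_b[i + d]
                    for i in range(n) if 0 <= i + d < n)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# synthetic stream intensity profile displaced by 3 px between frames
frame1 = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
shift_px = best_shift(frame1, frame2, max_shift=5)

pixel_size_m, fps = 2.0e-4, 120.0   # assumed camera geometry and frame rate
print(shift_px, shift_px * pixel_size_m * fps, "m/s")
```

Multiplying the per-frame pixel displacement by the pixel scale and frame rate gives the stream velocity, which combined with the reconstructed stream cross-section yields the mass flow rate.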
Intelligent imaging systems for automotive applications
NASA Astrophysics Data System (ADS)
Thompson, Chris; Huang, Yingping; Fu, Shan
2004-03-01
In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with night vision systems and looking at various other advanced driver assistance systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.
Simultaneous multicolor imaging of wide-field epi-fluorescence microscopy with four-bucket detection
Park, Kwan Seob; Kim, Dong Uk; Lee, Jooran; Kim, Geon Hee; Chang, Ki Soo
2016-01-01
We demonstrate simultaneous imaging of multiple fluorophores using wide-field epi-fluorescence microscopy with a monochrome camera. The intensities of the three lasers are modulated by a sinusoidal waveform in order to excite each fluorophore with the same modulation frequency and a different time-delay. Then, the modulated fluorescence emissions are simultaneously detected by a camera operating at four times the excitation frequency. We show that two different fluorescence beads having crosstalk can be clearly separated using digital processing based on the phase information. In addition, multiple organelles within multi-stained single cells are shown with the phase mapping method, demonstrating an improved dynamic range and contrast compared to the conventional fluorescence image. These findings suggest that wide-field epi-fluorescence microscopy with four-bucket detection could be utilized for high-contrast multicolor imaging applications such as drug delivery and fluorescence in situ hybridization. PMID:27375944
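The phase recovery behind four-bucket detection can be illustrated for a single pixel: for samples I_k = A + B cos(phi + k*pi/2), taken four times per modulation period, the phase follows from atan2(I3 - I1, I0 - I2). The offset, amplitude and phase below are synthetic values, not the paper's data:

```python
import math

def four_bucket_phase(i0, i1, i2, i3):
    """Phase and amplitude of a sinusoid sampled four times per period.
    For I_k = A + B*cos(phi + k*pi/2):
      I3 - I1 = 2B*sin(phi)   and   I0 - I2 = 2B*cos(phi)."""
    phase = math.atan2(i3 - i1, i0 - i2)
    amplitude = 0.5 * math.hypot(i3 - i1, i0 - i2)
    return phase, amplitude

# simulate one pixel of a fluorophore excited with phase 0.7 rad
phi_true, offset, amp_true = 0.7, 10.0, 3.0
buckets = [offset + amp_true * math.cos(phi_true + k * math.pi / 2)
           for k in range(4)]
phase, amp = four_bucket_phase(*buckets)
print(round(phase, 3), round(amp, 3))   # recovers 0.7 rad and amplitude 3.0
```

Because each fluorophore is excited with a different time delay, its emission arrives with a distinct phase, so a per-pixel phase map of this kind separates fluorophores whose emission spectra overlap.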
Large-Format Dual-Counter Pixelated X-Ray Detector Platform: Phase II Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Adam; Williams, George; Huntington, Andrew
2016-10-10
Within the program, a Voxtel-led team demonstrated both prototype (48 x 48, 130-μm pitch, VX-798) and full-format (192 x 192, 100-μm pitch, VX-810) versions of a high-dynamic-range, x-ray photon-counting (HDR-XPC) sensor. The following tasks were completed: 1) integration and evaluation of the VX-798 prototype camera at the Advanced Photon Source beamline at Argonne National Laboratory; 2) design, simulation, and fabrication of the full-format VX-810 ROIC; 3) fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array to create the optically sensitive focal plane array (FPA); and 5) development of an evaluation camera to enable electrical and optical characterization of the sensor.
NASA Astrophysics Data System (ADS)
Lempe, B.; Taudt, C.; Baselt, T.; Rudek, F.; Maschke, R.; Basan, F.; Hartmann, P.
2014-02-01
The production of complex titanium components for various industries using laser welding processes has received growing attention in recent years. It is important to know whether the resulting cohesive joint meets the quality requirements of standardization and, ultimately, the customer requirements. Erroneous weld seams can have fatal consequences, especially in the fields of car manufacturing and medical technology. To meet these requirements, a real-time process control system has been developed that determines the welding quality through a locally resolved temperature profile. Data obtained by analyzing the resulting weld plasma are used to verify the stability of the laser welding process. The temperature profile is determined by detecting the electromagnetic radiation emitted by the material in the range of 500 nm to 1100 nm. Special high dynamic range CMOS cameras are used as detectors. Because the emissivity of titanium depends on the wavelength, the surface and the angle of radiation, measuring the temperature is a problem. To solve this, a special pyrometer setup with two cameras is used, which enables compensation of these effects by calculating the difference between corresponding pixels in simultaneously recorded images. Two spectral regions with the same emissivity are detected; therefore, the degree of emission and surface effects are compensated and cancel out of the calculation. Using the spatially resolved temperature distribution, the weld geometry can be determined and the laser process can be controlled. Active readjustment of parameters such as laser power, feed rate and inert gas injection increases the quality of the welding process and decreases the number of defective goods.
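The two-camera emissivity cancellation is essentially ratio pyrometry; here is a sketch under the Wien approximation, assuming equal emissivity in both spectral regions so that emissivity cancels in the ratio (the wavelengths and temperature below are illustrative, not the paper's):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    """Wien-approximation spectral intensity up to a constant factor;
    the constant and the (assumed equal) emissivity cancel in the ratio."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(ratio, lam1, lam2):
    """Temperature from the two-band intensity ratio I(lam1)/I(lam2):
    ln R = 5 ln(lam2/lam1) + (C2/T)(1/lam2 - 1/lam1)."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(ratio) - 5.0 * math.log(lam2 / lam1)
    return num / den

lam1, lam2, t_true = 700e-9, 950e-9, 1900.0   # illustrative bands and melt temperature
r = wien_intensity(lam1, t_true) / wien_intensity(lam2, t_true)
print(round(ratio_temperature(r, lam1, lam2), 1))  # recovers 1900.0 K
```

Applied per pixel pair across the two simultaneously recorded images, this ratio removes the unknown emissivity and surface effects from the spatially resolved temperature map.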
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high-resolution visible-near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.); it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm³ cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen constant progress in terms of readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWaP intensified Wide Dynamic Range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design, which covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new-generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube, with <500 mW power consumption.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
Measurement of luminance noise and chromaticity noise of LCDs with a colorimeter and a color camera
NASA Astrophysics Data System (ADS)
Roehrig, H.; Dallas, W. J.; Krupinski, E. A.; Redford, Gary R.
2007-09-01
This communication focuses on the physical evaluation of the image quality of displays for applications in medical imaging. In particular, we were interested in luminance noise as well as chromaticity noise of LCDs. Luminance noise has been encountered in the study of monochrome LCDs for some time, but chromaticity noise is a new type of noise which was first encountered when monochrome and color LCDs were compared in an ROC study. In the present study, one color and one monochrome 3-megapixel LCD were studied. Both were DICOM-calibrated with equal dynamic range. We used a Konica Minolta Chroma Meter CS-200 as well as a Foveon color camera to estimate luminance and chrominance variations of the displays. We also used a simulation experiment to estimate luminance noise. The measurements with the colorimeter were consistent. The measurements with the Foveon color camera were very preliminary, as color cameras had never been used for image quality measurements; however, they were extremely promising. The measurements with the colorimeter and the simulation results showed that the luminance and chromaticity noise of the color LCD were larger than those of the monochrome LCD. Provided that an adequate calibration method and an image QA/QC program for color displays are available, we expect color LCDs may be ready for radiology in the very near future.
Multi-sensor fusion over the World Trade Center disaster site
NASA Astrophysics Data System (ADS)
Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey
2002-09-01
The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aide the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information of the rapidly changing disaster site. Because of the dynamic nature of the site, the data was acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of the collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit period of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near-IR and one in the short-wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning that uses a linear-array silicon charge-coupled device (CCD) based Focal Plane Array (FPA). An on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA during the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP = 650 nm) for the development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant-current mode.
Instrumental Response Model and Detrending for the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...
2017-09-14
We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.
Single-camera three-dimensional tracking of natural particulate and zooplankton
NASA Astrophysics Data System (ADS)
Troutman, Valerie A.; Dabiri, John O.
2018-07-01
We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm × 10 cm × 24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
Brown, David M; Juarez, Juan C; Brown, Andrea M
2013-12-01
A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The developed link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) for the wavefront at each aperture can be calculated based on focal spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0 as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two, small apertures, the instrument forms a small size and weight configuration for mounting to actively tracking laser communication terminals for characterizing link performance.
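Recovering r0 from the correlated AoA variances can be sketched with the standard Sarazin-Roddier DIMM relation for the longitudinal component (the wavelength, aperture geometry and seeing value below are assumed, not FOENEX parameters):

```python
def dimm_r0(var_long_rad2, wavelength, aperture_d, separation_b):
    """Fried parameter r0 from the longitudinal differential AoA variance,
    using the Sarazin-Roddier relation
      sigma_l^2 = 2 lambda^2 r0^(-5/3) (0.179 D^(-1/3) - 0.0968 b^(-1/3)),
    where D is the aperture diameter and b the aperture separation (m)."""
    k = 2.0 * wavelength ** 2 * (0.179 * aperture_d ** (-1.0 / 3.0)
                                 - 0.0968 * separation_b ** (-1.0 / 3.0))
    return (var_long_rad2 / k) ** (-3.0 / 5.0)

# forward-simulate a variance for known seeing, then invert it
lam, D, b = 1550e-9, 0.05, 0.20            # assumed wavelength and geometry
r0_true = 0.05                             # 5 cm Fried parameter at 1550 nm
k = 2.0 * lam ** 2 * (0.179 * D ** (-1.0 / 3.0) - 0.0968 * b ** (-1.0 / 3.0))
var = k * r0_true ** (-5.0 / 3.0)
print(round(dimm_r0(var, lam, D, b), 4))   # recovers 0.05 m
```

Because the differential motion of the two focal spots is insensitive to common-mode terminal jitter, this variance-based inversion is what makes the single-camera, two-aperture layout work as a traditional DIMM.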
Meteor44 Video Meteor Photometry
NASA Technical Reports Server (NTRS)
Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.
2004-01-01
Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of the meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficients using appropriate star catalogs. The images are simultaneously intensity-calibrated from the stars they contain to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long-focal-length "streak" meteor photometry and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002 and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
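A hypothetical sketch of the saturation-compensation idea: below saturation the pixel sum is used directly; above it, a lab-calibrated curve extrapolates the true signal, which is then converted to a visual magnitude. The polynomial form, its coefficients and the zero point are invented for illustration; Meteor44's actual coefficients come from its artificial-star laboratory measurements:

```python
import math

def compensate_saturation(measured_sum, saturation_level, coeffs):
    """Hypothetical compensation curve: unchanged below saturation;
    above it, a polynomial in the normalized sum x = measured/saturation
    extrapolates the unsaturated signal (coefficients are illustrative)."""
    if measured_sum < saturation_level:
        return measured_sum
    x = measured_sum / saturation_level
    return saturation_level * sum(c * x ** i for i, c in enumerate(coeffs))

def magnitude(flux, zero_point):
    """Instrumental magnitude from a calibrated flux (Pogson's relation)."""
    return zero_point - 2.5 * math.log10(flux)

# toy example: a meteor twice the saturation level, with invented coefficients
comp = compensate_saturation(2.0e5, saturation_level=1.0e5, coeffs=(0.0, 1.0, 0.8))
print(round(magnitude(comp, zero_point=10.0), 2))
```

The zero point itself would come from the simultaneous intensity calibration against catalog stars in the same frame, as the abstract describes.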
Time-resolved soft-x-ray studies of energy transport in layered and planar laser-driven targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stradling, G.L.
New low-energy x-ray diagnostic techniques are used to explore energy-transport processes in laser-heated plasmas. Streak cameras are used to provide 15-psec time-resolution measurements of sub-keV x-ray emission. A very thin (50 μg/cm²) carbon substrate provides a low-energy x-ray transparent window to the transmission photocathode of this soft x-ray streak camera. Active differential vacuum pumping of the instrument is required. The use of high-sensitivity, low secondary-electron energy-spread CsI photocathodes in x-ray streak cameras is also described. Significant increases in sensitivity with only a small and intermittent decrease in dynamic range were observed. These coherent, complementary advances in sub-keV, time-resolved x-ray diagnostic capability are applied to energy-transport investigations of 1.06-μm laser plasmas. Both solid disk targets of a variety of Z's and Be-on-Al layered-disk targets were irradiated with 700-psec laser pulses of selected intensity between 3 × 10¹⁴ W/cm² and 1 × 10¹⁵ W/cm².
Best practices to optimize intraoperative photography.
Gaujoux, Sébastien; Ceribelli, Cecilia; Goudard, Geoffrey; Khayat, Antoine; Leconte, Mahaut; Massault, Pierre-Philippe; Balagué, Julie; Dousset, Bertrand
2016-04-01
Intraoperative photography is used extensively for communication, research, and teaching. The objective of the present work was to define, using a standardized methodology and literature review, the best technical conditions for intraoperative photography. Using either a smartphone camera, a bridge camera, or a single-lens reflex (SLR) camera, photographs were taken under various standard conditions by a professional photographer. All images were independently assessed blinded to technical conditions to define the best shooting conditions and methods. For better photographs, an SLR camera with manual settings should be used. Photographs should be centered and taken vertically and orthogonal to the surgical field with a linear scale to avoid errors in perspective. The shooting distance should be about 75 cm using an 80-100-mm focal lens. Flash should be avoided and low-powered scialytic light should be used without focus. The operative field should be clean, wet surfaces should be avoided, and metal instruments should be hidden to avoid reflections. For the SLR camera, the ISO speed should be as low as possible, the autofocus area selection mode should be single-point AF, the shutter speed should be above 1/100 second, and the aperture should be as narrow as possible, above f/8. For smartphones, use the high-dynamic-range setting if available; the use of flash, digital filters, effect apps, and digital zoom is not recommended. If a few basic technical rules are known and applied, high-quality photographs can be taken by amateur photographers that fit the standards accepted in clinical practice, academic communication, and publications. Copyright © 2016 Elsevier Inc. All rights reserved.
Development of low-cost high-performance multispectral camera system at Banpil
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.
2014-05-01
Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial, and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, enabling deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial, and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' evolving needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large scale ad-hoc networks.
Odden, Morten; Linnell, John D. C.; Odden, John
2017-01-01
Sarcoptic mange is a widely distributed disease that affects numerous mammalian species. We used camera traps to investigate the apparent prevalence and spatiotemporal dynamics of sarcoptic mange in a red fox population in southeastern Norway. We monitored red foxes for five years using 305 camera traps distributed across an 18,000 km² area. A total of 6581 fox events were examined to visually identify mange-compatible lesions. We investigated factors associated with the occurrence of mange by using logistic models within a Bayesian framework, whereas the spatiotemporal dynamics of the disease were analysed with space-time scan statistics. The apparent prevalence of the disease fluctuated over the study period with a mean of 3.15% and credible interval [1.25, 6.37], and our best logistic model explaining the presence of red foxes with mange-compatible lesions included time since the beginning of the study and the interaction between distance to settlement and season as explanatory variables. The scan analyses detected several potential clusters of the disease that varied in persistence and size, and the locations in the cluster with the highest probability were closer to human settlements than the other survey locations. Our results indicate that red foxes in an advanced stage of the disease are most likely found closer to human settlements during periods of low wild prey availability (winter). We discuss different potential causes. Furthermore, the disease appears to follow a pattern of small localized outbreaks rather than sporadic isolated events. PMID:28423011
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other hand. We propose exploiting the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
Evaluation of large format electron bombarded virtual phase CCDs as ultraviolet imaging detectors
NASA Technical Reports Server (NTRS)
Opal, Chet B.; Carruthers, George R.
1989-01-01
In conjunction with an external UV-sensitive cathode, an electron-bombarded CCD may be used as a high quantum efficiency/wide dynamic range photon-counting UV detector. Results are presented for the case of a 1024 x 1024, 18-micron square pixel virtual phase CCD used with an electromagnetically focused f/2 Schmidt camera, which yields excellent single-photoevent discrimination and counting efficiency. Attention is given to the vacuum-chamber arrangement used to conduct system tests and the CCD electronics and data-acquisition systems employed.
Updating the Synchrotron Radiation Monitor at TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, C. H.; Hsu, S. Y.; Wang, C. J.
2007-01-19
The synchrotron radiation monitor provides useful information to support routine operation and physics experiments using the beam. Precise knowledge of the beam profile helps to improve machine performance. The synchrotron radiation monitor at the Taiwan Light Source (TLS) was recently upgraded. The optics and modeling were improved to increase the accuracy of measurement for small beam sizes. A high-performance IEEE-1394 digital CCD camera was used to improve the quality of images and extend the dynamic range of measurement. The image analysis was also improved. This report summarizes the status and results.
Acousto-Optic Applications for Multichannel Adaptive Optical Processor
1992-06-01
[Excerpts; table residue:] AO cell and the two-channel line-scan camera system described in Subsection 4.1. The AO material for this IntraAction AOD-70 device was flint glass. […] Single-Channel AO Cell: 1.68 (flint glass), 60.0; Multichannel AO Cell: 2.26 (TeO₂), 20.0; Beam splitter: 1.515 (glass), 50.8. […] Two-Tone Intermodulation Dynamic Ranges of Longitudinal TeO₂ Bragg Cells for Several Acoustic Power Densities.
NASA Astrophysics Data System (ADS)
Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.
2015-05-01
ePix10k is a variant of a novel class of integrating-pixel ASIC architectures optimized for the processing of signals in second-generation LINAC Coherent Light Source (LCLS) X-Ray cameras. The ASIC is optimized for high dynamic range applications requiring high spatial resolution and fast frame rates. ePix ASICs are based on a common platform composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix10k variant has 100 µm × 100 µm pixels arranged in a 176×192 matrix, a resolution of 140 e⁻ r.m.s., and a signal range of 3.5 pC (10k photons at 8 keV). In its final version it will be able to sustain a frame rate of 2 kHz. A first prototype has been fabricated and characterized. Performance in terms of noise, linearity, uniformity, and cross-talk, together with preliminary measurements with bump-bonded sensors, is reported here.
Low Noise Camera for Suborbital Science Applications
NASA Technical Reports Server (NTRS)
Hyde, David; Robertson, Bryan; Holloway, Todd
2015-01-01
Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment, as they are difficult to ruggedize and repackage into instruments. Also, a COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operating temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low-noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error, but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
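The disparity-to-range relation underlying this kind of analysis can be sketched with the standard pinhole stereo model. This is a generic illustration, not the JPL system's code; the focal length, baseline, and disparity values are made up for the example:

```python
def disparity_to_range(f_px, baseline_m, disparity_px):
    """Pinhole stereo: range Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def range_error(f_px, baseline_m, disparity_px, disparity_err_px):
    """First-order down-range error from a disparity error:
    dZ = Z^2 / (f * B) * dd -- error grows quadratically with range."""
    z = disparity_to_range(f_px, baseline_m, disparity_px)
    return z * z / (f_px * baseline_m) * disparity_err_px

# Illustrative numbers: 1000 px focal length, 0.3 m baseline, 30 px disparity.
z = disparity_to_range(1000.0, 0.3, 30.0)   # ~10 m range
dz = range_error(1000.0, 0.3, 30.0, 0.32)   # effect of a 0.32 px disparity error
print(round(z, 2), round(dz, 3))
```

The quadratic growth of dZ with Z is why the 0.32-pixel disparity error quoted above matters far more for distant terrain points than for nearby ones.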
NASA Technical Reports Server (NTRS)
Ko, William L.; Gong, Leslie
2000-01-01
To visually record the initial free flight event of the Hyper-X research flight vehicle immediately after separation from the Pegasus® booster rocket, a video camera was mounted on the bulkhead of the adapter through which Hyper-X rides on Pegasus. The video camera was shielded by a protecting camera window made of heat-resistant quartz material. When Hyper-X separates from Pegasus, this camera window will be suddenly exposed to Mach 7 stagnation thermal shock and dynamic pressure loading (aerothermal loading). To examine the structural integrity, thermoelastic analysis was performed, and the stress distributions in the camera windows were calculated. The critical stress point where the tensile stress reaches a maximum value for each camera window was identified, and the maximum tensile stress level at that critical point was found to be considerably lower than the tensile failure stress of the camera window material.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Douglas, B. C.; Walls, D. M.
1974-01-01
Laser and camera data taken during the International Satellite Geodesy Experiment (ISAGEX) were used in dynamical solutions to obtain center-of-mass coordinates for the Astro-Soviet camera sites at Helwan, Egypt, and Oulan Bator, Mongolia, as well as the East European camera sites at Potsdam, German Democratic Republic, and Ondrejov, Czechoslovakia. The results are accurate to about 20m in each coordinate. The orbit of PEOLE (i=15) was also determined from ISAGEX data. Mean Kepler elements suitable for geodynamic investigations are presented.
QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †
Ni, Yang
2018-01-01
In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903
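The 120 dB figure above can be sanity-checked against the standard definition of optical dynamic range in decibels; this is a generic arithmetic sketch, not code from the sensor itself:

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Dynamic range in dB: 20 * log10(max / min)."""
    return 20.0 * math.log10(max_signal / min_signal)

# A 120 dB dynamic range spans a 10^6 : 1 ratio of brightest
# to darkest resolvable signal.
ratio = 10 ** (120 / 20)
print(int(ratio))                       # 1000000
print(dynamic_range_db(1_000_000, 1))   # 120.0
```

Covering that million-to-one intensity ratio with only ~3000 electrons is what the charge-domain logarithmic compression makes possible: the response is linear at low light and logarithmic at high light, so the electron count grows far more slowly than the incident signal.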
Plasma crystal dynamics measured with a three-dimensional plenoptic camera
NASA Astrophysics Data System (ADS)
Jambor, M.; Nosenko, V.; Zhdanov, S. K.; Thomas, H. M.
2016-03-01
Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.
Resource Allocation in Dynamic Environments
2012-10-01
[Excerpts; figure-list residue:] Utility Curve for the TOC Camera; Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera; Figure 21: Facial-Recognition Utility Curve. […] A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the […] fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC […]
CMOS detector arrays in a virtual 10-kilopixel camera for coherent terahertz real-time imaging.
Boppel, Sebastian; Lisauskas, Alvydas; Max, Alexander; Krozer, Viktor; Roskos, Hartmut G
2012-02-15
We demonstrate the principle applicability of antenna-coupled complementary metal oxide semiconductor (CMOS) field-effect transistor arrays as cameras for real-time coherent imaging at 591.4 GHz. By scanning a few detectors across the image plane, we synthesize a focal-plane array of 100×100 pixels with an active area of 20×20 mm², which is applied to imaging in transmission and reflection geometries. Individual detector pixels exhibit a voltage conversion loss of 24 dB and a noise figure of 41 dB for 16 μW of the local oscillator (LO) drive. For object illumination, we use a radio-frequency (RF) source with 432 μW at 590 GHz. Coherent detection is realized by quasioptical superposition of the image and the LO beam with 247 μW. At an effective frame rate of 17 Hz, we achieve a maximum dynamic range of 30 dB in the center of the image and more than 20 dB within a disk of 18 mm diameter. The system has been used for surface reconstruction resolving a height difference in the μm range.
Time-lapse camera observations of gas piston activity at Pu`u `Ō`ō, Kīlauea volcano, Hawai`i
NASA Astrophysics Data System (ADS)
Orr, Tim R.; Rea, James C.
2012-12-01
Gas pistoning is a type of eruptive behavior described first at Kīlauea volcano and characterized by the (commonly) cyclic rise and fall of the lava surface within a volcanic vent or lava lake. Though recognized for decades, its cause continues to be debated, and determining why and when it occurs has important implications for understanding vesiculation and outgassing processes at basaltic volcanoes. Here, we describe gas piston activity that occurred at the Pu`u `Ō`ō cone, in Kīlauea's east rift zone, during June 2006. Direct, detailed measurements of lava level, made from time-lapse camera images captured at close range, show that the gas pistons during the study period lasted from 2 to 60 min, had volumes ranging from 14 to 104 m3, displayed a slowing rise rate of the lava surface, and had an average gas release duration of 49 s. Our data are inconsistent with gas pistoning models that invoke gas slug rise or a dynamic pressure balance but are compatible with models which appeal to gas accumulation and loss near the top of the lava column, possibly through the generation and collapse of a foam layer.
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is strongly anticipated in various application fields. There is a large body of literature on calibration of consumer-grade digital cameras and circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of the target location algorithms with subpixel accuracy for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, several algorithms for target location with subpixel accuracy and an indicator for estimating the accuracy are empirically tested in this paper using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
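As a minimal illustration of subpixel target location, the sketch below uses a simple intensity-weighted centroid on a synthetic target; the least-squares ellipse-fitting approach the abstract cites as most accurate is more involved, and the patch values here are made up:

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of an image patch -- a basic
    subpixel target-location estimate (cruder than ellipse fitting,
    but it shows how sub-pixel positions arise from pixel data)."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# Synthetic circular target centered between pixels at (2.5, 2.5).
yy, xx = np.mgrid[0:6, 0:6]
target = np.exp(-((xx - 2.5) ** 2 + (yy - 2.5) ** 2) / 2.0)
cx, cy = subpixel_centroid(target)
print(round(cx, 3), round(cy, 3))  # 2.5 2.5
```

The centroid recovers the inter-pixel center exactly because the synthetic target is symmetric; on real images, noise and boundary-edge sampling (the "number of edge points" issue studied in the paper) limit the attainable accuracy.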
Quality assurance of a gimbaled head swing verification using feature point tracking.
Miura, Hideharu; Ozawa, Shuichi; Enosaki, Tsubasa; Kawakubo, Atsushi; Hosono, Fumika; Yamada, Kiyoshi; Nagata, Yasushi
2017-01-01
To perform dynamic tumor tracking (DTT) for clinical applications safely and accurately, gimbaled head swing verification is important. We propose a quantitative gimbaled head swing verification method for daily quality assurance (QA), which uses feature point tracking and a web camera. The web camera was placed on a couch at the same position for every gimbaled head swing verification, and could move based on a determined input function (sinusoidal patterns; amplitude: ± 20 mm; cycle: 3 s) in the pan and tilt directions at the isocenter plane. Two continuous images were then analyzed for each feature point using the pyramidal Lucas-Kanade (LK) method, which is an optical flow estimation algorithm. We used a tapped hole as a feature point of the gimbaled head. The period and amplitude were analyzed to acquire a quantitative gimbaled head swing value for daily QA. The mean ± SD of the period was 3.00 ± 0.03 (range: 3.00-3.07) s and 3.00 ± 0.02 (range: 3.00-3.07) s in the pan and tilt directions, respectively. The mean ± SD of the relative displacement was 19.7 ± 0.08 (range: 19.6-19.8) mm and 18.9 ± 0.2 (range: 18.4-19.5) mm in the pan and tilt directions, respectively. The gimbaled head swing was reliable for DTT. We propose a quantitative gimbaled head swing verification method for daily QA using the feature point tracking method and a web camera. Our method can quantitatively assess the gimbaled head swing for daily QA from baseline values, measured at the time of acceptance and commissioning. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
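The pyramidal Lucas-Kanade method used above builds on a single-level least-squares flow estimate, which can be sketched as follows. The images and shift are synthetic and illustrative; the pyramidal variant repeats this estimate coarse-to-fine:

```python
import numpy as np

def lucas_kanade_shift(img0, img1):
    """Single-level Lucas-Kanade: least-squares estimate of the
    (dx, dy) translation between two small image patches, from the
    brightness-constancy equation Ix*u + Iy*v + It = 0."""
    ix = np.gradient(img0, axis=1)   # spatial gradient in x
    iy = np.gradient(img0, axis=0)   # spatial gradient in y
    it = img1 - img0                 # temporal difference
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)

# Smooth synthetic patch shifted by a sub-pixel amount (0.3 px) in x.
y, x = np.mgrid[0:32, 0:32]
img0 = np.exp(-((x - 15.0) ** 2 + (y - 15.0) ** 2) / 40.0)
img1 = np.exp(-((x - 15.3) ** 2 + (y - 15.0) ** 2) / 40.0)
dx, dy = lucas_kanade_shift(img0, img1)
print(round(dx, 2), round(dy, 2))
```

The least-squares solve recovers the 0.3-pixel shift to within a few percent, which is why LK-style tracking of a feature point (here, the tapped hole) can measure millimeter-scale swing amplitudes from web-camera frames.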
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
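Display-side tone mapping, which the pipeline above confines to the final stage, can be illustrated with the simple Reinhard global operator; this is a generic textbook example, not the paper's implementation:

```python
def reinhard_tonemap(luminance):
    """Reinhard global operator L / (1 + L): compresses an unbounded
    HDR luminance value into [0, 1) for a display, preserving detail
    in shadows while rolling off highlights."""
    return [l / (1.0 + l) for l in luminance]

# Scene luminances spanning ~120 dB (10^6 : 1).
hdr = [0.01, 1.0, 100.0, 10000.0]
print([round(v, 4) for v in reinhard_tonemap(hdr)])
# -> [0.0099, 0.5, 0.9901, 0.9999]
```

Keeping the full captured range in the stream and applying an operator like this only at the display is what "true HDR" means in the paper: each device (HDR panel, mobile screen) can apply its own mapping to the same delivered data.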
Electrically optofluidic zoom system with a large zoom range and high-resolution image.
Li, Lei; Yuan, Rong-Ying; Wang, Jin-Hui; Wang, Qiong-Hua
2017-09-18
We report an electrically controlled optofluidic zoom system which can achieve a large continuous zoom change and a high-resolution image. The zoom system consists of an optofluidic zoom objective and a switchable light path which are controlled by two liquid optical shutters. The proposed zoom system can achieve a large tunable focal length range from 36 mm to 92 mm. In this tuning range, the zoom system can correct aberrations dynamically, so the image resolution is high. Due to the large zoom range, the proposed imaging system incorporates both a camera configuration and a telescope configuration in one system. In addition, the whole system is electrically controlled by three electrowetting liquid lenses and two liquid optical shutters; therefore, the proposed system is very compact and free of mechanical moving parts. The proposed zoom system has the potential to take the place of conventional zoom systems.
Light field analysis and its applications in adaptive optics and surveillance systems
NASA Astrophysics Data System (ADS)
Eslami, Mohammed Ali
An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane. This can be done through both linear and non-linear transfer functions. Depending on the application at hand, it is easier to use some models of imaging systems than others. The most well-known models are the 1) pinhole model, 2) thin lens model, and 3) thick lens model for optical systems. Using light-field analysis, the connection between these different models is described. A novel figure of merit is presented for choosing one optical model over another for certain applications. After analyzing these optical systems, their use in plenoptic cameras for adaptive optics is introduced. A new technique to use a plenoptic camera to extract information about a localized distorted planar wavefront is described. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image-processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras.
The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
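Interpolating a precalibrated master-slave lookup table, as described above, might look like the following sketch; the grid layout and angle values are hypothetical, not the thesis's calibration data:

```python
def interp_lookup(table, x, y):
    """Bilinearly interpolate a precalibrated (pan, tilt) lookup table.
    table[i][j] holds the slave camera's (pan, tilt) angles for master
    grid point (i, j); unit grid spacing is assumed for simplicity."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    def lerp2(k):  # interpolate component k (0 = pan, 1 = tilt)
        top = table[i][j][k] * (1 - fx) + table[i + 1][j][k] * fx
        bot = table[i][j + 1][k] * (1 - fx) + table[i + 1][j + 1][k] * fx
        return top * (1 - fy) + bot * fy
    return lerp2(0), lerp2(1)

# 2x2 hypothetical grid of slave (pan, tilt) angles in degrees.
table = [[(0.0, 0.0), (0.0, 10.0)],
         [(20.0, 0.0), (20.0, 10.0)]]
print(interp_lookup(table, 0.5, 0.5))  # (10.0, 5.0)
```

Because the relationship is stored as a table indexed by encoder readouts rather than recomputed by image processing, the slave camera's pointing command is a cheap interpolation, which is what makes the real-time recalibration practical.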
The NASA - Arc 10/20 micron camera
NASA Technical Reports Server (NTRS)
Roellig, T. L.; Cooper, R.; Deutsch, L. K.; Mccreight, C.; Mckelvey, M.; Pendleton, Y. J.; Witteborn, F. C.; Yuen, L.; Mcmahon, T.; Werner, M. W.
1994-01-01
A new infrared camera (AIR Camera) has been developed at NASA Ames Research Center for observations from ground-based telescopes. The heart of the camera is a Hughes 58 × 62 pixel arsenic-doped silicon detector array whose spectral sensitivity range allows observations in both the 10 and 20 micron atmospheric windows.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflakes ranging in size from 30 micrometers to 3 cm.
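The fall-speed measurement described above, from successive triggers along the fall path, reduces to a distance-over-time calculation. A minimal sketch, where the vertical separation between the emitter planes is a hypothetical value rather than the instrument's actual geometry:

```python
def fall_speed(trigger_separation_m, t_upper_s, t_lower_s):
    """Estimate hydrometeor fall speed from two successive IR triggers.

    trigger_separation_m: vertical distance between the upper and lower
    emitter planes (hypothetical value, not the MASC's documented spacing).
    t_upper_s, t_lower_s: timestamps of the upper and lower triggers.
    """
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower trigger must follow the upper trigger")
    return trigger_separation_m / dt

# A particle crossing a 0.032 m gap in 16 ms falls at roughly 2 m/s.
print(fall_speed(0.032, 0.000, 0.016))
```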
Advanced illumination control algorithm for medical endoscopy applications
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.
2015-05-01
CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small-size endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination power. The LED illumination power has to be adjusted dynamically while navigating the endoscope across illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment under changing light conditions. The results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. This work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic, surgical-robotic, or micro-invasive surgery.
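The saturation-driven adjustment described above can be sketched as one iteration of a simple proportional loop. The target saturation fraction, gain, and interface below are illustrative assumptions, not AWAIBA's actual VHDL design or register map:

```python
import numpy as np

def adjust_led_power(roi, power, sat_level=255, target_sat_frac=0.02, gain=0.5):
    """One iteration of a proportional illumination-control loop (sketch).

    roi: 2-D array of pixel values from the selected region of interest.
    power: current LED drive level, normalized to [0, 1].
    Reduces power when the fraction of saturated pixels exceeds the target
    fraction; raises it when the scene is under-exposed.
    """
    sat_frac = np.mean(roi >= sat_level)        # fraction of saturated pixels
    error = target_sat_frac - sat_frac          # positive -> safe to brighten
    new_power = power + gain * error
    return float(np.clip(new_power, 0.0, 1.0))  # clamp to the drive range
```

In a real controller this would run once per frame, per ROI, with gains tuned so the loop settles within the sub-second correction time reported in the paper.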
Rudin, Stephen; Kuhls, Andrew T.; Yadava, Girijesh K.; Josan, Gaurav C.; Wu, Ye; Chityala, Ravishankar N.; Rangwala, Hussain S.; Ciprian Ionita, N.; Hoffmann, Kenneth R.; Bednarek, Daniel R.
2011-01-01
New cone-beam computed tomographic (CBCT) mammography system designs are presented in which the detectors provide high spatial resolution, high sensitivity, low noise, wide dynamic range, negligible lag, and high frame rates, similar to the features required of high-performance fluoroscopy detectors. The x-ray detectors consist of a phosphor coupled by a fiber-optic taper either to a high-gain image light amplifier (LA) followed by a CCD camera or to an electron-multiplying CCD (EMCCD). When a square array of such detectors is used, a field of view (FOV) of up to 20 × 20 cm can be obtained with a pixel resolution of 100 µm or better. To achieve practical CBCT mammography scan times, 30 fps may be acquired with quantum-limited (noise-free) performance below 0.2 µR detector exposure per frame. Because of the flexible voltage-controlled gain of the LAs and EMCCDs, a large detector dynamic range is also achievable. Features of such detector systems with arrays of either generation 2 (Gen 2) or generation 3 (Gen 3) LAs optically coupled to CCD cameras, or arrays of directly coupled EMCCDs, are compared. Quantum accounting analysis is performed for a variety of such designs, showing that the lowest number of information carriers, whether off the LA photocathode or electrons released in the EMCCDs per x-ray absorbed in the phosphor, is large enough to imply no quantum sink in the design. These new LA- or EMCCD-based systems could lead to vastly improved CBCT mammography, ROI-CT, or fluoroscopy performance compared with systems using flat panels. PMID:21297904
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work develops a real-time night monitoring system that reduces the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system consists of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed region by region. Under DSP control, the light intensity is reduced to a minimum in the glare regions, while in the other regions it is regulated by negative feedback based on PID control. In this way more details of the object are imaged on the CCD and glare protection is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. The experiments show that this feedback modulation method not only reduces glare and improves image quality but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be suppressed.
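The per-region negative-feedback modulation based on PID theory can be sketched as follows. The gains, setpoint, and interface are illustrative assumptions, not the DM642 firmware; glare regions bypass the loop and are simply forced to minimum transmittance:

```python
class RegionPID:
    """Discrete PID regulator for the LCoS transmittance of one image
    region (sketch with illustrative gains and an 8-bit setpoint)."""

    def __init__(self, kp=0.004, ki=0.0005, kd=0.001, setpoint=128.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, mean_intensity, transmittance):
        """Return the new transmittance in [0, 1] for this region."""
        error = self.setpoint - mean_intensity   # positive -> region too dark
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        correction = (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)
        return min(max(transmittance + correction, 0.0), 1.0)
```

One controller per region would run at the 30 Hz frame rate, with detected glare regions held at zero transmittance instead of being regulated.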
Low-cost laser speckle contrast imaging of blood flow using a webcam.
Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K
2013-01-01
Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically it is performed with scientific-grade instrumentation; however, thanks to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to image simultaneously with a high-performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer in a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement in the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
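The quantity underlying these images is the spatial speckle contrast, the ratio of the standard deviation to the mean intensity over a small sliding window, K = σ/⟨I⟩. A minimal, deliberately unoptimized reference computation:

```python
import numpy as np

def speckle_contrast(frame, window=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window.

    frame: 2-D array of raw speckle intensities.
    Returns a map of K (zero on the unprocessed border). Straightforward
    loop implementation for clarity; real-time systems vectorize this.
    """
    h, w = frame.shape
    r = window // 2
    K = np.zeros((h, w), dtype=float)
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = frame[i - r:i + r + 1, j - r:j + r + 1]
            m = patch.mean()
            K[i, j] = patch.std() / m if m > 0 else 0.0
    return K
```

Lower K corresponds to faster flow (more blurring of the speckle pattern during the exposure), which is why contrast maps can visualize perfusion changes.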
Positron emission particle tracking using a modular positron camera
NASA Astrophysics Data System (ADS)
Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.
2009-06-01
The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.
Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A
2001-01-01
This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.
Bioluminescent Antibodies for Point‐of‐Care Diagnostics
Xue, Lin; Yu, Qiuliyang; Griss, Rudolf; Schena, Alberto
2017-01-01
We introduce a general method to transform antibodies into ratiometric, bioluminescent sensor proteins for the no-wash quantification of analytes. Our approach is based on the genetic fusion of antibody fragments to NanoLuc luciferase and SNAP-tag, the latter being labeled with a synthetic fluorescent competitor of the antigen. Binding of the antigen, here synthetic drugs, by the sensor displaces the tethered fluorescent competitor from the antibody and disrupts bioluminescence resonance energy transfer (BRET) between the luciferase and the fluorophore. The semisynthetic sensors display a tunable response range (submicromolar to submillimolar) and a large dynamic range (ΔRmax > 500%), and they permit the quantification of analytes by spotting samples onto paper followed by analysis with a digital camera. PMID:28510347
NASA Astrophysics Data System (ADS)
Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.
2017-11-01
The Solar Dynamics Observatory (SDO) images the Sun at many wavelengths nearly simultaneously and with a resolution ten times higher than average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share a custom-designed 16-million-pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than earlier devices, motivated by our wish to simplify the design of the camera readout electronics. Here the challenge lies in designing circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and high precision, a challenge greatly exacerbated by having to use only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.
SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance
NASA Technical Reports Server (NTRS)
White, Joseph; Dutta, Soumyo; Striepe, Scott
2015-01-01
The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and a comparison with flight video from SFDT-1.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization. Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are now being integrated; however, these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater while keeping a large field of view and minimizing occlusions. We have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that water is transparent to visible light but less transparent to infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial time-of-flight camera, a commercial structured-light depth camera, and our own time-of-flight system, which uses pulsed time-of-flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Owing to the timing of our time-of-flight camera, our system is theoretically able to minimize the influence of reflections from a partially reflecting surface. Combining a post-acquisition filter that compensates for the perturbations with a light source of shorter wavelength to enlarge the depth range can improve on the current commercial cameras. We conclude that low-cost range imagers, given a post-processing filter and a different light source, can increase swimming pool safety.
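The pulsed time-of-flight ranging used by the authors' own system reduces, in the ideal case, to halving the round-trip travel time of a light pulse. A minimal sketch, ignoring water refraction and surface ripple, which the abstract notes must be compensated for:

```python
def tof_distance(round_trip_s, c=299_792_458.0):
    """Pulsed time-of-flight range in meters.

    round_trip_s: measured out-and-back travel time of the light pulse.
    The pulse travels to the target and back, so the one-way distance
    is c * t / 2. In water the effective speed of light is lower, so a
    real pool system would also correct for the refractive index.
    """
    if round_trip_s < 0:
        raise ValueError("travel time cannot be negative")
    return c * round_trip_s / 2.0
```

At pool scales the times are tiny: a target 2 m away returns the pulse in about 13 ns, which is why ToF cameras need precise gating.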
ERIC Educational Resources Information Center
Brochu, Michel
1983-01-01
In August 1981, the National Aeronautics and Space Administration launched Dynamics Explorer 1 into polar orbit, equipped with three cameras built to view the northern lights. The cameras can photograph the aurora borealis' faint light without being blinded by the Earth's bright dayside. Photographs taken by the satellite are provided. (JN)
Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization
NASA Technical Reports Server (NTRS)
Beaulieu, K.
2014-01-01
Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.
Bayesian inference in camera trapping studies for a class of spatial capture-recapture models
Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba
2009-01-01
We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is hierarchical, composed of two components: a point process model describing the distribution of individuals in space (or of their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of the distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models in which the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models, using a formulation based on data augmentation. We apply the models to camera-trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. In this study, 120 camera locations were used, but cameras were operational at only 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
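A common concrete choice for the distance-dependent capture probability in such SCR models is the half-normal form p(d) = p0·exp(−d²/2σ²). A sketch with illustrative parameter values (the paper fits these within a Bayesian hierarchy rather than fixing them):

```python
import math

def capture_prob(center, trap, p0=0.3, sigma=1.5):
    """Half-normal SCR detection function (sketch).

    center: (x, y) of an individual's home range center.
    trap:   (x, y) of a camera-trap location.
    p0:     baseline capture probability at distance zero.
    sigma:  spatial scale of detection decay (same units as coordinates).
    p0 and sigma here are illustrative, not estimates from the tiger data.
    """
    d2 = (center[0] - trap[0]) ** 2 + (center[1] - trap[1]) ** 2
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))
```

Treating the centers as random effects and these probabilities as the trap-level Bernoulli rates gives exactly the generalized linear mixed model structure the abstract describes.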
Streak camera receiver definition study
NASA Technical Reports Server (NTRS)
Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.
1990-01-01
Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
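The geometry described, in which an 'X'-axis camera resolves the Y and Z coordinates of the bright spot and the 'Y'-axis camera resolves X and Z, can be sketched as an idealized orthographic combination. Real systems would calibrate and triangulate properly; averaging the redundant Z estimate is an assumption for illustration:

```python
def point_from_views(x_cam_yz, y_cam_xz):
    """Combine one 'X'-axis camera view with the 'Y'-axis camera view.

    x_cam_yz: (y, z) of the spot as seen by a camera looking along X.
    y_cam_xz: (x, z) of the spot as seen by the camera looking along Y.
    Returns an (x, y, z) triple, averaging the two Z estimates.
    Idealized orthographic sketch, not the kinesimeter's calibration.
    """
    y, z_from_x_cam = x_cam_yz
    x, z_from_y_cam = y_cam_xz
    return (x, y, (z_from_x_cam + z_from_y_cam) / 2.0)
```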
New feature of the neutron color image intensifier
NASA Astrophysics Data System (ADS)
Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke
2009-06-01
We developed prototype neutron color image intensifiers with high sensitivity, wide dynamic range, and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green, and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and wider dynamic range could be attained simultaneously than with the conventional P20-phosphor-type image intensifier. Experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm²/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. The color image intensifier is expected to serve as a new two-dimensional neutron sensor in new application fields.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging combine multiple differently exposed images of the same scene; their drawback is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel-grouping-based method is proposed to detect ghosts in the image sequence. We introduce the zero-mean normalized cross-correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The ZNCC is computed at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images that can be shown directly on conventional display devices, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images that have fewer ghost artifacts and provide better visual quality than previous approaches.
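The ZNCC similarity measure itself is standard. A minimal implementation, which also shows why it suits multi-exposure comparison: a purely linear brightness change leaves the score near 1, so only genuine scene motion drives it down:

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross-correlation between two same-size regions.

    Returns a value in [-1, 1]; values near 1 indicate agreement with the
    reference exposure, low values flag potential ghost super-pixels.
    eps guards against division by zero on constant patches.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()                       # remove mean (exposure offset)
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)    # normalization removes gain
```

In the paper's pipeline this score is evaluated per super-pixel, and super-pixels below a similarity threshold get their fusion weights suppressed.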
Intermittent stick-slip dynamics during the peeling of an adhesive tape from a roller.
Cortet, Pierre-Philippe; Dalbe, Marie-Julie; Guerra, Claudia; Cohen, Caroline; Ciccotti, Matteo; Santucci, Stéphane; Vanel, Loïc
2013-02-01
We study experimentally the fracture dynamics during the peeling, at constant velocity, of a roller adhesive tape mounted on a freely rotating pulley. Using a high-speed camera, we measure, in an intermediate range of peeling velocities, high-frequency oscillations between phases of slow and rapid propagation of the peeling fracture. This so-called stick-slip regime is well known as the consequence of a decreasing fracture energy of the adhesive in a certain range of peeling velocities, coupled to the elasticity of the peeled tape. Simultaneously with the stick-slip, we observe low-frequency oscillations of the adhesive roller's angular velocity, which are the consequence of a pendular instability of the roller under the peeling force. The stick-slip dynamics becomes intermittent due to these slow pendular oscillations, which produce a quasistatic oscillation of the peeling angle while keeping the peeling fracture velocity (averaged over each stick-slip cycle) constant. The observed correlation between the mean peeling angle and the stick-slip amplitude questions the usually assumed independence of the adhesive's fracture energy from the peeling angle.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges before an efficient system can be developed. Among the difficult problems in vision-based systems are obtrusion, occlusion, and overlay; other associated issues are privacy, cost, noise, computational complexity, and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold, bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate between normal activities and falls. If the peak head velocity exceeds the dynamic threshold value, bounding-box posture analysis is used to confirm that a fall has occurred. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton-overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons, based on dynamic thresholding and bounding-box posture analysis, which is the only such approach reported so far.
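The dynamic-threshold test on head velocity might be sketched as follows. The mean-plus-k-sigma form of the threshold, the value of k, and the history length are illustrative assumptions, not the paper's exact rule; a positive result would then go to the 3D bounding-box posture check for confirmation:

```python
import statistics

def is_fall_candidate(head_speeds, k=3.0, min_samples=30):
    """Dynamic-threshold test on head speed (sketch).

    head_speeds: sequence of head-speed magnitudes; the last entry is
    the newest sample, the rest are the recent history. Flags a potential
    fall when the newest sample exceeds mean + k * std of the history,
    so the threshold adapts to each subject's normal motion.
    """
    history, latest = head_speeds[:-1], head_speeds[-1]
    if len(history) < min_samples:
        return False                     # not enough data for a threshold yet
    mu = statistics.fmean(history)
    sd = statistics.pstdev(history)
    return latest > mu + k * sd
```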
Three dimensional measurement with an electrically tunable focused plenoptic camera
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2017-03-01
A liquid crystal microlens array (LCMLA) with an arrayed microhole-pattern electrode, based on nematic liquid crystal materials and fabricated by traditional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experiments show that the focal length of the LCMLA can be tuned easily simply by changing the root-mean-square value of the applied voltage signal. The LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, position, and motion, are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about selected objects.
Trägårdh, Johanna; Gersen, Henkjan
2013-07-15
We show how a combination of near-field scanning optical microscopy (NSOM) with crossed-beam spectral interferometry allows a local measurement of the spectral phase and amplitude of light propagating in photonic structures. The method requires measurements only at the single point of interest and at a reference point (to correct for the relative phase of the interferometer branches) to retrieve the dispersion properties of the sample. Furthermore, since the measurement is performed in the spectral domain, the spectral phase and amplitude can be retrieved from a single camera frame, here in 70 ms for a signal power of less than 100 pW, limited by the dynamic range of the 8-bit camera. The method is substantially faster than most previous time-resolved NSOM methods, which are based on time-domain interferometry, and this also reduces problems with drift. We demonstrate how the method can be used to measure the refractive index and group velocity in a waveguide structure.
Ocelot (Leopardus pardalis) Density in Central Amazonia.
Rocha, Daniel Gomes da; Sollmann, Rahel; Ramalho, Emiliano Esterci; Ilha, Renata; Tan, Cedric K W
2016-01-01
Ocelots (Leopardus pardalis) are presumed to be the most abundant of the wild cats throughout their distribution range and to play an important role in the dynamics of sympatric small-felid populations. However, ecological information on ocelots is limited, particularly for the Amazon. We conducted three camera-trap surveys during three consecutive dry seasons to estimate ocelot density in Amanã Reserve, Central Amazonia, Brazil. We implemented a spatial capture-recapture (SCR) model that shared detection parameters among surveys. A total effort of 7020 camera-trap days yielded 93 independent ocelot records. The estimated ocelot density in Amanã Reserve (24.84 ± SE 6.27 ocelots per 100 km²) was lower than at other sites in the Amazon and also lower than expected from a correlation of density with latitude and rainfall. We also discuss the importance of using common parameters in survey scenarios with low recapture rates. This is the first density estimate for ocelots in the Brazilian Amazon, an important stronghold for the species.
USDA-ARS?s Scientific Manuscript database
The proliferation of tower-mounted cameras co-located with eddy covariance instrumentation provides a novel opportunity to better understand the relationship between canopy phenology and the seasonality of canopy photosynthesis. In this paper, we describe the abilities and limitations of webcams to ...
Evaluating methods for controlling depth perception in stereoscopic cinematography
NASA Astrophysics Data System (ADS)
Sun, Geng; Holliman, Nick
2009-02-01
Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. 
The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real time computer games.
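The fixed depth-mapping approach in (2) can be sketched as a linear remap of scene depth into the display's comfortable perceived-depth budget. The function and parameter names below are illustrative assumptions, not the authors' implementation:

```python
def map_depth(z_scene, scene_near, scene_far, disp_near, disp_far):
    """Linearly remap a scene depth into the display's perceived-depth budget.

    A fixed mapping keeps (scene_near, scene_far) constant for the whole
    sequence; a dynamic variant, as proposed in the paper, would update them
    per shot from the current frames' depth extent.
    """
    t = (z_scene - scene_near) / (scene_far - scene_near)  # normalize to [0, 1]
    return disp_near + t * (disp_far - disp_near)

# A 2-20 m scene mapped into a +/-50 mm perceived-depth budget:
print(map_depth(11.0, 2.0, 20.0, -50.0, 50.0))  # scene midpoint -> 0.0
```

The dynamic variant simply recomputes the scene range per shot before calling the same remap, which is why it copes better with depth changes caused by camera motion and zooming.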
NASA Technical Reports Server (NTRS)
Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.
2011-01-01
We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.
Rapid estimation of frequency response functions by close-range photogrammetry
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1985-01-01
The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.
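The speed-up described above rests on the fact that a static linear coordinate transform commutes with the Fourier transform: the transform mixes channels while the DFT acts along time. The error analyzed in the paper arises because the actual photogrammetric mapping is not purely such a static linear map. A minimal numerical check of the commuting property (synthetic data, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))    # static 3x3 coordinate transform
x = rng.standard_normal((3, 256))  # three measured displacement time histories

# Transform-then-FFT equals FFT-then-transform: A mixes channels while the
# DFT acts along the time axis, so the two linear operations commute.
lhs = np.fft.rfft(A @ x, axis=1)
rhs = A @ np.fft.rfft(x, axis=1)
print(np.allclose(lhs, rhs))  # -> True
```

Perspective projection and components normal to the camera focal planes break this purely linear picture, which is the irreducible error term identified in the paper.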
UVMAS: Venus ultraviolet-visual mapping spectrometer
NASA Astrophysics Data System (ADS)
Bellucci, G.; Zasova, L.; Altieri, F.; Nuccilli, F.; Ignatiev, N.; Moroz, V.; Khatuntsev, I.; Korablev, O.; Rodin, A.
This paper summarizes the capabilities and technical solutions of an Ultraviolet Visual Mapping Spectrometer designed for remote sensing of Venus from a planetary orbiter. The UVMAS consists of a multichannel camera with a spectral range of 0.19–0.49 μm which acquires data in several spectral channels (up to 400) with a spectral resolution of 0.58 nm. The instantaneous field of view of the instrument is 0.244 × 0.244 mrad. These characteristics allow: a) studying the dynamics and chemistry of the upper clouds; b) constraining the unknown absorber; c) observing the night-side airglow.
Formation Process of Non-Neutral Plasmas by Multiple Electron Beams on BX-U
NASA Astrophysics Data System (ADS)
Sanpei, Akio; Himura, Haruhiko; Masamune, Sadao
An imaging diagnostic system, composed of a handmade phosphor screen and a high-speed camera, has been applied to identify the dynamics of multiple electron beams on BX-U. The relaxation of the beams toward a non-neutral plasma is experimentally identified. Also, the radial density profile of the plasma is measured as a function of time. Assuming that the plasma has a spheroidal shape, the electron density ne lies in the range between 2.2 × 10^6 and 4.4 × 10^8 cm^-3 on BX-U.
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
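For the equisolid-angle projection mentioned above, the radial image distance obeys r = 2 f sin(θ/2). A one-parameter version of the sun-based calibration idea (fitting only the focal length, with the image center and sun detection taken as given) can be sketched as follows; the function names and the noiseless synthetic data are assumptions for illustration, not the paper's full model:

```python
import numpy as np

def equisolid_project(theta, f):
    """Equisolid-angle fisheye model: radial image distance r = 2 f sin(theta/2)."""
    return 2.0 * f * np.sin(theta / 2.0)

def calibrate_f(theta_obs, r_obs):
    """Closed-form least-squares fit of f from paired sun observations:
    zenith angles from a solar position algorithm and radial pixel distances
    of the detected sun. (The full calibration also fits the image center,
    tilt, and further lens parameters.)"""
    basis = 2.0 * np.sin(theta_obs / 2.0)
    return float(basis @ r_obs) / float(basis @ basis)

# Synthetic check: recover f = 700 px from noiseless sun observations.
theta = np.deg2rad([10.0, 30.0, 50.0, 70.0])
r = equisolid_project(theta, 700.0)
print(round(calibrate_f(theta, r), 3))  # -> 700.0
```

With noisy sun detections the same least-squares structure applies, which is why the reported calibration error scales with the sun-position noise level.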
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Measuring water level in rivers and lakes from lightweight Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Jakobsen, Jakob; Olesen, Daniel; Reyna-Gutierrez, Jose Antonio; Bauer-Gottwein, Peter
2017-05-01
The assessment of hydrologic dynamics in rivers, lakes, reservoirs and wetlands requires measurements of water level, its temporal and spatial derivatives, and the extent and dynamics of open water surfaces. Motivated by the declining number of ground-based measurement stations, research efforts have been devoted to the retrieval of these hydraulic properties from spaceborne platforms in the past few decades. However, due to coarse spatial and temporal resolutions, spaceborne missions have several limitations when assessing the water level of terrestrial surface water bodies and determining complex water dynamics. Unmanned Aerial Vehicles (UAVs) can fill the gap between spaceborne and ground-based observations, and provide high spatial resolution and dense temporal coverage data, in quick turn-around time, using flexible payload design. This study focused on categorizing and testing sensors, which comply with the weight constraint of small UAVs (around 1.5 kg), capable of measuring the range to water surface. Subtracting the measured range from the vertical position retrieved by the onboard Global Navigation Satellite System (GNSS) receiver, we can determine the water level (orthometric height). Three different ranging payloads, which consisted of a radar, a sonar and an in-house developed camera-based laser distance sensor (CLDS), have been evaluated in terms of accuracy, precision, maximum ranging distance and beam divergence. After numerous flights, the relative accuracy of the overall system was estimated. A ranging accuracy better than 0.5% of the range and a maximum ranging distance of 60 m were achieved with the radar. The CLDS showed the lowest beam divergence, which is required to avoid contamination of the signal from interfering surroundings for narrow fields of view. With the GNSS system delivering a relative vertical accuracy better than 3-5 cm, water level can be retrieved with an overall accuracy better than 5-7 cm.
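The level retrieval described above reduces to subtracting the measured range from the GNSS height, with the error budget combining the two sensors. A minimal sketch, with lever-arm and attitude corrections omitted and names assumed:

```python
import math

def water_level(gnss_height_m, range_m):
    """Orthometric water level: UAV antenna height minus the sensed range
    to the water surface (lever-arm and attitude corrections omitted)."""
    return gnss_height_m - range_m

def level_sigma(sigma_gnss_m, range_m, rel_range_err=0.005):
    """Combined 1-sigma level error, assuming independent GNSS and ranging
    errors and the reported ~0.5%-of-range radar ranging accuracy."""
    return math.hypot(sigma_gnss_m, rel_range_err * range_m)

print(water_level(100.0, 7.5))            # -> 92.5
print(round(level_sigma(0.05, 10.0), 3))  # ~0.071 m, consistent with 5-7 cm
```

At a 10 m flight height above water, a 5 cm GNSS error and a 0.5%-of-range radar error combine to roughly 7 cm, matching the overall accuracy quoted in the abstract.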
The Sensor Irony: How Reliance on Sensor Technology is Limiting Our View of the Battlefield
2010-05-10
Both the MQ-1 and MQ-9 sensor systems (e.g., the Wescam 14TS) include an electro-optical (daylight) TV camera, an infrared (thermal) camera, and a laser illuminator/range finder. Similar to the MQ-1, the MQ-9 Reaper is primarily a strike asset for emerging targets.
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At the price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser, spaced a predetermined distance from the camera, that produces a laser beam when activated. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and the angles determined from the light spots on the video images produced by the camera.
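The triangulation geometry behind this device can be sketched for a single spot: with the laser a known baseline from the camera, the bearing angle of the imaged spot fixes the range. This is a simplified illustration under assumed geometry, not the patented processing (which uses multiple diffracted spots):

```python
import math

def triangulated_range(baseline_m, spot_angle_rad):
    """Single-spot laser triangulation: the laser beam runs parallel to the
    optical axis at a known baseline offset, so the spot's bearing angle
    seen by the camera gives range = baseline / tan(angle)."""
    return baseline_m / math.tan(spot_angle_rad)

# A 10 cm baseline and a 0.5 degree spot bearing imply roughly 11.5 m range.
print(round(triangulated_range(0.10, math.radians(0.5)), 2))  # -> 11.46
```

Using several spots, as the diffractive element provides, lets the processor average multiple such estimates and reject outliers.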
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each robot is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and give inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes. PMID:26712766
Electronic camera-management system for 35-mm and 70-mm film cameras
NASA Astrophysics Data System (ADS)
Nielsen, Allan
1993-01-01
Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase locking capability to IRIG-B. In fact, IRIG-B phase lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at mach-1 to less than one inch during data reduction.
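The TSPI reduction described above (two tracking mounts reporting azimuth/elevation, followed by triangulation) can be sketched as intersecting two pointing rays in a least-squares sense. Coordinates, frame conventions and function names below are illustrative assumptions:

```python
import numpy as np

def azel_to_dir(az_deg, el_deg):
    """Unit pointing vector from azimuth (from north, clockwise) and elevation."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    return np.array([np.sin(az) * np.cos(el),   # east
                     np.cos(az) * np.cos(el),   # north
                     np.sin(el)])               # up

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two camera pointing rays:
    a least-squares position fix from two time-synchronized az/el mounts."""
    # Solve for ray parameters t1, t2 minimizing |p1 + t1 d1 - (p2 + t2 d2)|.
    a = np.array([[d1 @ d1, -(d1 @ d2)], [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two stations 1 km apart, both tracking a target at (500, 2000, 3000) m.
target = np.array([500.0, 2000.0, 3000.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 0.0, 0.0])
d1, d2 = target - p1, target - p2   # ideal (noise-free) pointing directions
est = triangulate(p1, d1 / np.linalg.norm(d1), p2, d2 / np.linalg.norm(d2))
print(np.round(est))  # -> [ 500. 2000. 3000.]
```

Frame-accurate time synchronization (the IRIG-B phase lock above) matters because both rays must correspond to the same instant; any time skew shifts one ray along the target's trajectory and biases the intersection.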
Versatile microsecond movie camera
NASA Astrophysics Data System (ADS)
Dreyfus, R. W.
1980-03-01
A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the Visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink(TM) interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
The superiority of L3-CCDs in the high-flux and wide dynamic range regimes
NASA Astrophysics Data System (ADS)
Butler, Raymond F.; Sheehan, Brendan J.
2008-02-01
Low Light Level CCD (L3-CCD) cameras have received much attention for high cadence astronomical imaging applications. Efforts to date have concentrated on exploiting them for two scenarios: post-exposure image sharpening and "lucky imaging", and rapid variability in astrophysically interesting sources. We demonstrate their marked superiority in a third distinct scenario: observing in the high-flux and wide dynamic range regimes. We realized that the unique features of L3-CCDs would make them ideal for maximizing signal-to-noise in observations of bright objects (whether variable or not), and for high dynamic range scenarios such as faint targets embedded in a crowded field of bright objects. Conventional CCDs have drawbacks in such regimes, due to a poor duty cycle: the combination of short exposure times (for time-series sampling or to avoid saturation) and extended readout times (for minimizing readout noise). For different telescope sizes, we use detailed models to show that a range of conventional imaging systems are photometrically outperformed across a wide range of object brightness, once the operational parameters of the L3-CCD are carefully set. The cross-over fluxes, above which the L3-CCD is operationally superior, are surprisingly faint, even for modest telescope apertures. We also show that the use of L3-CCDs is the optimum strategy for minimizing atmospheric scintillation noise in photometric observations employing a given telescope aperture. This is particularly significant, since scintillation can be the largest source of error in time-series photometry. These results should prompt a new direction in developing imaging instrumentation solutions for observatories.
Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.
Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander
2012-01-01
Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.
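For context on the reported figures, dynamic range and SNR in image sensors are signal ratios expressed as 20 log10 of the amplitude-domain ratio. A small conversion sketch (the example ratios are round numbers, not measurements from the paper):

```python
import math

def db(ratio):
    """Express an amplitude-domain signal ratio in decibels: 20 log10(ratio)."""
    return 20.0 * math.log10(ratio)

def ratio(db_value):
    """Invert the decibel scale back to an amplitude-domain ratio."""
    return 10.0 ** (db_value / 20.0)

print(round(db(63096), 1))  # a ~63000:1 intra-scene contrast is ~96 dB
print(round(ratio(48.0)))   # 48 dB SNR corresponds to a ~251:1 ratio
```

So the sensor's 96 dB DR means it can represent scene intensities spanning roughly four and a half orders of magnitude within one capture.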
Present status of the Japanese Venus climate orbiter
NASA Astrophysics Data System (ADS)
Nakamura, M.; Imamura, T.; Abe, T.; Ishii, N.
The code name of the 24th science spacecraft of ISAS/JAXA is Planet-C. It is the first Venus Climate Orbiter (VCO) of Japan. The Ministry of Finance of Japan finally agreed to start the phase B study of VCO in April 2004. We plan a 1-2 year phase B study followed by 2 years of flight model integration. The spacecraft will be launched between 2009 and 2010. After arriving at Venus, 2 years of operation are expected. VCO will complement ESA's Venus Express mission, which carries several spectrometers and will reveal the composition of the Venusian atmosphere. VCO, on the other hand, is designed to reveal the details of the atmospheric motion on Venus and approach the dynamics of the Venusian climate. Cooperation between the Japanese VCO and ESA's Venus Express, in a collaboration framework of U.S., European, and Japanese scientists, is very important. To elucidate the driving mechanism of the 4-day super-rotation is one of our main targets. We have 4 cameras to take snapshots of the planet at different wavelengths: the IR1 camera (1 μm), the IR2 camera (2.4 μm), the LIR camera (10-12 μm), and the UVI camera (340 nm). They are attached to the side panel of the 3-axis stabilized spacecraft and are directed to Venus with the spacecraft's attitude control. Snapshots are expected to be taken every 2 hours. The spacecraft has an orbit of 300 km × 13 Rv (Venusian radii) with 172 degrees inclination. The orbital period is 30 hours. The angular position of the spacecraft on this orbit is synchronized for 20 hours around apoapsis with the global atmospheric circulation at an altitude of 50 km, so the snapshots taken every 2 hours will image the same side of the atmosphere. In addition to these 4 cameras, we have a Lightning and Airglow Camera (LAC) in the visible range, which will be operated when the orbiter is close to the planet.
Observations of the Perseids 2012 using SPOSH cameras
NASA Astrophysics Data System (ADS)
Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.
2012-09-01
The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer monitoring the Perseids activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [0]. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). Figure 1: A meteor captured by the SPOSH cameras simultaneously during the 2011 observing campaign in Greece; the horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera. The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitudes for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR.
Assuming a successful campaign, statistics, trajectories and photometric properties of the processed double-station meteors will be presented at the conference. Furthermore, a first order statistical analysis of the meteors processed during the 2011 and the new 2012 campaigns will be presented [0].
Advanced Video Guidance Sensor (AVGS) Development Testing
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.
2004-01-01
NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing at every phase of development. The test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long-range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters, both in vacuum and in ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.
Analysis of the variation of range parameters of thermal cameras
NASA Astrophysics Data System (ADS)
Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał
2016-10-01
Measured range characteristics may vary considerably (by up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process lacks repeatability or whether the commonly used measurement procedures themselves need improvement. The presented paper attempts to answer this question. The measurement method has been thoroughly analyzed, as has the measurement test bed. Camera components (such as the detector and optics) have also been analyzed and their key parameters measured, including noise figures of the entire system. Laboratory measurements are the most precise method of determining the range parameters of a thermal camera. However, in order to obtain reliable results, several important conditions have to be fulfilled. One must have test equipment whose measurement accuracy (uncertainty) is significantly better than the magnitudes of the measured quantities. The measurements must be performed in a controlled environment, excluding the influence of varying environmental conditions. The personnel must be well trained, experienced in testing thermal imaging devices, and familiar with the applied measurement procedures. The measurement data recorded for several dozen cooled thermal cameras (from one of the leading camera manufacturers) form the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of the Institute of Optoelectronics (Military University of Technology).
Through the Creator's Eyes: Using the Subjective Camera to Study Craft Creativity
ERIC Educational Resources Information Center
Glaveanu, Vlad Petre; Lahlou, Saadi
2012-01-01
This article addresses a methodological gap in the study of creativity: the difficulty of capturing the microgenesis of creative action in ways that would reflect both its psychological and behavioral dynamics. It explores the use of subjective camera (subcam) by research participants as part of an adapted Subjective Evidence-Based Ethnography…
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed people's communication over the last ten years. However, these devices offer not only communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) sensors to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality, and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
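The core of any such simplistic PIV system is window-wise cross-correlation between two consecutive frames: the location of the correlation peak gives the local particle displacement. A minimal sketch under stated assumptions (NumPy/SciPy; the function name, window size, and integer-pixel peak search are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacements(frame_a, frame_b, win=32):
    """Basic PIV: split both frames into interrogation windows and
    locate the cross-correlation peak to estimate local displacement."""
    rows, cols = frame_a.shape[0] // win, frame_a.shape[1] // win
    field = np.zeros((rows, cols, 2))
    for i in range(rows):
        for j in range(cols):
            a = frame_a[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            b = frame_b[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            a -= a.mean()
            b -= b.mean()
            # correlate b against a: the peak offset is the a -> b shift
            corr = fftconvolve(b, a[::-1, ::-1], mode='full')
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            field[i, j] = (dy - (win - 1), dx - (win - 1))
    return field
```

A production system would add sub-pixel peak interpolation and outlier rejection, but this integer-pixel version already captures the measurement principle.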
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. 
However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
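The first analysis step above, relating camera detection probability to nearby telemetry relocations, is a standard binomial GLM. A hedged sketch of such a fit (pure-NumPy iteratively reweighted least squares; the synthetic data and variable names are illustrative, not the fisher dataset):

```python
import numpy as np

def fit_logistic_glm(x, y, iters=25):
    """Fit P(detection) = logistic(b0 + b1 * x) by iteratively
    reweighted least squares (Newton's method for the binomial GLM)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta

# synthetic example: detection becomes more likely with more relocations
rng = np.random.default_rng(0)
relocs = np.repeat(np.arange(10), 50)      # relocations near the camera trap
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * relocs)))
detected = (rng.random(relocs.size) < p_true).astype(float)
b0, b1 = fit_logistic_glm(relocs, detected)
```

A positive fitted slope corresponds to the paper's finding that relocation counts within 250-500 m predict detection probability well; the mixed-effects extension would add a per-animal random intercept.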
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos containing over 33,000 frames and captured using visible-range and near-infrared sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 visible-range and infrared videos for onshore and onboard ship camera placements.
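The pipeline can be illustrated with a toy version: median-smooth the image at several scales, accumulate a weighted edge map from vertical gradients, and score candidate horizon lines with a Radon-style projection over angles. A simplified sketch, assuming a horizon that is bright-above/dark-below (NumPy/SciPy; the real MuSCoWERT uses a proper weighted Radon transform and histogram voting across scales):

```python
import numpy as np
from scipy import ndimage

def detect_horizon(img, angles=None):
    """Toy multi-scale horizon detector: returns (angle_deg, row)."""
    if angles is None:
        angles = np.linspace(-10.0, 10.0, 41)   # candidate tilts, step 0.5 deg
    edge = np.zeros_like(img, dtype=float)
    for size in (3, 5, 9):                       # multiple smoothing scales
        smooth = ndimage.median_filter(img.astype(float), size=size)
        gy = np.abs(np.gradient(smooth, axis=0)) # horizon ~ horizontal edge
        edge += gy / (gy.max() + 1e-12)
    best = (-np.inf, 0.0, 0)
    for ang in angles:
        rot = ndimage.rotate(edge, ang, reshape=False, order=1)
        profile = rot.sum(axis=1)                # projection along candidate line
        r = int(np.argmax(profile))
        if profile[r] > best[0]:
            best = (profile[r], float(ang), r)
    _, angle_deg, row = best
    return angle_deg, row
```

Edges that persist across all smoothing scales dominate the accumulated map, which is the "multi-scale consistence" idea in miniature.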
Time-lapse camera observations of gas piston activity at Pu‘u ‘Ō‘ō, Kīlauea volcano, Hawai‘i
Orr, Tim R.; Rea, James
2012-01-01
Gas pistoning is a type of eruptive behavior described first at Kīlauea volcano and characterized by the (commonly) cyclic rise and fall of the lava surface within a volcanic vent or lava lake. Though recognized for decades, its cause continues to be debated, and determining why and when it occurs has important implications for understanding vesiculation and outgassing processes at basaltic volcanoes. Here, we describe gas piston activity that occurred at the Pu‘u ‘Ō‘ō cone, in Kīlauea’s east rift zone, during June 2006. Direct, detailed measurements of lava level, made from time-lapse camera images captured at close range, show that the gas pistons during the study period lasted from 2 to 60 min, had volumes ranging from 14 to 104 m3, displayed a slowing rise rate of the lava surface, and had an average gas release duration of 49 s. Our data are inconsistent with gas pistoning models that invoke gas slug rise or a dynamic pressure balance but are compatible with models which appeal to gas accumulation and loss near the top of the lava column, possibly through the generation and collapse of a foam layer.
Deep and wide photometry of two open clusters NGC 1245 and NGC 2506: dynamical evolution and halo
NASA Astrophysics Data System (ADS)
Lee, S. H.; Kang, Y.-W.; Ann, H. B.
2013-06-01
We studied the structure of two old open clusters, NGC 1245 and NGC 2506, using wide and deep VI photometry acquired with the CFH12K CCD camera at the Canada-France-Hawaii Telescope. We devised a new method for assigning cluster membership probability to individual stars using both spatial positions and positions in the colour-magnitude diagram. From analyses of the luminosity functions at several cluster-centric radii and of the radial surface density profiles derived from stars in different luminosity ranges, we found that the two clusters are dynamically relaxed enough to drive significant mass segregation and the evaporation of some fraction of their low-mass stars. There seems to be a signature of a tidal tail in NGC 1245, but the signal is too low to be confirmed.
Bioluminescent Antibodies for Point-of-Care Diagnostics.
Xue, Lin; Yu, Qiuliyang; Griss, Rudolf; Schena, Alberto; Johnsson, Kai
2017-06-12
We introduce a general method to transform antibodies into ratiometric, bioluminescent sensor proteins for the no-wash quantification of analytes. Our approach is based on the genetic fusion of antibody fragments to NanoLuc luciferase and SNAP-tag, the latter being labeled with a synthetic fluorescent competitor of the antigen. Binding of the antigen, here synthetic drugs, by the sensor displaces the tethered fluorescent competitor from the antibody and disrupts bioluminescent resonance energy transfer (BRET) between the luciferase and fluorophore. The semisynthetic sensors display a tunable response range (submicromolar to submillimolar) and large dynamic range (ΔRmax > 500%), and they permit the quantification of analytes through spotting of the samples onto paper followed by analysis with a digital camera. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
NASA Astrophysics Data System (ADS)
Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier
2016-12-01
The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited sources (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume's dynamic chemistry and transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. While benefiting from high retrieval accuracy, these achieve only a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera, as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera's capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m² area). The detection limit was close to 5 × 10¹⁶ molecules cm⁻², with a maximum detected SCD of 4 × 10¹⁷ molecules cm⁻². Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO-to-NO2 conversion in the early plume with unprecedented resolution: from its release into the air, and for 100 m upwards, the observed NO2 content of the plume increased at a rate of 0.75-1.25 g s⁻¹.
In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the NO2 interference with the SO2 spectrum.
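The retrieval behind such SO2/NO2 cameras is a two-band Beer-Lambert inversion: the differential optical depth between an absorbing ("on") and a weakly absorbing ("off") band, each referenced to a clear-sky background, is divided by the differential absorption cross section. A minimal sketch (NumPy; the symbols and the cross-section values in the test are illustrative, not the paper's calibration):

```python
import numpy as np

def slant_column(I_on, I_off, I0_on, I0_off, sig_on, sig_off):
    """Two-band Beer-Lambert retrieval of a slant column density.
    I_on, I_off   : plume images in the two spectral bands
    I0_on, I0_off : clear-sky background images in the same bands
    sig_on/off    : absorption cross sections [cm^2/molecule] per band
    Returns SCD in molecules/cm^2 (per pixel if arrays are passed)."""
    tau = np.log(I0_on / I_on) - np.log(I0_off / I_off)  # differential optical depth
    return tau / (sig_on - sig_off)
```

The off-band term cancels broadband extinction (aerosol scattering, condensed water) that affects both wavelengths similarly, which is why a single-band measurement would be biased.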
Adaptive foveated single-pixel imaging with dynamic supersampling
Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.
2017-01-01
In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
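The underlying measurement model is simple to state: each detector reading is an inner product of the scene with one displayed pattern, and with an orthogonal (e.g. Hadamard) pattern basis the image is recovered exactly as a pattern-weighted sum of the readings. A minimal, fully sampled sketch (NumPy; the paper's actual contribution — spatially varying foveal sampling — is deliberately omitted here):

```python
import numpy as np

def hadamard(n):
    """Sylvester-Hadamard matrix of order n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def measure(scene, patterns):
    # one single-pixel detector reading per displayed pattern
    return patterns @ scene.ravel()

def reconstruct(readings, patterns, shape):
    # for a Sylvester-Hadamard basis, H @ H.T = n * I, so this inverts exactly
    return (patterns.T @ readings).reshape(shape) / patterns.shape[0]
```

The frame-rate limitation the abstract describes follows directly: an N-pixel image needs N readings, which is what compressive or foveated undersampling strategies try to avoid.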
A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.
Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi
2018-02-01
Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). This is made possible by the adoption of an in-pixel preamplification stage. The preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON events), but an automated operating-region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of a characterization methodology for measuring DVS event detection thresholds that incorporates a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed-pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of the SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely used to measure ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and measuring laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image converting tube with a wavelength-sensitive photocathode covering the X-ray to near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new type of streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon beam deflection using the electro-optic effect, and can replace the current streak camera in the visible to near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6 × 10⁻¹² m/V), and an optimized optical system may lead to a time resolution better than 1 ns.
Camera Operator and Videographer
ERIC Educational Resources Information Center
Moore, Pam
2007-01-01
Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…
Conceptual Design and Dynamics Testing and Modeling of a Mars Tumbleweed Rover
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Harris, Steven B.; Raiszadeh, Behzad; Zaleski, Kristina D.
2005-01-01
The NASA Langley Research Center has been developing a novel concept for a Mars planetary rover called the Mars Tumbleweed. This concept utilizes the wind to propel the rover along the Mars surface, giving it the potential to cover vast distances not possible with current Mars rover technology. This vehicle, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from rest on the Mars surface. One Tumbleweed design concept that satisfies these considerations is called the Eggbeater-Dandelion. This paper describes the basic design considerations and a proposed dynamics model of the concept for use in simulation studies. It includes a summary of rolling/bouncing dynamics tests that used videogrammetry to better understand, characterize, and validate the dynamics model assumptions, especially the effective rolling resistance in bouncing/rolling dynamic conditions. The dynamics test used cameras to capture the motion of 32 targets affixed to the test article's outer structure. Proper placement of the cameras and alignment of their respective fields of view provided adequate image resolution of multiple targets along the trajectory as the test article proceeded down the ramp. Image processing of the frames from multiple cameras was used to determine the target positions. Position data from a set of these test runs were compared with the results of a three-dimensional, flexible dynamics model. Model input parameters were adjusted to match the test data for the runs conducted. The process presented herein provided the means to characterize the dynamics and validate the simulation of the Eggbeater-Dandelion concept. The simulation model was used to demonstrate full-scale Tumbleweed motion from a stationary condition on flat-sloped terrain using representative Mars environment parameters.
An Automatic Portable Telecine Camera.
1978-08-01
five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the
Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher
2016-01-01
Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for the evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, an overhead squat, and a single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
NASA Astrophysics Data System (ADS)
Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.
2014-09-01
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
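In outline, the correction amounts to a linear map from reference-camera pixel motion to apparent scene-camera motion, fitted during a calibration interval when the scene object is known to be stationary, and then subtracted from the live measurements. A schematic sketch (NumPy; the array layouts and the one-gain-per-axis linear model are my simplifying assumptions, not the authors' exact mapping):

```python
import numpy as np

def calibrate_gain(scene_px, ref_px):
    """Per-axis least-squares gain, fitted on frames in which the scene
    object is known stationary, so all apparent motion is camera motion.
    scene_px, ref_px: (T, 2) arrays of tracked pixel positions."""
    ds = scene_px - scene_px.mean(axis=0)
    dr = ref_px - ref_px.mean(axis=0)
    return (ds * dr).sum(axis=0) / (dr ** 2).sum(axis=0)

def correct_scene(scene_px, ref_px, gain):
    """Subtract the apparent motion predicted from the reference camera,
    which views a stationary object (e.g. on the ground)."""
    return scene_px - (ref_px - ref_px[0]) * gain
```

The gain absorbs the differing focal lengths and object distances of the two rigidly coupled cameras; a fuller treatment would also fit rotation coupling between axes.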
Dynamic laser beam shaping for material processing using hybrid holograms
NASA Astrophysics Data System (ADS)
Liu, Dun; Wang, Yutao; Zhai, Zhongsheng; Fang, Zheng; Tao, Qing; Perrie, Walter; Edwarson, Stuart P.; Dearden, Geoff
2018-06-01
A high-quality, dynamic laser beam shaping method is demonstrated by displaying a series of hybrid holograms on a spatial light modulator (SLM), each of which consists of a binary grating and a geometric mask. The diffraction effect around the shaped beam is significantly reduced. Beam profiles of arbitrary shape, such as square, ring, triangle, pentagon, and hexagon, can be conveniently obtained by loading the corresponding holograms on the SLM. The shaped beam can be reconstructed within a range of 0.5 mm about the image plane. Ablation of a polished stainless steel sample at the image plane is consistent with the beam shape in the diffraction near field. The ±1st-order and higher-order beams can be completely removed when the grating period is smaller than 160 μm. The local energy ratio of the shaped beam observed by the CCD camera is up to 77.67%. Dynamic processing at 25 Hz using different shapes has also been achieved.
Synchro-ballistic recording of detonation phenomena
NASA Astrophysics Data System (ADS)
Critchfield, Robert R.; Asay, Blaine W.; Bdzil, John B.; Davis, William C.; Ferm, Eric N.; Idar, Deanne J.
1997-12-01
Synchro-ballistic use of rotating-mirror streak cameras allows detailed recording of high-speed events of known velocity and direction. After an introduction to the synchro-ballistic technique, this paper details two diverse applications of the technique in the field of high-explosives research. In the first series of experiments, the detonation-front shape is recorded as the arriving detonation shock wave tilts an obliquely mounted mirror, causing reflected light to be deflected from the imaging lens. These tests were conducted to calibrate and confirm the asymptotic detonation shock dynamics (DSD) theory of Bdzil and Stewart. The phase velocities of the events range from ten to thirty millimeters per microsecond. Optical magnification is set for optimal use of the film's spatial dimension, and the phase velocity is adjusted to provide synchronization at the camera's maximum writing speed. Initial calibration of the technique is undertaken using a cylindrical HE geometry over a range of charge diameters, with a length-to-diameter ratio sufficient to ensure a stable detonation wave. The final experiment utilizes an arc-shaped explosive charge, resulting in an asymmetric detonation-front record. The second series of experiments consists of photographing a shaped-charge jet having a velocity range of two to nine millimeters per microsecond. To accommodate the range of velocities it is necessary to fire several tests, each synchronized to a different section of the jet. The experimental apparatus consists of a vacuum chamber, to preclude atmospheric ablation of the jet tip, with shocked-argon back lighting to produce a shadowgraph image.
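The synchronization condition described above can be stated compactly. In notation of my own (not the paper's), with film writing speed v_write, optical magnification m, and event phase velocity v_phase, the streak image is stationary on the moving film when

```latex
v_{\mathrm{write}} = m \, v_{\mathrm{phase}}
```

so fixing the camera at its maximum writing speed ties the magnification and phase velocity together, which is why magnification is chosen first for film coverage and the phase velocity is then adjusted to satisfy the condition.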
Dynamic frequency-domain interferometer for absolute distance measurements with high resolution
NASA Astrophysics Data System (ADS)
Weng, Jidong; Liu, Shenggang; Ma, Heli; Tao, Tianjiong; Wang, Xiang; Liu, Cangli; Tan, Hua
2014-11-01
A unique dynamic frequency-domain interferometer for absolute distance measurement has been developed recently. This paper presents the working principle of the new interferometric system, which uses a photonic crystal fiber to transmit wide-spectrum light beams and a high-speed streak camera or framing camera to record the interference stripes. Preliminary measurements of the harmonic vibrations of a speaker driven by a radio, and of changes in the tip clearance of a rotating gear wheel, show that this new type of interferometer can perform absolute distance measurements with both high time and distance resolution.
Dust measurements in tokamaks (invited).
Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C
2008-10-01
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak, dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by the cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C₂ dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
Juno observes the dynamics of Jupiter's atmosphere
NASA Astrophysics Data System (ADS)
Ingersoll, Andrew P.; Juno Science Team
2017-10-01
Jupiter is a photogenic planet, but our knowledge of the deep atmosphere is limited. Remote sensing observations have traditionally probed within and above the cloud tops, which are in the 0.5-1.0 bar pressure range. Dynamical models have focused on explaining this data set. Microwave observations from Earth probe down to the 5-10 bar range, which overlaps with the predicted base of the water cloud. The Galileo probe yielded data on winds, composition, temperature gradients, clouds, radiant flux, and lightning down to 22 bars, but only at one place on the planet. Further, the traditional observations are constrained to cover low and middle latitudes. In contrast, Juno's camera and infrared radiometer, JunoCam and JIRAM, have yielded images of the poles that show cyclonic vortices in polygonal arrangements. Juno's microwave radiometer yields latitude-altitude cross sections that show dynamical features of the ammonia distribution down to 50-100 bars. And Jupiter's gravity field yields information about the winds at thousands of km depth, where the pressures are tens of kbars. In this talk I will summarize the Juno observations that pertain to the dynamics of Jupiter's atmosphere and I will offer some of my own interpretations. The new data raise as many questions as answers, but that is as it should be. As Ed Stone said during a Voyager encounter, "If we knew all the answers before we got there, we wouldn't be learning anything."
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2017-10-01
Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS) actually intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash lamp pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Further, the range accuracy is estimated from measured image data.
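The underlying two-gate depth idea can be sketched under an idealized model. This is a hedged illustration assuming perfectly rectangular gate profiles and a uniform pulse, so that the split of returned energy between the two shifted gates varies linearly with range; it is not the paper's CDS-based readout, and the function and numbers are invented for illustration:

```python
# Hedged sketch: sub-gate depth from two range-gated intensities under an
# idealized model (rectangular gates, uniform pulse). Not the paper's scheme.
def gate_depth(i1, i2, gate_start_m, gate_shift_m):
    """With ideal rectangular gates shifted by gate_shift_m, the fraction of
    pulse energy landing in the second gate grows linearly with target range
    across the shift region, so the intensity ratio encodes sub-gate depth."""
    ratio = i2 / (i1 + i2)  # 0 at gate_start_m, 1 at gate_start_m + shift
    return gate_start_m + ratio * gate_shift_m

# A target returning equal energy to both gates sits mid-shift.
depth = gate_depth(0.5, 0.5, 100.0, 3.0)  # -> 101.5
```

The attraction of the CDS variant described in the abstract is that both samples come from one laser pulse, removing the 50 ms inter-frame lag of the two-image approach.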
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraser, Wesley C.; Brown, Michael E.; Glass, Florian, E-mail: wesley.fraser@nrc.ca
2015-05-01
Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in the optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlatedmore » optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a sufficiently broad difference in color at the two epochs that span the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.« less
Studies on dynamic behavior of rotating mirrors
NASA Astrophysics Data System (ADS)
Li, Jingzhen; Sun, Fengshan; Gong, Xiangdong; Huang, Hongbin; Tian, Jie
2005-02-01
A rotating mirror is a kernel unit in a Miller-type high-speed camera, serving both as an imaging element in the optical path and as the element that implements ultrahigh-speed photography. According to Schardin's principle, the information capacity of an ultrahigh-speed camera with a rotating mirror depends on the primary wavelength of the lighting used by the camera and on the limit linear velocity at the edge of the rotating mirror; the latter is related to the material (including technological specifications), cross-section shape, and lateral structure of the rotating mirror. In this manuscript the dynamic behavior of high-strength aluminium alloy rotating mirrors is studied, from which it is preliminarily shown that an aluminium alloy rotating mirror can be used as a replacement for a steel or titanium alloy rotating mirror in framing photographic systems, and could also be used as a substitute for a beryllium rotating mirror in streak photographic systems.
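The edge-velocity limit mentioned above translates directly into a maximum rotation rate, and hence a writing speed at the film arc, since the reflected beam sweeps at twice the mirror's angular velocity. A minimal sketch with illustrative numbers, not values from the paper:

```python
import math

# Hedged sketch: the material limit fixes v_max = omega * r at the mirror
# edge; the reflected beam rotates at 2*omega, so the writing speed on a film
# arc at distance L is 2*omega*L. All numbers below are illustrative.
def mirror_limits(v_edge_max, r_mirror, arm_length):
    omega = v_edge_max / r_mirror           # rad/s at the edge-velocity limit
    rpm = omega * 60.0 / (2.0 * math.pi)    # rotation rate
    writing_speed = 2.0 * omega * arm_length  # beam sweep speed at the film
    return rpm, writing_speed

# e.g. 1000 m/s allowed edge speed, 2 cm mirror radius, 0.5 m arm
rpm, w = mirror_limits(v_edge_max=1000.0, r_mirror=0.02, arm_length=0.5)
```

A higher allowed edge velocity (the quantity the alloy choice controls) raises both numbers proportionally, which is why the material comparison in the paper matters.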
Dynamic Human Body Modeling Using a Single RGB Camera.
Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan
2016-03-18
In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
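The "narrowest view angle that does not lose any part of the object region" can be estimated from a simple pinhole model given the object's extent and distance. This is a hedged sketch with illustrative geometry, not the broadcast system's actual cooperative controller:

```python
import math

# Hedged sketch: minimum full view angle covering an object of given extent
# centered at a given distance, under a pinhole model. Illustrative only.
def min_view_angle_deg(object_extent_m, distance_m):
    return math.degrees(2.0 * math.atan((object_extent_m / 2.0) / distance_m))

# e.g. a 2 m-tall subject seen from 10 m
angle = min_view_angle_deg(2.0, 10.0)  # ~11.4 degrees
```

Zooming each reference camera to this angle, rather than a fixed wide angle, concentrates its pixels on the subject, which is the resolution-maximizing behavior the abstract describes.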
Positioning sensor by combining optical projection and photogrammetry
NASA Astrophysics Data System (ADS)
Zheng, Benrui
Six spatial parameters, (x, y, z) for translation, and pitch, roll, and yaw for rotation, are used to describe the 3-dimensional position and orientation of a rigid body---the 6 degrees of freedom (DOF). The ability to measure these parameters is required in a diverse range of applications including machine tool metrology, robot calibration, motion control, motion analysis, and reconstructive surgery. However, there are limitations associated with the currently available measurement systems. Shortcomings include some of the following: short dynamic range, limited accuracy, line of sight restrictions, and capital cost. The objective of this dissertation was to develop a new metrology system that overcomes line of sight restrictions, reduces system costs, allows large dynamic range and has the potential to provide high measurement accuracy. The new metrology system proposed in this dissertation is based on a combination of photogrammetry and optical pattern projection. This system has the potential to enable real-time measurement of a small lightweight module's location. The module generates an optical pattern that is observable on the surrounding walls, and photogrammetry is used to measure the absolute coordinates of features in the projected optical pattern with respect to a defined global coordinate system. By combining these absolute coordinates with the known angular information of the optical projection beams, a minimization algorithm can be used to extract the absolute coordinates and angular orientation of the module itself. The feasibility of the proposed metrology system was first proved through preliminary experimental tests. By using a module with a 7×7 dot matrix pattern, experimental agreement of 1 to 5 parts in 10³ was obtained by translating the module over 0.9 m and by rotating it through 60°. The proposed metrology system was modeled through numerical simulations and factors affecting the uncertainty of the measurement were investigated.
The simulation results demonstrate that optimum design of the projected pattern gives a lower associated measurement uncertainty than is possible by direct photogrammetric measurement with traditional tie points alone. Based on the simulation results, a few improvements have been made to the proposed metrology system. These improvements include using a module with a larger full view angle and a larger number of dots, performing angle calibration for the module, using a virtual camera approach to determine the module location, and employing multiple coordinate systems for large-range rotation measurement. With the newly proposed virtual camera approach, experimental agreement at the level of 3 parts in 10⁴ was observed for the one-dimensional translation test. The virtual camera approach is faster than the original minimization algorithm, and an additional minimization analysis is no longer needed. In addition, the virtual camera approach offers the additional benefit that it is no longer necessary to identify all dots in the pattern, and so it is more amenable to use in realistic and usually complicated environments. A preliminary rotation test over 120° was conducted by tying three coordinate systems together. It was observed that the absolute values of the angle differences between the measured angle and the encoder reading are smaller than 0.23° for all measurements. It is found that this proposed metrology system has the ability to measure a larger angular range (up to 360°) by using multiple coordinate systems. The uncertainty analysis of the proposed system was performed through Monte Carlo simulation, and it was demonstrated that the experimental results are consistent with the analysis.
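A common computational core of such pose extraction is rigid alignment between known model points and their measured absolute coordinates. The sketch below is the classic Kabsch least-squares alignment, offered as a hedged stand-in for the dissertation's minimization over beam angles, which it does not reproduce; the point sets are synthetic:

```python
import numpy as np

# Hedged stand-in: Kabsch rigid alignment R, t minimizing
# ||R @ model + t - measured||^2, a standard core of 6-DOF pose solvers.
def rigid_pose(model_pts, measured_pts):
    cm, cd = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cd)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cm
    return R, t

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90° about z
t_true = np.array([2.0, 0.5, -1.0])
R_est, t_est = rigid_pose(pts, pts @ R_true.T + t_true)
```

With noisy measured dot coordinates the same least-squares structure applies; only the residual weighting changes.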
Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo
2018-06-15
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by physical size, as a stereo camera is constrained by its baseline, and it overcomes the limited depth range problem associated with SLAM for RGBD cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail based on the local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich RGBD dataset and self-collected data. We compare the effects of the scale estimation and drift correction of the proposed method with SLAM for a monocular camera and an RGBD camera.
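The core of the absolute-scale idea can be sketched simply: monocular SLAM recovers depth only up to an unknown scale, so ratios of metric laser ranges to the unitless monocular depths at the laser spot fix that scale. A minimal illustration with invented readings, using a median to reject outliers; this is not the paper's full derivation or its drift-correction scheme:

```python
import statistics

# Hedged sketch: metric scale as the median ratio of 1D laser ranges (meters)
# to the monocular SLAM depths at the laser spot (arbitrary units).
# Readings below are invented for illustration.
def estimate_scale(laser_ranges_m, mono_depths):
    return statistics.median(r / d for r, d in zip(laser_ranges_m, mono_depths))

scale = estimate_scale([2.0, 4.1, 6.0], [1.0, 2.0, 3.0])  # -> 2.0
```

Multiplying the monocular trajectory and map by this scale yields metric output; re-estimating it over time is what allows correction of slow scale drift.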
Electro-optical system for gunshot detection: analysis, concept, and performance
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Madura, H.; Trzaskawka, P.; Bieszczad, G.; Sosnowski, T.
2011-08-01
The paper discusses technical possibilities to build an effective electro-optical sensor unit for sniper detection using infrared cameras. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. First, an analysis is presented of three distinct phases of sniper activity: before, during, and after the shot. On the basis of experimental data, the parameters defining the relevant sniper signatures were determined, which are essential in assessing the capability of an infrared camera to detect sniper activity. A sniper body and muzzle flash were analyzed as targets, the phenomena which make it possible to detect sniper activities in infrared spectra were described, and an analysis of physical limitations was performed. The analyzed infrared systems were simulated using NVTherm software. Calculations were performed for several cameras equipped with different lenses and detector types. The simulation of detection ranges was performed for selected scenarios of sniper detection tasks. After the analysis of the simulation results, the technical specifications of an infrared sniper detection system required to provide the assumed detection range were discussed. Finally, an infrared camera setup was proposed which can detect a sniper at a range of 1000 meters.
Near infrared photography with a vacuum-cold camera. [Orion nebula observation
NASA Technical Reports Server (NTRS)
Rossano, G. S.; Russell, R. W.; Cornett, R. H.
1980-01-01
Sensitized cooled plates have been obtained of the Orion nebula region and of Sh2-149 in the wavelength ranges 8000-9000 Å and 9000-11,000 Å with a recently designed and constructed vacuum-cold camera. Sensitization procedures are described and the camera design is presented.
Dynamic light scattering microscopy
NASA Astrophysics Data System (ADS)
Dzakpasu, Rhonda
An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics we show theoretically that within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance defining the average distance between constructive and destructive interference in the image plane is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the use of the rate of the column-by-column readout transfer process as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison to conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information regarding the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique, we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations, on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical resolution scale and provides a new kind of spatial contrast.
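The fluctuation decay rates that DLSM maps are conventionally extracted from the intensity autocorrelation, g2(τ) = 1 + β·exp(-2Γτ). The sketch below inverts a synthetic correlation for Γ by a log-linear fit; it illustrates this standard relation, not the paper's exact analysis pipeline, and all values are synthetic:

```python
import math

# Hedged sketch: recover the decay rate Gamma from sampled g2(tau) values
# via a log-linear least-squares fit of (g2 - 1)/beta = exp(-2*Gamma*tau).
def decay_rate(taus, g2, beta=1.0):
    ys = [math.log((g - 1.0) / beta) for g in g2]  # should be -2*Gamma*tau
    n = len(taus)
    slope = (n * sum(x * y for x, y in zip(taus, ys)) - sum(taus) * sum(ys)) / \
            (n * sum(x * x for x in taus) - sum(taus) ** 2)
    return -slope / 2.0

taus = [0.001 * k for k in range(1, 6)]                  # 1-5 ms lags
g2 = [1.0 + math.exp(-2.0 * 300.0 * t) for t in taus]    # synthetic, Gamma = 300/s
gamma_est = decay_rate(taus, g2)  # -> ~300
```

Applying such a fit pixel by pixel yields the spatial decay-rate map the abstract describes.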
Longitudinal bunch shaping of picosecond high-charge MeV electron beams
Beaudoin, B. L.; Thangaraj, J. C. T.; Edstrom, Jr., D.; ...
2016-10-20
With ever increasing demands for intensities in modern accelerators, the understanding of space-charge effects becomes crucial. Herein are presented measurements of optically shaped picosecond-long electron beams in a superconducting L-band linac over a wide range of charges, from 0.2 nC to 3.4 nC. At low charges, the shape of the electron beam is preserved, while at higher charge densities, modulations on the beam convert to energy modulations. Here, energy profile measurements using a spectrometer and time profile measurements using a streak camera reveal the dynamics of longitudinal space-charge on MeV-scale electron beams.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.
Performances Of The New Streak Camera TSN 506
NASA Astrophysics Data System (ADS)
Nodenot, P.; Imhoff, C.; Bouchu, M.; Cavailler, C.; Fleurot, N.; Launspach, J.
1985-02-01
The number of streak cameras used in research laboratories has continuously increased during the past years. The growth of this type of equipment is due to the development of various measurement techniques in the nanosecond and picosecond range. Among the many different applications, we would mention detonics chronometry measurement, measurement of the speed of matter by means of Doppler-laser interferometry, and laser and plasma diagnostics associated with laser-matter interaction. The old range of cameras has been remodelled, in order to standardize and rationalize the production of ultrafast cinematography instruments, to produce a single camera known as the TSN 506. The TSN 506 is composed of an electronic control unit built around the image converter tube; it can be fitted with a nanosecond sweep circuit covering the whole range from 1 ms to 200 ns or with a picosecond circuit providing streak durations from 1 to 100 ns. We shall describe the main electronic and opto-electronic performance of the TSN 506 operating in these two temporal ranges.
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for the Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. It is intended to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
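As a hedged stand-in for the paper's SVM regression (whose kernel parameters are not given here), a closely related RBF kernel ridge regressor in plain NumPy illustrates the idea of learning a distortion field from image coordinates; the radial distortion model, data, and hyperparameters below are all synthetic:

```python
import numpy as np

# Hedged stand-in for RBF-kernel SVM regression of lens distortion:
# kernel ridge regression with a Gaussian (RBF) kernel, pure NumPy.
def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(train_xy, train_d, test_xy, gamma=0.5, lam=1e-6):
    K = rbf(train_xy, train_xy, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), train_d)
    return rbf(test_xy, train_xy, gamma) @ alpha

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, (200, 2))       # normalized image coordinates
r2 = (xy ** 2).sum(axis=1)
d = 0.1 * r2 + 0.02 * r2 ** 2               # synthetic radial distortion
pred = fit_predict(xy, d, np.array([[0.3, 0.4]]))  # true value here: 0.02625
```

The learned corrections would then feed the bundle adjustment in place of a parametric distortion polynomial, which is the substitution the study investigates.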
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
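A quick arithmetic check on the quoted infrared operating modes shows the trade the commandable architecture makes: the fast mode gives up field of view to raise frame rate while keeping pixel throughput within a similar readout budget. Illustrative arithmetic only:

```python
# Hedged arithmetic check: pixel throughput (frame size x frame rate) for the
# two extreme infrared modes quoted in the abstract.
def throughput(width_px, height_px, fps):
    return width_px * height_px * fps  # pixels per second

full = throughput(640, 480, 30)    # 30 Hz full-frame mode
fast = throughput(133, 142, 300)   # 300 Hz windowed mode
```

The windowed 300 Hz mode actually reads fewer pixels per second than the full-frame mode, so the mode set is readout-limited rather than throughput-limited.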
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9 ° × 2.25 ° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
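One way to read the shuttering claim: shortening the electronic exposure by a factor k lets the instrument image scenes brighter by the same factor without saturating, extending the effective dynamic range by k. The exposure times below are illustrative assumptions, not MSI specifications:

```python
import math

# Hedged arithmetic sketch: dynamic-range extension from a commandable
# electronic shutter, as the ratio of longest to shortest exposure.
# The 110 ms / 1 ms pair is an invented example consistent with the
# abstract's "more than a factor of 100", not an MSI parameter.
def dr_extension(t_max_s, t_min_s):
    factor = t_max_s / t_min_s
    return factor, 20.0 * math.log10(factor)  # factor, and the same in dB

factor, db = dr_extension(0.110, 0.001)  # ~110x, ~40.8 dB
```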
Digital holographic interferometry for characterizing deformable mirrors in aero-optics
NASA Astrophysics Data System (ADS)
Trolinger, James D.; Hess, Cecil F.; Razavi, Payam; Furlong, Cosme
2016-08-01
Measuring and understanding the transient behavior of a surface with high spatial and temporal resolution are required in many areas of science. This paper describes the development and application of a high-speed, high-dynamic range, digital holographic interferometer for high-speed surface contouring with fractional wavelength precision and high-spatial resolution. The specific application under investigation here is to characterize deformable mirrors (DM) employed in aero-optics. The developed instrument was shown capable of contouring a deformable mirror with extremely high-resolution at frequencies exceeding 40 kHz. We demonstrated two different procedures for characterizing the mechanical response of a surface to a wide variety of input forces, one that employs a high-speed digital camera and a second that employs a low-speed, low-cost digital camera. The latter is achieved by cycling the DM actuators with a step input, producing a transient that typically lasts up to a millisecond before reaching equilibrium. Recordings are made at increasing times after the DM initiation from zero to equilibrium to analyze the transient. Because the wave functions are stored and reconstructable, they can be compared with each other to produce contours including absolute, difference, and velocity. High-speed digital cameras recorded the wave functions during a single transient at rates exceeding 40 kHz. We concluded that either method is fully capable of characterizing a typical DM to the extent required by aero-optical engineers.
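For double-exposure holographic contouring at near-normal illumination and viewing, the out-of-plane surface displacement follows from the reconstructed phase difference as dz = Δφ·λ/(4π), which is the origin of the fractional-wavelength precision. A minimal sketch assuming a HeNe wavelength; the paper's actual laser is not specified here:

```python
import math

# Hedged sketch: out-of-plane displacement from interferometric phase
# difference, dz = delta_phi * lambda / (4*pi), valid for near-normal
# illumination and viewing. The 632.8 nm wavelength is an assumption.
def surface_displacement(delta_phi_rad, wavelength_m):
    return delta_phi_rad * wavelength_m / (4.0 * math.pi)

dz = surface_displacement(math.pi, 632.8e-9)  # a half-cycle phase step -> lambda/4
```

Differencing stored wave functions from successive recordings in this way yields the absolute, difference, and velocity contours mentioned in the abstract.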
Imaging of breast cancer with mid- and long-wave infrared camera.
Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R
2008-01-01
In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed: data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and selectively windowed regional analysis was carried out on each image. The analysis exploits the angiogenesis and nitric oxide production of cancer tissue, which cause vasomotor and cardiogenic frequency differences relative to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.
NASA Technical Reports Server (NTRS)
Reece, J. S.; Marsh, J.
1973-01-01
Simultaneous observations of the GEOS-I and II flashing lamps by the NASA MOTS and SPEOPT cameras on the North American Datum (NAD) were analyzed using geometrical techniques to provide an adjustment of the station coordinates. Two separate adjustments were obtained. An optical data only solution was computed in which the solution scale was provided by the Rosman-Mojave distance obtained from a dynamic station solution. In a second adjustment, scaling was provided by processing simultaneous laser ranging data from Greenbelt and Wallops Island in a combined optical-laser solution. Comparisons of these results with previous GSFC dynamical solutions indicate an rms agreement on the order of 4 meters or better in each coordinate. Comparison with a detailed gravimetric geoid of North America yields agreement of 3 meters or better for mainland U.S. stations and 7 and 3 meters, respectively, for Bermuda and Puerto Rico.
Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera
NASA Technical Reports Server (NTRS)
Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.
1988-01-01
The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 A wavelength range.
NASA Astrophysics Data System (ADS)
Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel
2018-02-01
In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary DMD is positioned in an image plane of the detection optical path, acting as a dynamic array of reflective confocal pinholes imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.
SWIR, VIS and LWIR observer performance against handheld objects: a comparison
NASA Astrophysics Data System (ADS)
Adomeit, Uwe
2016-10-01
The short wave infrared (SWIR) spectral range has attracted interest for day- and night-time military and security applications in recent years. This necessitates performance assessment of SWIR imaging equipment in comparison with equipment operating in the visual (VIS) and thermal infrared (LWIR) spectral ranges. In the military context, (nominal) range is the main performance criterion. Discriminating friend from foe is one of the main tasks in today's asymmetric scenarios, so personnel, human activities and handheld objects are used as targets to estimate ranges. The latter were also used in an experiment at Fraunhofer IOSB to get a first impression of how SWIR performs compared to VIS and LWIR. A human consecutively carrying one of nine different civil or military objects was recorded from five different ranges in the three spectral ranges. For the visual spectral range a 3-chip color camera was used; the SWIR range was covered by an InGaAs camera and the LWIR by an uncooled bolometer. It was ascertained that the nominal spatial resolution of the three cameras was of the same magnitude, in order to enable an unbiased assessment. Daytime conditions were selected for data acquisition to separate observer performance from illumination conditions and, to some extent, camera performance. From the recorded data, a perception experiment was prepared. It was conducted as a nine-alternative forced-choice, unlimited-observation-time test with 15 observers participating. Before the experiment, the observers were trained on close-range target data. The outcome of the experiment was the average probability of identification versus range between camera and target. The comparison of the range performance achieved in the three spectral bands gave a mixed result. On the one hand, a ranking VIS / SWIR / LWIR in decreasing order can be seen in the data; on the other hand, only the difference between VIS and the other bands is statistically significant.
Additionally, it was not possible to explain the outcome with typical contrast metrics. Probably form is more important than contrast here, as long as the contrast is generally high enough. These results were unexpected and need further exploration.
The Viking parachute qualification test technique.
NASA Technical Reports Server (NTRS)
Raper, J. L.; Lundstrom, R. R.; Michel, F. C.
1973-01-01
The parachute system for NASA's Viking '75 Mars lander was flight qualified in four high-altitude flight tests at the White Sands Missile range (WSMR). A balloon system lifted a full-scale simulated Viking spacecraft to an altitude where a varying number of rocket motors were used to propel the high drag, lifting test vehicle to test conditions which would simulate the range of entry conditions expected at Mars. A ground-commanded cold gas pointing system located on the balloon system provided powered vehicle azimuth control to insure that the flight trajectory remained within the WSMR boundaries. A unique ground-based computer-radar system was employed to monitor inflight performance of the powered vehicle and insure that command ignition of the parachute mortar occurred at the required test conditions of Mach number and dynamic pressure. Performance data were obtained from cameras, telemetry, and radar.
An improved method to estimate reflectance parameters for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro
2008-01-01
Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
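The two-stage least-squares procedure of the second method can be sketched on synthetic data. Everything here, from the sampling geometry to the ground-truth parameters, is an illustrative assumption; only the overall flow (fit diffuse first, subtract, then fit the log-transformed Torrance-Sparrow lobe) follows the abstract's description:

```python
import numpy as np

# Synthetic geometry: incidence angle theta_i, half-angle alpha between the
# surface normal and the halfway vector, fixed viewing angle theta_r (assumed).
theta_i = np.linspace(0.0, 1.2, 50)           # radians
alpha = np.abs(theta_i - 0.6)                 # toy half-angle profile
theta_r = np.full_like(theta_i, 0.3)

# Ground-truth parameters, used only to generate the synthetic observations
kd_true, ks_true, sigma_true = 0.7, 0.5, 0.15
diffuse = kd_true * np.cos(theta_i)                                   # Lambert
specular = ks_true / np.cos(theta_r) * np.exp(-alpha**2 / (2 * sigma_true**2))
I = diffuse + specular                                                # observed

# Stage 1: treat all values as diffuse-only and fit kd by least squares.
A = np.cos(theta_i)[:, None]
kd_est = np.linalg.lstsq(A, I, rcond=None)[0][0]

# Stage 2: subtract the fitted diffuse part, keep positive residuals, and fit
# the log-transformed Torrance-Sparrow lobe:
#   ln(Is * cos(theta_r)) = ln(ks) - alpha^2 / (2 sigma^2), linear in alpha^2.
resid = I - kd_est * np.cos(theta_i)
mask = resid > 1e-6
y = np.log(resid[mask] * np.cos(theta_r[mask]))
X = np.column_stack([np.ones(mask.sum()), alpha[mask] ** 2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
ks_est = np.exp(coef[0])                      # gloss intensity
sigma_est = np.sqrt(-1.0 / (2 * coef[1]))     # surface roughness
print(kd_est, ks_est, sigma_est)
```

Note that the stage-1 diffuse fit is biased upward by the specular lobe, which the paper's method accepts as an approximation; the sketch reproduces that behavior rather than correcting it.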
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
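One common way to produce a backward-compatible LDR base layer is a global tone-mapping operator; carrying a residual (ratio) layer alongside it lets an HDR-aware decoder reconstruct the original while legacy decoders see only the LDR image. A minimal sketch using the Reinhard global operator (the operator choice, key value, and image data are assumptions for illustration, not the paper's algorithm):

```python
import numpy as np

def tonemap_reinhard(lum, a=0.18):
    """Global Reinhard operator: scale by the log-average luminance (the
    'key' of the scene), then compress into [0, 1)."""
    lw = np.exp(np.mean(np.log(1e-6 + lum)))   # log-average luminance
    l = a * lum / lw
    return l / (1.0 + l)

# Hypothetical HDR luminance spanning ~5 orders of magnitude
rng = np.random.default_rng(1)
hdr = 10 ** rng.uniform(-2, 3, size=(64, 64))
ldr = tonemap_reinhard(hdr)

# Backward-compatible idea: code `ldr` as an ordinary JPEG and carry a
# per-pixel ratio (residual) layer so an HDR decoder can invert the mapping.
ratio = hdr / np.maximum(ldr, 1e-6)
reconstructed = ldr * ratio
print(ldr.min(), ldr.max())
```

In a real codec the ratio layer would itself be quantized and compressed, so reconstruction would be approximate rather than exact as in this lossless sketch.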
Modeling of a microchannel plate working in pulsed mode
NASA Astrophysics Data System (ADS)
Secroun, Aurelia; Mens, Alain; Segre, Jacques; Assous, Franck; Piault, Emmanuel; Rebuffie, Jean-Claude
1997-05-01
Microchannel plates (MCPs) are used in high-speed cinematography systems such as MCP framing cameras and streak camera readouts. To know the dynamic range or the signal-to-noise ratio available in these devices, a good knowledge of the performance of the MCP is essential. Our simulation focuses on the working mode of the microchannel plate in these systems, namely light-pulsed mode, in which the signal level is relatively high and its duration can be shorter than the time needed to replenish the channel wall; previous papers mainly studied night-vision applications with a weak, continuous, nearly single-electron input signal. Our method also allows the simulation of saturation phenomena due to the large number of electrons involved, whereas the discrete models previously used for simulating pulsed mode might not be properly adapted. We present the choices made in modeling the microchannel, specifically the physical laws, the secondary-emission parameters and the 3D geometry. First results are shown in the last part.
Robust range estimation with a monocular camera for vision-based forward collision warning system.
Park, Ki-Yeong; Hwang, Sun-Young
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
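The geometric core of this approach can be sketched with a pinhole model: a vehicle of known real width W at range Z images to width w = fW/Z pixels, and its bottom row sits fH/Z pixels below the horizon for a camera at height H, so every detection votes for a virtual-horizon row y_h = y_b - (H/W)w. All numeric values below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

f_px = 800.0    # focal length in pixels (assumed calibration)
H_cam = 1.4     # camera height above the road, metres (assumed)
W_veh = 1.8     # nominal vehicle width, metres (assumed prior)

# Simulate detections of vehicles at known ranges, true horizon at row 240.
true_horizon = 240.0
ranges = np.array([15.0, 25.0, 40.0, 60.0])
y_b = true_horizon + f_px * H_cam / ranges   # bottom row of each vehicle
w = f_px * W_veh / ranges                    # image width of each vehicle

# Virtual-horizon estimate from size and position of the detected vehicles,
# robustly combined with a median (the run-time idea, sketched).
y_h = np.median(y_b - (H_cam / W_veh) * w)

# Range recovered from the estimated horizon, independent of pitch angle.
est = f_px * H_cam / (y_b - y_h)
print(y_h, est)
```

Because the horizon is re-estimated from the detections themselves, the range formula stays valid when vehicle motion or road inclination shifts the camera pitch, which is the robustness property the paper targets.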
[3D visualization and analysis of vocal fold dynamics].
Bohr, C; Döllinger, M; Kniesburges, S; Traxdorf, M
2016-04-01
Visual investigation methods of the larynx mainly allow for the two-dimensional presentation of the three-dimensional structures of the vocal fold dynamics. The vertical component of the vocal fold dynamics is often neglected, yielding a loss of information. The latest studies show that the vertical dynamic components are in the range of the medio-lateral dynamics and play a significant role in the phonation process. This work presents a method for future 3D reconstruction and visualization of endoscopically recorded vocal fold dynamics. The setup contains a high-speed camera (HSC) and a laser projection system (LPS). The LPS projects a regular grid on the vocal fold surfaces and, in combination with the HSC, allows a three-dimensional reconstruction of the vocal fold surface. Hence, quantitative information on displacements and velocities can be provided. The applicability of the method is presented for one ex-vivo human larynx, one ex-vivo porcine larynx and one synthetic silicone larynx. The setup introduced allows the reconstruction of the entire visible vocal fold surfaces for each oscillation state. This enables a detailed analysis of the three-dimensional dynamics (i.e., displacements, velocities, accelerations) of the vocal folds. The next goal is the miniaturization of the LPS to allow clinical in-vivo analysis in humans. We anticipate new insight into dependencies between 3D dynamic behavior and the quality of the acoustic outcome for healthy and disordered phonation.
Application of phase matching autofocus in airborne long-range oblique photography camera
NASA Astrophysics Data System (ADS)
Petrushevsky, Vladimir; Guberman, Asaf
2014-06-01
The Condor2 long-range oblique photography (LOROP) camera is mounted in an aerodynamically shaped pod carried by a fast jet aircraft. The large-aperture, dual-band (EO/MWIR) camera is equipped with TDI focal plane arrays and provides high-resolution imagery of extended areas at long stand-off ranges, day and night. The front Ritchey-Chretien optics are made of highly stable materials. However, the camera temperature varies considerably in flight conditions. Moreover, the composite-material structure of the reflective objective undergoes gradual dehumidification in the dry nitrogen atmosphere inside the pod, causing a small decrease in the structure's length. The temperature and humidity effects change the distance between the mirrors by just a few microns. The distance change is small, but it nevertheless alters the camera's infinity focus setpoint significantly, especially in the EO band. To realize the optics' resolution potential, optimal focus must be maintained continuously. In-flight best-focus calibration and temperature-based open-loop focus control give mostly satisfactory performance. To obtain even better focusing precision, a closed-loop phase-matching autofocus method was developed for the camera. The method makes use of an existing beam-sharer prism FPA arrangement, in which an aperture partition exists inherently in the area of overlap between adjacent detectors. The defocus is proportional to the image phase shift in the area of overlap. Low-pass filtering of the raw defocus estimate reduces random errors related to variable scene content. The closed-loop control converges robustly to the precise focus position. The algorithm uses the temperature- and range-based focus prediction as an initial guess for the closed-loop phase-matching control. The autofocus algorithm achieves excellent results and works robustly in various conditions of scene illumination and contrast.
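The phase-matching idea rests on measuring the lateral shift between the two sub-aperture views of the same scene strip in the detector overlap region; defocus is then proportional to that shift. A minimal sketch of the shift measurement via FFT cross-correlation (the scene signal, shift, and defocus gain are illustrative assumptions, not Condor2 parameters):

```python
import numpy as np

def estimate_shift(a, b):
    """Integer-pixel shift of `a` relative to `b` via circular cross-correlation."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    xc = np.fft.ifft(A * np.conj(B)).real   # xc[m] = sum_n a[n] * b[n - m]
    k = int(np.argmax(xc))
    n = len(a)
    return k if k <= n // 2 else k - n      # wrap to a signed shift

# Two sub-aperture views of the same scene strip (synthetic, non-periodic mix)
x = np.linspace(0, 6 * np.pi, 256)
scene = np.sin(x) + 0.5 * np.sin(3.1 * x)
view_a = scene
view_b = np.roll(scene, 4)                  # defocus shows up as a 4-pixel shift

measured = estimate_shift(view_b, view_a)
defocus = 0.01 * measured                   # proportionality gain is assumed
print(measured, defocus)
```

In the real system this raw estimate would be low-pass filtered over frames, as the abstract describes, before driving the closed-loop focus control.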
Design principles and applications of a cooled CCD camera for electron microscopy.
Faruqi, A R
1998-01-01
Cooled CCD cameras offer a number of advantages in recording electron microscope images with CCDs rather than film, including immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity, and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to film, and a great deal of our effort has been spent in designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre optics system to this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator) and optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre-optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two-dimensional crystals of bacteriorhodopsin - from wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two-dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on an on-line basis. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high-resolution images.
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
Free-viewpoint video of human actors using multiple handheld Kinects.
Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian
2013-10-01
We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary
2011-01-01
TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.
Laser Technology in Interplanetary Exploration: The Past and the Future
NASA Technical Reports Server (NTRS)
Smith, David E.
2000-01-01
Laser technology has been used in planetary exploration for many years, but it has only been in the last decade that laser altimeters and ranging systems have been selected as flight instruments alongside cameras, spectrometers, magnetometers, etc. Today we have an active laser system operating at Mars and another destined for the asteroid Eros. A few years ago a laser ranging system on the Clementine mission changed much of our thinking about the Moon, and in a few years laser altimeters will be on their way to Mercury, and also to Europa. Along with the increased capabilities and reliability of laser systems has come the realization that precision ranging to the surface of planetary bodies from orbiting spacecraft enables more scientific problems to be addressed, including many associated with planetary rotation, librations, and tides. In addition, new Earth-based laser ranging systems working with similar systems on other planetary bodies in an asynchronous transponder mode will be able to make interplanetary ranging measurements at the few-cm level and will advance our understanding of solar system dynamics and relativistic physics.
Streak camera based SLR receiver for two color atmospheric measurements
NASA Technical Reports Server (NTRS)
Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael
1993-01-01
To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated to a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtosecond/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and uv (355 nm) while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields, each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data is processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using the half-max and centroid type analysis. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. 
Data aggregation using the normal-point approach has provided accurate data fitting and is found to be much more convenient than using the full-rate single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
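The iterative three-point running-average filter used on the waveform data is simple to sketch. The synthetic pulse, noise level, and iteration count below are illustrative assumptions, not the paper's measurements; the centroid step mirrors the centroid-type pulse-position analysis the abstract mentions:

```python
import numpy as np

def smooth(waveform, iterations=20):
    """Iterative three-point running average (endpoints held fixed)."""
    w = np.asarray(waveform, dtype=float).copy()
    for _ in range(iterations):
        w[1:-1] = (w[:-2] + w[1:-1] + w[2:]) / 3.0
    return w

# Synthetic noisy pulse on a picosecond-like time axis
t = np.linspace(-50.0, 50.0, 201)
rng = np.random.default_rng(0)
pulse = np.exp(-t**2 / (2 * 8.0**2))
noisy = pulse + 0.05 * rng.standard_normal(t.size)

clean = smooth(noisy, iterations=30)        # 10-30 iterations, per the paper

# Centroid-type estimate of the pulse position
centroid = np.sum(t * clean) / np.sum(clean)
print(centroid)
```

Repeated application of the three-point kernel approaches Gaussian smoothing, so the iteration count trades residual high-frequency structure against pulse broadening.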
Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion
NASA Astrophysics Data System (ADS)
Blomquist, Mats; Wernersson, Ake V.
1999-11-01
When range cameras are used for analyzing irregular material on a conveyor belt, there will be complications such as missing segments caused by occlusion. A number of range discontinuities will also be present. Within a stochastic-geometry framework, conditions are found for the cases in which range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is not larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
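For spherical pellets cut by the light plane at a uniformly random offset from the centre, the expected chord length relates to the diameter by E[c] = (pi/4) D, which yields a simple estimator for the average diameter and its variance from the measured chords. A sketch under these idealized assumptions (the pellet diameter and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
D_true = 12.0                        # pellet diameter in mm (assumed)
R = D_true / 2.0

# Offset of the laser plane from each pellet's centre, uniform in [-R, R]
h = rng.uniform(-R, R, size=20000)
chords = 2.0 * np.sqrt(R**2 - h**2)  # chord seen in the light plane

# Since E[chord] = (pi/4) * D for uniform offsets, invert for the diameter.
D_est = (4.0 / np.pi) * chords.mean()
var = chords.var(ddof=1)             # chord variance, a spread measure
print(D_est, var)
```

The paper's two-plane variant extracts a second chord per pellet, which tightens this estimate without changing the underlying geometric relation.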
Ando, Koki; Yamaguchi, Mitsutaka; Yamamoto, Seiichi; Toshito, Toshiyuki; Kawachi, Naoki
2017-06-21
Imaging of the secondary electron bremsstrahlung x-rays emitted during proton irradiation is a possible method for measuring the proton beam distribution in a phantom. However, it is not clear whether the method can be used for range estimation of protons. For this purpose, we developed a low-energy x-ray camera and conducted imaging of the bremsstrahlung x-rays produced during irradiation of proton beams. We used a 20 mm × 20 mm × 1 mm finely grooved GAGG scintillator that was optically coupled to a one-inch square high quantum efficiency (HQE)-type position-sensitive photomultiplier tube to form an imaging detector. The imaging detector was encased in a 2 cm-thick tungsten container, and a pinhole collimator was attached to its camera head. After the performance of the camera was evaluated, secondary electron bremsstrahlung x-ray imaging was conducted during irradiation of the proton beams for three different proton energies, and the results were compared with Monte Carlo simulation as well as with calculated values. The system spatial resolution and sensitivity of the developed x-ray camera with a 1.5 mm-diameter pinhole collimator were estimated to be 32 mm FWHM and 5.2 × 10 -7 for ~35 keV x-ray photons at 100 cm from the collimator surface, respectively. We could image the proton beam tracks by measuring the secondary electron bremsstrahlung x-rays during irradiation of the proton beams, and the ranges for different proton energies could be estimated from the images. The measured ranges from the images matched the Monte Carlo simulation well and were slightly smaller than the calculated values. We confirmed that imaging of the secondary electron bremsstrahlung x-rays emitted during proton irradiation with the developed x-ray camera has the potential to be a new tool for proton range estimation.
Design and fabrication of an autonomous rendezvous and docking sensor using off-the-shelf hardware
NASA Technical Reports Server (NTRS)
Grimm, Gary E.; Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
1991-01-01
NASA Marshall Space Flight Center (MSFC) has developed and tested an engineering model of an automated rendezvous and docking sensor system composed of a video camera ringed with laser diodes at two wavelengths and a standard remote manipulator system target that has been modified with retro-reflective tape and 830 and 780 nm optical filters. TRW has provided additional engineering analysis, design, and manufacturing support, resulting in a robust, low-cost, automated rendezvous and docking sensor design. We have addressed the issue of space qualification using off-the-shelf hardware components. We have also addressed the performance problems of increased signal-to-noise ratio, increased range, increased frame rate, graceful degradation through component redundancy, and improved range calibration. Next year, we will build a breadboard of this sensor. The phenomenology of the background scene of a target vehicle as viewed against earth and space backgrounds under various lighting conditions will be simulated using the TRW Dynamic Scene Generator Facility (DSGF). Solar illumination angles of the target vehicle and candidate docking target ranging from eclipse to full sun will be explored. The sensor will be transportable for testing at the MSFC Flight Robotics Laboratory (EB24) using the Dynamic Overhead Telerobotic Simulator (DOTS).
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji
2016-04-01
Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system is connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussian (DOG) method and the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method and found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility for repeated measurements at the same energy is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
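The difference-of-Gaussian edge detection used here for range determination can be sketched on a 1-D light-output profile. The sigmas, the toy falling-step profile, and the argmin criterion below are illustrative assumptions, not the authors' calibrated procedure:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dog_range(profile, sigma1=2.0, sigma2=4.0):
    """Locate the distal edge of a 1-D depth-light profile with a
    difference-of-Gaussian (DOG) filter. The sharp light falloff at
    the end of the ion range produces a strong negative DOG response;
    its minimum is taken as the range index (in pixels)."""
    p = np.asarray(profile, dtype=float)
    dog = (np.convolve(p, gaussian_kernel(sigma1), mode="same")
           - np.convolve(p, gaussian_kernel(sigma2), mode="same"))
    return int(np.argmin(dog))

# A toy profile: uniform light output that falls off at pixel 50.
profile = [1.0] * 50 + [0.0] * 50
print(dog_range(profile))  # near 50 (a few pixels beyond the edge)
```

Because the edge is located by a filter response rather than a raw threshold, the result is less sensitive to noise in the scintillation image, which is the advantage the abstract reports for the DOG method.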
Lawrence L.C. Jones; Martin G. Raphael
1993-01-01
Inexpensive camera systems have been successfully used to detect the occurrence of martens, fishers, and other wildlife species. The use of cameras is becoming widespread, and we give suggestions for standardizing techniques so that comparisons of data can occur across the geographic range of the target species. Details are given on equipment needs, setting up the...
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR; 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and for the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents a SWIR objective optical design and optimization that mechanically fits the visible objective design but uses different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so as to adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.
NASA Astrophysics Data System (ADS)
Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.
2017-02-01
In the development of new near-infrared (NIR) fluorescence dyes for image-guided surgery, there is a need for NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to present clinical systems, which are optimized only for ICG. To test alternative camera systems, a setup was developed that mimics the fluorescence light in a tissue phantom so that sensitivity and resolution can be measured. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform spot of controllable intensity (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue of controlled thickness could be placed on the spot to mimic a fluorescent 'cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image-guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced-CCD night vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue; however, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for the development and quality control of new NIR fluorescence guided surgery equipment.
COUGAR: a liquid nitrogen cooled InGaAs camera for astronomy and electro-luminescence
NASA Astrophysics Data System (ADS)
Van Bogget, Urbain; Vervenne, Vincent; Vinella, Rosa Maria; van der Zanden, Koen; Merken, Patrick; Vermeiren, Jan
2014-06-01
A SWIR FPA with 640 × 512 pixels, 20 μm pitch, and InGaAs detectors was designed and manufactured for electroluminescence characterization and astronomical applications in the 0.9-1.55 μm range. The FPA is mounted in a liquid nitrogen dewar and operated by low-noise front-end electronics. One of the biggest problems in designing sensors and cameras for electroluminescence measurements is auto-illumination of the detectors by the readout circuit. Besides proper shielding of the detectors, the ROIC must be optimized for minimal electrical activity during the integration of the very weak signals coming from the circuit under test. For this reason a source-follower-per-detector (SFD) architecture (as in the Hawaii sensor) was selected, resulting in background-limited performance of the detector. The pixel has a (somewhat arbitrary) full-well capacity of 400,000 e- and a sensitivity of 2.17 μV/e-. The dark signal is approximately 1 e-/pixel/s, and with appropriate Fowler sampling the dark noise falls below 5 e- rms. The power consumption of the circuit is limited to 2 mW, allowing more than 24 hours of operation on less than 1 L of liquid nitrogen. The FPA is equipped with 4 outputs (with optional readout on a single channel) and is capable of 3 frames per second. Due to the non-destructive readout it is possible to determine dynamically the optimal integration time for each observation. The COUGAR camera is equipped with ultra-low-noise power supply and bias lines; the electronics also contain a 24-bit AD converter to fully exploit the sensitivity of the FPA and the camera.
A Microsoft Kinect-Based Point-of-Care Gait Assessment Framework for Multiple Sclerosis Patients.
Gholami, Farnood; Trojan, Daria A; Kovecses, Jozsef; Haddad, Wassim M; Gholami, Behnood
2017-09-01
Gait impairment is a prevalent and important difficulty for patients with multiple sclerosis (MS), a common neurological disorder. An easy-to-use tool to objectively evaluate gait in MS patients in a clinical setting can assist clinicians in performing an objective assessment. The overall objective of this study is to develop a framework to quantify gait abnormalities in MS patients using the Microsoft Kinect for Windows sensor, an inexpensive, easy-to-use, portable camera. Specifically, we aim to evaluate its feasibility for use in a clinical setting, assess its reliability, evaluate the validity of the gait indices obtained, and evaluate a novel set of gait indices based on the concept of dynamic time warping. In this study, ten ambulatory MS patients and ten age- and sex-matched normal controls were studied at one session in a clinical setting, with gait assessed using a Kinect camera. The Expanded Disability Status Scale (EDSS) clinical ambulation score was calculated for the MS subjects, and patients completed the Multiple Sclerosis Walking Scale (MSWS). Based on this study, we established the potential feasibility of using a Microsoft Kinect camera in a clinical setting. Seven of the eight gait indices obtained using the proposed method were reliable, with intraclass correlation coefficients ranging from 0.61 to 0.99. All eight MS gait indices were significantly different from those of the controls (p-values less than 0.05). Finally, seven of the eight MS gait indices were correlated with the objective and subjective gait measures (Pearson's correlation coefficients greater than 0.40). This study shows that the Kinect camera is an easy-to-use tool for assessing gait in MS patients in a clinical setting.
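The dynamic-time-warping idea behind such gait indices can be sketched in a few lines. The sequences, the absolute-difference local cost, and the use of the final cumulative cost as an index are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i, j] holds the minimal cumulative cost of aligning a[:i]
    with b[:j], using absolute difference as the local cost.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # step both
    return float(cost[n, m])

# Two joint-angle traces with the same shape but different timing
# align perfectly, so their DTW distance is zero:
print(dtw_distance([0.0, 1.0, 2.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0, 2.0, 1.0, 0.0]))  # 0.0
```

The appeal for gait analysis is that DTW compares the shape of a patient's movement trace against a reference while tolerating differences in walking speed, so timing variability alone does not inflate the abnormality score.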
2016-01-01
Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing approach for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and distribution of quantitative diagnostic and environmental tests. PMID:26900709
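The ratiometric principle can be illustrated with a toy sketch: if the indicator dye's color change shifts the balance between two color channels, their ratio cancels any overall scaling from lighting or exposure. The green-vs-red channel choice and function name are hypothetical, not the paper's actual pipeline:

```python
import numpy as np

def ratiometric_readout(rgb):
    """Green-to-red channel ratio as an illumination-robust readout.

    Multiplying all channels by the same lighting factor leaves the
    ratio unchanged, so the readout depends on the dye's color state
    rather than on brightness or camera hardware.
    """
    rgb = np.asarray(rgb, dtype=float)
    return rgb[..., 1] / np.maximum(rgb[..., 0], 1e-9)

# The same pixel under dim and bright lighting gives the same ratio:
print(ratiometric_readout([10.0, 20.0, 5.0]),
      ratiometric_readout([20.0, 40.0, 10.0]))  # 2.0 2.0
```

A per-pixel ratio like this, thresholded into positive/negative wells, is also what makes the readout usable by colorblind users: the decision rests on a number, not on perceived hue.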
Investigation of high power impulse magnetron sputtering (HIPIMS) discharge using fast ICCD camera
NASA Astrophysics Data System (ADS)
Hecimovic, Ante
2012-10-01
High power impulse magnetron sputtering (HIPIMS) combines impulse glow discharges at power levels up to the MW range with conventional magnetron cathodes to achieve a highly ionised sputtered flux. The dynamics of the HIPIMS discharge were investigated using a fast intensified charge-coupled device (ICCD) camera. In the first experiment the HIPIMS plasma was recorded from the side, with the goal of analysing the plasma intensity using Abel inversion to obtain emissivity maps of the plasma species. The resulting emissivity maps provide information on the spatial distribution of Ar and sputtered material and on the evolution of the plasma chemistry above the cathode. In the second experiment the plasma emission was recorded with the camera facing the target. The images show that the HIPIMS plasma develops drift-wave-type instabilities characterized by well-defined regions of high and low plasma emissivity along the racetrack of the magnetron. The instabilities cause periodic shifts in the floating potential. The structures rotate in the E×B direction at velocities of 10 km/s and frequencies up to 200 kHz. The high-emissivity regions comprise Ar and metal ion emission with strong depletion of Ar and metal neutral emission. A detailed analysis of the temporal evolution of the saturated instabilities using four consecutively triggered fast ICCD cameras is presented. Furthermore, variation of the working gas pressure and discharge current showed that the shape and speed of the instability strongly depend on the combination of working gas and target material. In order to better understand the mechanism of the instability, different optical interference band-pass filters (for metal and gas atom and ion lines) were used to observe the spatial distribution of each species within the instability.
Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging
NASA Astrophysics Data System (ADS)
Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.
2006-12-01
In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultraviolet (UV) region, where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km, with crisp detection up to approximately 16 km. Camera set-up time in the field is 5-10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume through sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a correlation spectrometer (COSPEC) agreed to within 25 percent. Experiments at the other sites were equally encouraging and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to the camera's rapid deployment and data processing capabilities, relatively low cost, and the improved interpretation afforded by synoptic plume coverage from a range of distances.
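The flux computation implied above (an integrated cross-plume burden times the plume speed derived from sequential images) can be sketched as follows. The function name, units, and single-transect simplification are assumptions for illustration, not the authors' processing chain:

```python
def so2_flux(column_densities_kg_m2, pixel_size_m, plume_speed_m_s):
    """Estimate an SO2 mass flux (kg/s) from one calibrated image.

    column_densities_kg_m2: SO2 column densities sampled along a
    transect perpendicular to plume transport (one value per pixel).
    Integrating across the transect gives a cross-plume burden in
    kg/m; multiplying by the plume speed yields kg/s.
    """
    burden = sum(cd * pixel_size_m for cd in column_densities_kg_m2)  # kg/m
    return burden * plume_speed_m_s  # kg/s

# 10 pixels of 0.001 kg/m^2 at 2 m per pixel, plume moving at 5 m/s:
print(so2_flux([0.001] * 10, 2.0, 5.0))  # ≈ 0.1 kg/s
```

The camera's advantage over a traversing spectrometer is that both inputs come from the same instrument: the transect from one frame, the speed from feature tracking between frames.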
Monitoring and Modeling the Impact of Grazers Using Visual, Remote and Traditional Field Techniques
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Marshall, I. W.; Rose, R. J.
2009-04-01
The relationship between wild and domestic animals and the landscape they graze is important to soil erosion studies, both because grazers strongly influence vegetation cover (a key control on the rate of overland flow runoff) and because they contribute to sediment transport directly via carriage and indirectly by exposing fresh soil through trampling, burrowing, and excavating. Quantifying the impacts of these effects on soil erosion, and their dependence on grazing intensity, in complex semi-natural habitats has proved difficult. This is due to a lack of manpower to collect sufficient data and to weak standardization of data collection between observers. The advent of cheaper and more sophisticated digital camera technology and GPS tracking devices has led to an increase in the amount of habitat monitoring information being collected. We report on the use of automated trail cameras to continuously capture images of grazer (sheep, rabbit, deer) activity in a variety of habitats at the Moor House nature reserve in northern England. As well as grazer activity, these cameras give valuable information on key climatic soil erosion factors such as snow, rain, and wind, and on plant growth, and thus allow the importance of a range of grazer activities and the grazing intensity to be estimated. GPS collars and more well-established survey methods (erosion monitoring, dung counting, and vegetation surveys) are being used to generate a detailed representation of land usage and to plan camera siting. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce data processing time and increase focus on important subsets of the collected data.
We also present a land usage model that estimates grazing intensity, grazer behaviours and their impact on soil coverage at sites where cameras have not been deployed, based on generalising from camera sites to other sites with similar morphology and ecology, where the GPS tracks indicate similar levels of grazer activity. This is ongoing research with results continually feeding back to the data collection regimes in terms of camera placement. This all makes a valuable contribution to the debate about the dynamics of grazing behaviour and its impact on soil erosion.
NASA Technical Reports Server (NTRS)
Wood, E. H.
1972-01-01
Developments in the following areas are discussed: television camera in dynamic angiography, dynamic computer generated displays for study of the human left ventricle, and status report on the work statement for the sixth year. A list of publications for the period 1 October 1971 to 1 October 1972 is included.
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min
2015-03-01
As new technologies such as high dynamic range cameras, AMOLED, and high-resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality on mobile devices. Tone mapping (TM) is a popular technique for enhancing visual quality. However, the traditional implementation of tone mapping is limited to pixel value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency-based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass-filtered (LPF) and high-pass-filtered (HPF) bands. A tone mapping function is applied to the LPF band to improve global contrast/brightness, and the HPF band is added back afterwards to preserve local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components with a saturation function. Localized content adaptation is further improved by dividing the image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated into a camera, video, or display pipeline with a minimal hardware budget.
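The LPF/HPF split at the heart of this framework can be sketched in a few lines. The box blur, the gamma curve, and the coring threshold below are stand-ins for the unspecified filters and TM function, chosen only to make the structure concrete:

```python
import numpy as np

def box_blur(img, radius=2):
    """Separable box blur used as a stand-in low-pass filter (LPF)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = np.pad(img, radius, mode="edge")  # edge-pad so borders stay valid
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def tone_map(intensity, gamma=0.5, coring=0.01):
    """Spatial-frequency tone mapping on an intensity channel in [0, 1].

    Split into LPF and HPF bands, compress the LPF band with a global
    gamma curve, core the HPF band to avoid boosting noise, then add
    the detail back to preserve local contrast.
    """
    lpf = box_blur(intensity)
    hpf = intensity - lpf
    hpf = np.where(np.abs(hpf) < coring, 0.0, hpf)    # coring function
    mapped = np.power(np.clip(lpf, 0.0, 1.0), gamma)  # global TM curve on LPF
    return np.clip(mapped + hpf, 0.0, 1.0)
```

Because the gamma curve acts only on the blurred band, a flat mid-gray patch is brightened (0.25 maps to 0.5 here) while fine detail rides on top uncompressed, which is exactly the local-sharpness benefit the abstract claims over value-to-value mapping.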
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high... hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system... flexibility; large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. 
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
Variable Shadow Screens for Imaging Optical Devices
NASA Technical Reports Server (NTRS)
Lu, Ed; Chretien, Jean L.
2004-01-01
Variable shadow screens have been proposed for reducing the apparent brightnesses of very bright light sources relative to other sources within the fields of view of diverse imaging optical devices, including video and film cameras and optical devices for imaging directly into the human eye. In other words, variable shadow screens would increase the effective dynamic ranges of such devices. Traditionally, imaging sensors are protected against excessive brightness by use of dark filters and/or reduction of iris diameters. These traditional means do not increase dynamic range; they reduce the ability to view or image dimmer features of an image because they reduce the brightness of all parts of an image by the same factor. On the other hand, a variable shadow screen would darken only the excessively bright parts of an image. For example, dim objects in a field of view that included the setting Sun or bright headlights could be seen more readily in a picture taken through a variable shadow screen than in a picture of the same scene taken through a dark filter or a narrowed iris. The figure depicts one of many potential variations of the basic concept of the variable shadow screen. The shadow screen would be a normally transparent liquid-crystal matrix placed in front of a focal-plane array of photodetectors in a charge-coupled-device video camera. The shadow screen would be placed far enough from the focal plane so as not to disrupt the focal-plane image to an unacceptable degree, yet close enough so that the out-of-focus shadows cast by the screen would still be effective in darkening the brightest parts of the image. The image detected by the photodetector array itself would be used as feedback to drive the variable shadow screen: The video output of the camera would be processed by suitable analog and/or digital electronic circuitry to generate a negative partial version of the image to be impressed on the shadow screen. 
The parts of the shadow screen in front of those parts of the image with brightness below a specified threshold would be left transparent; the parts of the shadow screen in front of those parts of the image where the brightness exceeded the threshold would be darkened by an amount that would increase with the excess above the threshold.
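The feedback rule in this description, transparent below the threshold and progressively darker above it, can be sketched per pixel. The threshold value, the falloff law, and the function name are illustrative assumptions, not the proposed system's actual transfer function:

```python
import numpy as np

def screen_transmission(image, threshold=0.8, strength=2.0):
    """Per-pixel transmission for a variable shadow screen.

    Pixels at or below the brightness threshold leave the screen
    fully transparent (transmission 1.0); above it, transmission
    falls off monotonically with the excess brightness, so only the
    hottest parts of the scene are darkened.
    """
    excess = np.clip(np.asarray(image, dtype=float) - threshold, 0.0, None)
    return 1.0 / (1.0 + strength * excess)
```

In a closed loop, this mask would be recomputed each frame from the camera's own video output and driven onto the liquid-crystal matrix; the out-of-focus blur of the screen then softens the mask edges on the focal plane.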
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand controllers, one or more television cameras, and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator with information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display on each monitor; and whether the controller coordinates for the robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand controllers, one or more television cameras, and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator with information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display on each monitor; and whether the controller coordinates for the robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan
2015-04-01
Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we describe the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
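The pixel-to-meter conversion that this calibration enables can be illustrated with a simple pinhole-model sketch. The nominal pedestrian height and the focal length below are illustrative assumptions; the paper's full method also estimates tilt angle and camera height, which are omitted here.

```python
# Pinhole-model sketch: an upright person of real-world height H, imaged
# h_px pixels tall with focal length f_px (in pixels), stands at distance
# Z = f_px * H / h_px. The 1.7 m height is a nominal assumption.

def distance_from_pedestrian(h_pixels: float, f_pixels: float,
                             person_height_m: float = 1.7) -> float:
    """Camera-to-person distance from the image height of a pedestrian."""
    if h_pixels <= 0:
        raise ValueError("pedestrian image height must be positive")
    return f_pixels * person_height_m / h_pixels

# a 1.7 m person imaged 170 px tall with f = 1000 px is 10 m away
print(distance_from_pedestrian(170.0, 1000.0))  # -> 10.0
```

Inverting the same relation per camera is what makes the meters-to-pixels conversion mentioned in the abstract possible.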
NASA Astrophysics Data System (ADS)
Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.
2016-02-01
Near infrared (NIR) dynamic scene projection systems are used to perform hardware-in-the-loop (HWIL) testing of a unit under test operating in the NIR band. The common and complex requirement of a class of these units is a dynamic scene that is spatio-temporally variant. In this paper we apply and investigate active external modulation of a NIR laser over different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude, by pulse width modulation (PWM) of the mirror elements, as well as the spatio-temporal pattern. Desired modulation and coding of highly stable, high-power visible (red laser at 640 nm) and NIR (diode laser at 976 nm) sources were achieved using combinations of different DMD-based optical masks. These versatile spatial active coding strategies, for both low and high frequencies in the kHz range, were generated by our system for irradiance of different targets and recorded using VIS-NIR fast cameras. The temporally modulated laser pulse traces were measured using an array of fast-response photodetectors. Finally, using a high-resolution spectrometer, we evaluated the NIR dynamic scene projection system response in terms of preserving the wavelength and band spread of the NIR source after projection.
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flame is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Solid-state framing camera with multiple time frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, K. L.; Stewart, R. E.; Steele, P. T.
2013-10-07
A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
A visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile at various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see the first visible light when the beam circulated the ring on its first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single bunch purity.
Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination.
Goldin, F J; Meehan, B T; Hagen, E C; Wilkins, P R
2010-10-01
A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, image-converting streak camera, and CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refraction lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge wire and dense plasma focus experiments.
Characterization of dynamic droplet impaction and deposit formation on leaf surfaces
USDA-ARS?s Scientific Manuscript database
Elucidation of droplet dynamic impaction and deposition formation on leaf surfaces would assist to optimize application strategies, improve biological control efficiency, and minimize pesticide waste. A custom-designed system consisting of two high-speed digital cameras and a uniform-size droplet ge...
Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). Firstly, a digital camera is used to image the bridge at rest as a zero image. Then, the camera images the bridge in vibration every three seconds as the successive images. Based on the reference system, PST-TBPM is used to calculate the images to obtain the bridge dynamic deflection in vibration. Results show that the average measurement accuracies are 0.615 pixels and 0.79 pixels in the X and Z directions, and the maximal deflection of the bridge is 7.14 pixels. PST-TBPM is valid in solving the problem that the photographing direction is not perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the bridge's dynamic deflection in vibration, and the deformation trend curves depicted over time can also warn of possible dangers.
Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito
2006-01-01
We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joint, muscle, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match with geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model, and allows dynamic skeletal state analysis from body surface movement data was also developed. We practically applied the developed system in dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using geometrically resized standard 4D human model.
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.; Edwards, John W.; Schuster, David M.
2001-01-01
The techniques used to acquire, reduce, and analyze dynamic deformation measurements of an aeroelastic semispan wind tunnel model are presented. Single-camera, single-view video photogrammetry (also referred to as videogrammetric model deformation, or VMD) was used to determine dynamic aeroelastic deformation of the semispan 'Models for Aeroelastic Validation Research Involving Computation' (MAVRIC) model in the Transonic Dynamics Tunnel at the NASA Langley Research Center. Dynamic deformation was determined from optical retroreflective tape targets at five semispan locations on the wing from root to tip. Digitized video images from a charge-coupled device (CCD) camera were recorded and processed to automatically determine target image plane locations, which were then corrected for sensor, lens, and frame grabber spatial errors. Videogrammetric dynamic data were acquired at a 60-Hz rate for time records of up to 6 seconds during portions of this flutter/Limit Cycle Oscillation (LCO) test at Mach numbers from 0.3 to 0.96. Spectral analysis of the deformation data is used to identify dominant frequencies in the wing motion. The dynamic data will be used to separate aerodynamic and structural effects and to provide time history deflection data for Computational Aeroelasticity code evaluation and validation.
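The spectral-analysis step described above can be sketched as an FFT peak search over a 60 Hz deflection record; the 5 Hz sine below is a synthetic stand-in for the measured wing motion, not MAVRIC data.

```python
import numpy as np

# Identify the dominant frequency in a 6 s deflection record sampled at the
# videogrammetric rate of 60 Hz. The signal is synthetic (5 Hz sine).
fs = 60.0                               # sampling rate (Hz)
t = np.arange(0, 6.0, 1.0 / fs)         # 6 s time record, 360 samples
signal = np.sin(2 * np.pi * 5.0 * t)    # stand-in for wing motion

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # -> 5.0
```

The 6 s record gives a frequency resolution of 1/6 Hz, so a 5 Hz mode falls exactly on a bin here; windowing would be needed for non-integer cycle counts.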
NASA Astrophysics Data System (ADS)
Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team
2018-01-01
A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. Data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge plasma turbulence measurements of smaller areas, can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
Fast camera observations of injected and intrinsic dust in TEXTOR
NASA Astrophysics Data System (ADS)
Shalpegin, A.; Vignitchouk, L.; Erofeev, I.; Brochard, F.; Litnovsky, A.; Bozhenkov, S.; Bykov, I.; den Harder, N.; Sergienko, G.
2015-12-01
Stereoscopic fast camera observations of pre-characterized carbon and tungsten dust injection in TEXTOR are reported, along with the modelling of tungsten particle trajectories with MIGRAINe. Particle tracking analysis of the video data showed significant differences in dust dynamics: while carbon flakes were prone to agglomeration and explosive destruction, spherical tungsten particles followed quasi-inertial trajectories. Although this inertial nature prevented any validation of the force models used in MIGRAINe, comparisons between the experimental and simulated lifetimes provide direct evidence of dust temperature overestimation in dust dynamics codes. Furthermore, wide-view observations of the TEXTOR interior revealed the main production mechanism of intrinsic carbon dust, as well as the location of probable dust remobilization sites.
NASA Astrophysics Data System (ADS)
Wolszczak, Piotr; Łygas, Krystian; Litak, Grzegorz
2018-07-01
This study investigates dynamic responses of a nonlinear vibration energy harvester. The nonlinear mechanical resonator consists of a flexible beam moving like an inverted pendulum between amplitude limiters. It is coupled with a piezoelectric converter, and excited kinematically. Consequently, the mechanical energy input is converted into the electrical power output on the loading resistor included in an electric circuit attached to the piezoelectric electrodes. The curvature of beam mode shapes as well as deflection of the whole beam are examined using a high speed camera. The visual identification results are compared with the voltage output generated by the piezoelectric element for corresponding frequency sweeps and analyzed by the Hilbert transform.
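The Hilbert-transform analysis mentioned above can be sketched as follows: extract the amplitude envelope and instantaneous frequency of the voltage signal via the analytic signal. The 20 Hz sine is a synthetic stand-in for the piezoelectric output, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert

# Analytic signal of a synthetic harvester voltage: envelope = |analytic|,
# instantaneous frequency = d(phase)/dt / (2*pi).
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
voltage = np.sin(2 * np.pi * 20.0 * t)     # stand-in for piezo output

analytic = hilbert(voltage)
envelope = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)

print(round(float(np.median(inst_freq)), 1))  # -> 20.0
```

During a frequency sweep, comparing this instantaneous frequency against the camera-derived beam curvature is what links the two measurements.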
A filter spectrometer concept for facsimile cameras
NASA Technical Reports Server (NTRS)
Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.
1974-01-01
A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.
Pulse Based Time-of-Flight Range Sensing.
Sarbolandi, Hamed; Plack, Markus; Kolb, Andreas
2018-05-23
Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation.
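The range equation underlying any pulse-based ToF sensor is simple: the pulse travels to the target and back, so range = c·Δt/2. A minimal sketch follows; the paper's gate-response modelling is far more involved.

```python
# Range from round-trip pulse time: the light covers 2*range in time Δt.
C = 299_792_458.0  # speed of light in vacuum (m/s)

def pb_tof_range(round_trip_s: float) -> float:
    """Range in metres from a measured round-trip pulse time."""
    return C * round_trip_s / 2.0

# a ~66.7 ns round trip corresponds to roughly 10 m
print(round(pb_tof_range(66.7e-9), 2))  # -> 10.0
```

The same relation explains why sub-centimetre depth precision requires timing the gate response to tens of picoseconds.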
Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.
2013-01-01
We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between GIS and Time-of-Flight (ToF) range camera further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR-camera is used for aerial recordings in the thermal infrared range. Furthermore another custom-developed highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched to mosaics.
Ariza-Avidad, M; Agudo-Acemel, M; Salinas-Castillo, A; Capitán-Vallvey, L F
2015-05-04
A sulphide selective colorimetric metal complexing indicator-displacement assay has been developed using an immobilized copper(II) complex of the azo dye 1-(2-pyridylazo)-2-naphthol printed by inkjetting on a nylon support. The change in colour measured from the image of the disposable membrane acquired by a digital camera using the H coordinate of the HSV colour space as the analytical parameter is able to sense sulphide in aqueous solution at pH 7.4 with a dynamic range up to 145 μM, a detection limit of 0.10 μM and a precision between 2 and 11%. Copyright © 2015 Elsevier B.V. All rights reserved.
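Using the HSV hue (H) coordinate as the analytical parameter, as in this assay, can be sketched with Python's standard `colorsys` module; the RGB values below are illustrative, not measured membrane colours.

```python
import colorsys

# colorsys returns H in [0, 1); multiply by 360 for the hue angle in degrees
# that would be calibrated against sulphide concentration.

def hue_degrees(r: int, g: int, b: int) -> float:
    """Hue angle in degrees from 8-bit RGB camera values."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

print(round(hue_degrees(255, 0, 0)))   # pure red  -> 0
print(round(hue_degrees(0, 0, 255)))   # pure blue -> 240
```

Hue is attractive for such assays because, unlike raw RGB intensity, it is largely insensitive to illumination level.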
Study of dynamics of two-phase flow through a minichannel by means of recurrences
NASA Astrophysics Data System (ADS)
Litak, Grzegorz; Górski, Grzegorz; Mosdorf, Romuald; Rysak, Andrzej
2017-05-01
By changing air and water flow rates in the two-phase (air-water) flow through a minichannel, we observed the evolution of air bubble and slug patterns. This spatiotemporal behaviour was identified qualitatively by using a digital camera. Simultaneously, we provided a detailed analysis of these phenomena by using the corresponding sequences of light transmission time series recorded with a laser-phototransistor sensor. To distinguish particular patterns, we used recurrence plots and recurrence quantification analysis. Finally, we showed that the maxima of various recurrence quantifiers obtained from the laser time series can follow the bubble and slug patterns in the studied ranges of air and water flows.
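The recurrence-plot construction used above can be sketched, without phase-space embedding, as a thresholded distance matrix; the sine series is a synthetic stand-in for the laser-phototransistor signal.

```python
import numpy as np

def recurrence_matrix(x: np.ndarray, eps: float) -> np.ndarray:
    """R[i, j] = 1 when |x_i - x_j| < eps (scalar series, no embedding)."""
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(int)

# recurrence rate (fraction of recurrent points) is one of the quantifiers
# that can be tracked across flow regimes
x = np.sin(np.linspace(0, 4 * np.pi, 100))   # stand-in for the sensor signal
R = recurrence_matrix(x, eps=0.1)
print(R.shape)   # -> (100, 100)
print(R[0, 0])   # the diagonal is always recurrent -> 1
```

Quantifiers such as determinism and laminarity are then computed from the diagonal and vertical line structures of `R`.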
H2RG Detector Characterization for RIMAS and Instrument Efficiencies
NASA Technical Reports Server (NTRS)
Toy, Vicki L.; Kutyrev, Alexander S.; Capone, John I.; Hams, Thomas; Robinson, F. David; Lotkin, Gennadiy N.; Veilleux, Sylvain; Moseley, Samuel H.; Gehrels, Neil A.; Vogel, Stuart N.
2016-01-01
The Rapid infrared IMAger-Spectrometer (RIMAS) is a near-infrared (NIR) imager and spectrometer that will quickly follow up gamma-ray burst afterglows on the 4.3-meter Discovery Channel Telescope (DCT). RIMAS has two optical arms which allows simultaneous coverage over two bandpasses (YJ and HK) in either imaging or spectroscopy mode. RIMAS utilizes two Teledyne HgCdTe H2RG detectors controlled by Astronomical Research Cameras, Inc. (ARC/Leach) drivers. We report the laboratory characterization of RIMAS's detectors: conversion gain, read noise, linearity, saturation, dynamic range, and dark current. We also present RIMAS's instrument efficiency from atmospheric transmission models and optics data (both telescope and instrument) in all three observing modes.
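Conversion gain, one of the quantities characterized above, is commonly measured with the mean-variance (photon transfer) method: for shot-noise-limited frames, variance in DN equals signal divided by gain, so the gain (e-/DN) is the inverse slope of variance versus mean. This is a hedged sketch of the general technique on synthetic frames, not the RIMAS procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 2.0                            # e-/DN (assumed for the demo)
means, variances = [], []
for electrons in [1000, 5000, 20000, 50000]:
    # differencing two frames removes fixed-pattern noise; the variance of
    # the difference is twice the shot-noise variance of one frame
    f1 = rng.poisson(electrons, 100_000) / true_gain
    f2 = rng.poisson(electrons, 100_000) / true_gain
    means.append((f1.mean() + f2.mean()) / 2)
    variances.append((f1 - f2).var() / 2)

slope = np.polyfit(means, variances, 1)[0]
gain_estimate = 1.0 / slope
print(round(gain_estimate, 1))  # close to 2.0
```

Read noise appears as the intercept of the same fit, and the onset of non-linearity marks saturation, so one data set supports several of the reported characterizations.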
NASA Astrophysics Data System (ADS)
Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han
2016-10-01
One of the challenges of augmented reality is a seamless combination of objects of the real and virtual worlds, for example light sources. We suggest a measurement and computation models for reconstruction of light source position. The model is based on the dependence of luminance of the small size diffuse surface directly illuminated by point like source placed at a short distance from the observer or camera. The advantage of the computational model is the ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Torres, Juan; Menéndez, José Manuel
2015-02-01
This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing images that are neither under- nor overexposed. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications; thus, the whole image is monitored in this approach. To control the camera settings, we defined a parameters function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground.
During the daytime of seven days, the algorithm ran alternately alongside a representative auto-exposure algorithm from the recent literature. Besides the sunrises and the nightfalls, multiple weather conditions occurred which produced light changes in the scene: sunny hours that produced sharp shadows and highlights; cloud cover that softened the shadows; and cloudy and rainy hours that dimmed the scene. Several indicators were used to measure the performance of the algorithms. They provided objective quality measures of: the time the algorithms take to recover from an under- or overexposure, the brightness stability, and the deviation from the optimal exposure. The results demonstrated that our algorithm reacts faster to all the light changes than the selected state-of-the-art algorithm. It is also capable of acquiring well-exposed images and maintaining the brightness stable for longer. Summing up the results, we conclude that the proposed algorithm provides a fast and stable auto-exposure method that maintains an optimal exposure for video surveillance applications. Future work will involve the evaluation of this algorithm in robotics.
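The parameters function can be sketched directly from its stated dependencies. The functional form Ef = S·G/A² below is an assumption consistent with the abstract (linear in shutter speed S and gain G, inverse-square in aperture diameter A), not the paper's exact definition.

```python
# Assumed exposure parameters function: Ef = S * G / A**2.

def exposure_value(shutter_s: float, gain: float, aperture_d: float) -> float:
    """Exposure level from shutter speed (s), electronic gain, and
    lens aperture diameter (arbitrary units)."""
    if aperture_d <= 0:
        raise ValueError("aperture diameter must be positive")
    return shutter_s * gain / aperture_d ** 2

# doubling the aperture diameter quarters Ef at fixed shutter and gain
e1 = exposure_value(1 / 100, 2.0, 1.0)
e2 = exposure_value(1 / 100, 2.0, 2.0)
print(e1 / e2)  # -> 4.0
```

Because several (S, G, A) triples give the same Ef, the controller is free to trade shutter speed against gain once the target Ef is known.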
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
Continuous monitoring of Hawaiian volcanoes with thermal cameras
Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.
2014-01-01
Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
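The coded integration step can be illustrated with a toy simulation: each sub-frame is multiplied by its own random binary mask and the products are summed into the single frame the detector records. The statistical CS inversion that recovers the sub-frames from this measurement is omitted here.

```python
import numpy as np

# Toy coded-aperture acquisition: n_sub temporal sub-frames, each modulated
# by a different binary mask, integrate into one camera frame.
rng = np.random.default_rng(1)
n_sub, h, w = 8, 16, 16
subframes = rng.random((n_sub, h, w))        # dynamic process over time
masks = rng.integers(0, 2, (n_sub, h, w))    # per-sub-frame coded apertures

camera_frame = (masks * subframes).sum(axis=0)   # what the detector records

print(camera_frame.shape)  # -> (16, 16)
```

Recovering the 8 sub-frames from this single 16×16 measurement is only possible because the masks differ and the scene is sparse in some basis, which is the premise of the CS inversion.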
STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS
2018-02-15
Ground truth creation based on marked building feature points in two different views 50 frames apart. While Fig. 44 depicted the epipolar lines for the point correspondences between just two views, each row in the current figure represents a similar assessment between one camera and all other cameras within the dataset (BA4S).
NASA Astrophysics Data System (ADS)
Guo, Jie; Zhu, Chang'an
2016-01-01
The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.
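The inverse-compositional idea that makes such tracking fast is that the template's gradient and Gauss-Newton Hessian are computed once, outside the iteration loop. The translation-only sketch below is an illustrative reimplementation of that idea, not the authors' code:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at float coordinates (y, x) with bilinear interpolation."""
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def ic_translation(template, image, n_iter=50):
    """Inverse-compositional Lucas-Kanade with a translation-only warp."""
    gy, gx = np.gradient(template)               # template gradients: computed ONCE
    J = np.stack([gx.ravel(), gy.ravel()], 1)    # Jacobian w.r.t. (dx, dy)
    H_inv = np.linalg.inv(J.T @ J)               # Gauss-Newton Hessian: also precomputed
    p = np.zeros(2)                              # current (dx, dy) estimate
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(n_iter):
        warped = bilinear(image, ys + p[1], xs + p[0])
        dp = H_inv @ (J.T @ (warped - template).ravel())
        p -= dp                                  # inverse-compositional update
        if np.linalg.norm(dp) < 1e-6:
            break
    return p
```

Because nothing inside the loop depends on image gradients, each iteration is a cheap warp plus two small matrix products, which is what enables millisecond-scale displacement extraction.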
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Automatic segmentation of trees in dynamic outdoor environments
USDA-ARS?s Scientific Manuscript database
Segmentation in dynamic outdoor environments can be difficult when the illumination levels and other aspects of the scene cannot be controlled. Specifically in agricultural contexts, a background material is often used to shield a camera's field of view from other rows of crops. In this paper, we ...
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
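The paper's own two-step initialization scheme is not reproduced here, but a common building block for undelayed feature initialization in filter-based monocular SLAM is the inverse-depth parameterization: a feature is stored as an anchor position, two viewing angles, and an inverse depth, which behaves well in the filter even at near-infinite range. The sketch below assumes one common camera-frame convention (y down, z forward):

```python
import numpy as np

def inverse_depth_to_xyz(anchor, theta, phi, rho):
    """Convert an inverse-depth feature to a Euclidean 3D point.

    anchor : camera position when the feature was first observed
    theta  : azimuth of the viewing ray (assumed convention: about the y axis)
    phi    : elevation of the viewing ray
    rho    : inverse depth (1 / distance along the ray)
    """
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return anchor + m / rho
```

As rho approaches zero the point recedes to infinity smoothly, which is exactly why angular-only sensors can feed the filter from the first observation.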
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other "sensory substitution devices" that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and "see" only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focus plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low-resolution and limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
NASA Astrophysics Data System (ADS)
Skripnyak, Vladimir; Skripnyak, Evgeniya; Skripnyak, Vladimir; Vaganova, Irina; Skripnyak, Nataliya
2013-06-01
Research results show that grain size has a strong influence on the mechanical behavior of metals and alloys. Ultrafine-grained HCP and FCC metal alloys present higher spall strength than their coarse-grained counterparts. In the present study we investigate the effect of grain size distribution on the flow stress and strength under dynamic compression and tension of aluminium and magnesium alloys. Microstructure and grain size distribution in the alloys were varied by severe plastic deformation using multiple-pass equal channel angular pressing, cyclic constrained groove pressing, and surface mechanical attrition treatment. Tests were performed using a VHS-Instron servo-hydraulic machine. An ultra-high-speed Phantom V710 camera was used for photographic registration of deformation and fracture of specimens at strain rates from 0.01 to 1000 1/s. In the dynamic regime, UFG alloys exhibit a stronger decrease in ductility compared to the coarse-grained material. The plastic flow of UFG alloys with a bimodal grain size distribution was highly localized. Shear bands and shear crack nucleation and growth were recorded using high-speed photography.
NASA Astrophysics Data System (ADS)
Feinaeugle, M.; Gregorčič, P.; Heath, D. J.; Mills, B.; Eason, R. W.
2017-02-01
We have studied the transfer regimes and dynamics of polymer flyers from laser-induced backward transfer (LIBT) via time-resolved shadowgraphy. Imaging of the flyer ejection phase of LIBT of 3.8 μm and 6.4 μm thick SU-8 polymer films on germanium and silicon carrier substrates was performed over a time delay range of 1.4-16.4 μs after arrival of the laser pulse. The experiments were carried out with 150 fs, 800 nm pulses spatially shaped using a digital micromirror device, and laser fluences of up to 3.5 J/cm2, while images were recorded via a CCD camera and a spark discharge lamp. Flyer velocities, found in the range of 6-20 m/s, and the occurrence of the intact and fragmented ejection regimes were functions of donor thickness, carrier, and laser fluence. The crater profile of the donor after transfer and the resulting flyer profile indicated different flyer ejection modes for Si carriers and high fluences. The results contribute to a better understanding of the LIBT process, and help to determine experimental parameters for successful LIBT of intact deposits.
NASA Technical Reports Server (NTRS)
1976-01-01
Wide field measurements, namely, measurements of relative angular separations between stars over a relatively wide field for parallax and proper motion determinations, were made with the third fine guidance sensor. Narrow field measurements, i.e., double star measurements, are accomplished primarily with the area photometer or faint object camera at f/96. The wavelength range required can be met by the fine guidance sensor, which has a spectral coverage from 3000 to 7500 A. The field of view of the fine guidance sensor also exceeds that required for the wide field astrometric instrument. The requirements call for a filter wheel for the wide field astrometer, so one was incorporated into the design of the fine guidance sensor. The filter wheel probably would contain two neutral density filters to extend the dynamic range of the sensor and three spectral filters for narrowing effective double star magnitude difference.
1991-04-03
The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.
LWIR NUC using an uncooled microbolometer camera
NASA Astrophysics Data System (ADS)
Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve
2010-04-01
Performing a good non-uniformity correction is a key part of achieving optimal performance from an infrared scene projector. Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large format MWIR cameras are readily available and have been successfully used to perform NUC; however, cooled large format LWIR cameras are not as common and are prohibitively expensive. Large format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of processes developed to mitigate these issues follows.
Uncertainty Propagation Methods for High-Dimensional Complex Systems
NASA Astrophysics Data System (ADS)
Mukherjee, Arpan
Researchers are developing ever smaller aircraft called Micro Aerial Vehicles (MAVs). The Space Robotics Group has joined the field by developing a dragonfly-inspired MAV. This thesis presents two contributions to this project. The first is the development of a dynamical model of the internal MAV components to be used for tuning design parameters and as a future plant model. This model is derived using the Lagrangian method and differs from others because it accounts for the internal dynamics of the system. The second contribution of this thesis is an estimation algorithm that can be used to determine prototype performance and verify the dynamical model from the first part. Based on the Gauss-Newton Batch Estimator, this algorithm uses a single camera and known points of interest on the wing to estimate the wing kinematic angles. Unlike other single-camera methods, this method is probabilistically based rather than being geometric.
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and the design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field-of-view without moving parts (scannerless). The concept exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To come up with a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed with an emphasis on the extreme lighting conditions. Its uses for other space missions and terrestrial applications are also highlighted. This design is implemented in a prototype with shorter ranging capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is financed by the Canadian Space Agency.
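The gating principle behind such a scannerless active 3D camera is simple timing arithmetic: opening the intensifier gate at a chosen delay after the laser pulse selects a range bin. A minimal sketch (the range values are illustrative, not the mission's operating parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_for_range(r_min, r_max):
    """Gate timing so the intensified camera integrates only light
    returning from targets between r_min and r_max (metres).

    Returns (delay after the laser pulse, gate-open duration), in seconds.
    """
    delay = 2.0 * r_min / C             # round-trip time to the nearest range
    width = 2.0 * (r_max - r_min) / C   # gate stays open across the range bin
    return delay, width
```

Sweeping the delay over successive 2D acquisitions and comparing intensities is what lets the system build 3D images without any moving parts.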
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications in fields such as virtual and augmented reality, security, and robot navigation. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from a camera and distance information from a laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using a regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, for determining distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available by conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David
2017-03-01
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous measurements with high spatial resolution. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimation directly. Then the signal aliasing properties in modal analysis are exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
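The aliasing behaviour the method exploits follows the standard folding rule: a modal frequency above the Nyquist limit appears folded back into the measurable band. A minimal sketch of that rule (this is textbook sampling theory, not the paper's full identification algorithm):

```python
def folded_frequency(f_true, fs):
    """Apparent (aliased) frequency, in Hz, of a sinusoid at f_true Hz
    sampled uniformly at fs Hz. Below Nyquist (fs/2) it is unchanged."""
    f_mod = f_true % fs
    return min(f_mod, fs - f_mod)
```

Note the inverse mapping is one-to-many: an observed 30 Hz peak at fs = 100 Hz could come from 30, 70, 130 Hz, and so on, which is why additional modal-analysis structure is needed to select the true frequency.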
Are camera surveys useful for assessing recruitment in white-tailed deer?
Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.; ...
2016-12-27
Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R²) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R² to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R² was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results from other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.
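The quoted coefficients of determination reduce to an ordinary least-squares R² between the camera-based fawn:doe ratios and the radiotag survival estimates. A minimal sketch of that computation (any data fed to it here would be made up, so none is shown):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear regression of y on x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares fit
    resid = y - (slope * x + intercept)          # regression residuals
    return 1.0 - resid.var() / y.var()           # 1 - SSE/SST
```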
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), which exploits internal movement of a linescan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048 x 3652 x 150 over the spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.
2007-09-01
We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
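In indirect time-of-flight ranging of this kind, the measured phase shift of the modulation envelope maps linearly to range, and the ranging ambiguity the abstract mentions is the interval at which the phase wraps. A minimal sketch of that arithmetic (the 10 MHz modulation frequency below is illustrative, not the system's actual setting):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad, f_mod_hz):
    """Range from measured envelope phase: the factor 4*pi (not 2*pi)
    accounts for the round trip of the modulated illumination."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_interval(f_mod_hz):
    """Ranges separated by this interval produce identical phase readings."""
    return C / (2.0 * f_mod_hz)
```

Raising the modulation frequency improves phase (hence range) resolution but shrinks this unambiguous interval, which is why homodyne/heterodyne systems need an explicit ambiguity-resolution step.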
Automatic Exposure Iris Control (AEIC) for data acquisition camera
NASA Technical Reports Server (NTRS)
Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.
1975-01-01
A lens design capable of operating over a total range of f/1.4 to f/11.0 with through-the-lens light sensing is presented, along with a system which compensates for ASA film speeds as well as shutter openings. The space shuttle camera system package is designed so that it can be assembled on the existing 16 mm DAC with a minimum of alteration to the camera.
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
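Dual-component PSP measurements are typically reduced to the ratio of the pressure-sensitive and reference luminophore channels, which cancels illumination non-uniformity, and calibrated with a Stern-Volmer relation. The sketch below assumes the common linear form I_ref/I = A + B(P/P_ref); the coefficients are illustrative, not this study's calibration:

```python
def pressure_from_ratio(intensity_ratio, A, B, p_ref):
    """Invert the linear Stern-Volmer relation I_ref/I = A + B * (P / p_ref).

    intensity_ratio : reference-condition image divided by run image
    A, B            : calibration coefficients (A + B = 1 at P = p_ref)
    p_ref           : reference pressure, e.g. ambient, in Pa
    """
    return p_ref * (intensity_ratio - A) / B
```

With a colour machine-vision camera the two luminophore emissions can be separated by the sensor's colour channels, so a single exposure yields both images entering the ratio.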
NDVI derived from IR-enabled digital cameras: applicability across different plant functional types
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Galvagno, Marta; Migliavacca, Mirco; Sonnentag, Oliver; Hufkens, Koen; Ryu, Youngryel; Humphreys, Elyn; Morra di Cella, Umberto; Richardson, Andrew D.
2017-04-01
Phenological time-series based on the deployment of radiometric measurements are now being constructed at different spatial and temporal scales ranging from weekly satellite observations to sub-hourly in situ measurements by means of e.g. radiometers or digital cameras. In situ measurements are strongly required to provide high-frequency validation data for satellite-derived vegetation indices. In this study we used a recently developed method to calculate NDVI from NIR-enabled digital cameras (NDVIC) at 17 sites encompassing 6 plant functional types and totaling 74 site-years of data from the PHENOCAM network. The seasonality of NDVIC was comparable to both NDVI measured by ground light emitting diode (LED) sensors and by MODIS, whereas site-specific scaling factors are required to compare absolute values of NDVIC to standard NDVI measurements. We also compared green chromatic coordinate (GCC) extracted from RGB-only images to NDVIC and found that the two are characterized by slightly different dynamics, dependent on the plant functional type. During senescence, NDVIC lags behind GCC in deciduous broad-leaf forests and grasslands, suggesting that GCC is more sensitive to leaf decoloration and NDVIC to the biomass reduction resulting from leaf abscission and the green to dry biomass ratio of the canopy. In evergreen forests, NDVIC peaks later than GCC in spring, likely tracking the processes of shoot elongation and new needle formation. Our findings suggest therefore that NDVIC and GCC can complement each other in describing ecosystem phenology.
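The index itself is the standard normalized difference between near-infrared and red signals; a minimal sketch, ignoring the site-specific scaling factors the study notes are needed to compare camera NDVI with standard NDVI:

```python
import numpy as np

def camera_ndvi(nir, red):
    """Normalized difference vegetation index from the NIR and red
    digital numbers of an IR-enabled camera: (NIR - R) / (NIR + R)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon guards against dark pixels
```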
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the point spread functions of the different optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal to noise ratio, dynamic range, analog to digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
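Three of the pipeline stages named above (Bayer sampling, white balancing, gamma correction) are simple enough to sketch directly; the RGGB mosaic layout and the gain/gamma values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def bayer_mosaic(rgb):
    """RGGB Bayer sampling of an H x W x 3 image (H, W even): each pixel
    of the raw frame keeps only one of the three color channels."""
    H, W, _ = rgb.shape
    raw = np.empty((H, W))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return raw

def white_balance(rgb, gains):
    """Per-channel gains, clipped to the [0, 1] sensor range."""
    return np.clip(rgb * gains, 0.0, 1.0)

def gamma_encode(rgb, gamma=2.2):
    """Simple power-law gamma encoding of linear values in [0, 1]."""
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)
```

The interpolation (demosaicing), noise, and compression blocks omitted here are where most of the modelling effort in such a simulation actually goes.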
Investigation of sparsity metrics for autofocusing in digital holographic microscopy
NASA Astrophysics Data System (ADS)
Fan, Xin; Healy, John J.; Hennelly, Bryan M.
2017-05-01
Digital holographic microscopy (DHM) is an optoelectronic technique that is made up of two parts: (i) the recording of the interference pattern of the diffraction pattern of an object and a known reference wavefield using a digital camera and (ii) the numerical reconstruction of the complex object wavefield using the recorded interferogram and a distance parameter as input. The latter is based on the simulation of optical propagation from the camera plane to a plane at any arbitrary distance from the camera. A key advantage of DHM over conventional microscopy is that both the phase and intensity information of the object can be recovered at any distance, using only one capture, and this facilitates the recording of scenes that may change dynamically and that may otherwise go in and out of focus. Autofocusing using traditional microscopy requires mechanical movement of the translation stage or the microscope objective, and multiple image captures that are then compared using some metric. Autofocusing in DHM is similar, except that the sequence of intensity images, to which the metric is applied, is generated numerically from a single capture. We recently investigated the application of a number of sparsity metrics for DHM autofocusing and in this paper we extend this work to include more such metrics, and apply them over a greater range of biological diatom cells and magnification/numerical apertures. We demonstrate for the first time that these metrics may be grouped together according to matching behavior following high pass filtering.
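The autofocus loop described above amounts to scoring each numerically refocused slice with a sparsity metric and keeping the sharpest. The sketch below uses one illustrative candidate metric, a gradient-domain l2/l1 norm ratio, and does not reproduce the paper's full metric set or the numerical propagation itself:

```python
import numpy as np

def sparsity_metric(img):
    """l2/l1 ratio of the gradient magnitude: higher when the gradient
    energy is concentrated in few pixels, i.e. when edges are sharp."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    g = np.hypot(gy, gx).ravel()
    return np.linalg.norm(g, 2) / (np.linalg.norm(g, 1) + 1e-12)

def best_focus(stack):
    """Index of the in-focus slice in a numerically reconstructed stack."""
    return int(np.argmax([sparsity_metric(im) for im in stack]))
```

In DHM the stack fed to `best_focus` is generated from a single hologram capture by simulating propagation to a set of candidate distances, which is what removes the need for mechanical refocusing.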
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation between the two real camera images nearest the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view interpolation. The method can therefore easily be applied to a dynamic event, even in a large space, because the effort of camera calibration is reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced along with the view-synthesis algorithm and experimental results.
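The blending of two real views for an intermediate viewpoint can be illustrated with a toy linear interpolation of corresponding points and intensities; the actual method transfers pixels region by region via epipolar geometry, which this sketch omits:

```python
import numpy as np

def interpolate_view(pts_a, pts_b, vals_a, vals_b, w):
    """Synthesize intermediate-view point positions and intensities by
    linear interpolation between two real camera views.

    pts_a, pts_b: (N, 2) arrays of corresponding image coordinates.
    w: 0.0 gives view A, 1.0 gives view B. A simplified stand-in for
    epipolar-transfer view interpolation (no rectification here).
    """
    pts = (1.0 - w) * pts_a + w * pts_b
    vals = (1.0 - w) * vals_a + w * vals_b
    return pts, vals
```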
COBRA ATD multispectral camera response model
NASA Astrophysics Data System (ADS)
Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.
2000-08-01
A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
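The data-fitting step can be illustrated with a generic power-law fit in log-log space; this is an assumption-laden sketch, since the actual COBRA model also depends on intensifier gain and exposure, which are folded away here:

```python
import numpy as np

def fit_power_response(irradiance, dn):
    """Fit DN = a * E**b by linear least squares in log-log space.

    A generic stand-in for a nonlinear camera-response fit; the real
    COBRA model has additional gain and exposure terms.
    """
    b, log_a = np.polyfit(np.log(irradiance), np.log(dn), 1)
    return np.exp(log_a), b

E = np.array([1.0, 2.0, 4.0, 8.0])
a, b = fit_power_response(E, 3.0 * E ** 0.7)
```

Because the fit is linear in log space, it is trivially invertible, one of the properties the abstract highlights for the real model.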
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is a simple and widely used piece of medical equipment for screening and diagnosis of retinal disease. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which leaves patients with vertigo and blurred vision; the nonmydriatic fundus camera is therefore the trend. Desktop fundus cameras are not easy to carry and are only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to restrain the stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range is between -20 m^-1 and +20 m^-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Parris, Larkin S.; Lindal, John M.
This paper explores the temperature range extension of long-wavelength infrared (LWIR) cameras by placing an aperture in front of the lens. An aperture smaller than the lens will reduce the radiance reaching the sensor, allowing the camera to image targets much hotter than typically allowable. These higher temperatures were accurately determined after developing a correction factor which was applied to the built-in temperature calibration. The relationship between aperture diameter and temperature range is linear. The effect of pre-lens apertures on image uniformity is a form of anti-vignetting, meaning the corners appear brighter (hotter) than the rest of the image. An example of using this technique to measure the temperatures of high-melting-point polymers during 3D printing provides valuable information on the time required for the weld-line temperature to fall below the glass transition temperature.
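The radiometric reasoning behind such a correction can be sketched as a simple area scaling of the collected radiance; note the paper's actual correction factor was derived empirically against the built-in calibration, so this is only the idealized geometry:

```python
def corrected_radiance(measured, aperture_d, lens_d):
    """Scale camera-reported radiance by the pre-lens aperture's area
    reduction: a stop of half the lens diameter passes a quarter of
    the light, so the true radiance is four times the reading.
    Idealized sketch; ignores diffraction and vignetting effects."""
    return measured * (lens_d / aperture_d) ** 2
```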
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. 
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView(TM) high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Techniques for optically compressing light intensity ranges
Rushford, Michael C.
1989-01-01
A pinhole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten uranium in an atomic vapor laser isotope separator (AVLIS) system, is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficiently to make it receivable and decipherable by a standard video camera. A number of different means for compressing the intensity range are disclosed. These include the use of photogray glass, the use of a pair of interference filters, and the utilization of a new liquid crystal notch filter in combination with an interference filter.
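The attenuation arithmetic behind stacking such filters is straightforward: optical densities add, so transmissions multiply. A minimal sketch (generic filter math, not specific to the disclosed assembly):

```python
def stacked_transmission(densities):
    """Combined transmission of stacked neutral-density filters.

    Each filter of optical density OD transmits 10**(-OD) of the
    incident light; densities add for filters in series."""
    return 10.0 ** (-sum(densities))
```

For example, an OD 1.0 filter in front of an OD 2.0 filter passes one part in a thousand, compressing a three-decade intensity range into a video camera's usable range.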
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient: less than 1% of the flying time is used for collecting light. This unused potential can be exploited by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the second-generation, commercially available ButterflEYE camera, which offers an extended spectral range (475-925 nm), and we discuss future work.
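The reassembly of a hyperspectral cube from such a stepwise line-filter sensor can be sketched under a strong simplifying assumption: the platform advances exactly one filter step per frame and the frames are perfectly co-registered (real processing must estimate and correct the motion):

```python
import numpy as np

def assemble_cube(frames, n_strips):
    """Rebuild a hyperspectral cube from stepped line-filter frames.

    frames: (T, n_bands, width) stack where filter band b of frame t
    images ground strip t - b (one filter-step advance per frame).
    Returns cube of shape (n_strips, n_bands, width): every strip is
    eventually seen through every band as the platform moves."""
    T, n_bands, width = frames.shape
    cube = np.zeros((n_strips, n_bands, width))
    for s in range(n_strips):
        for b in range(n_bands):
            cube[s, b] = frames[s + b, b]
    return cube
```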
NASA Astrophysics Data System (ADS)
Mali, V. K.; Kuiry, S. N.
2015-12-01
Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and error prone. Remotely sensed satellite imagery can provide the necessary information over large areas, but high-resolution imagery is expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Nevertheless, such data are often used to calibrate river flow models, even though these models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data can be supplemented by experimental observations of a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the close-range photogrammetry (CRP) technique. A number of DSLR Nikon D5300 cameras, mounted 3.5 m above the river bed, were used to capture images of the physical model and of the flooding scenarios during the experiments. Non-specular materials were introduced at the inlet, and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination, and altitude.
The CRP technique for a large-scale physical model can significantly reduce time and manual labour, and it avoids the human errors of taking data with a point gauge. The resulting highly accurate DEM and WS profile can be used in mathematical models for accurate prediction of river dynamics. This study should be very helpful for sediment transport studies and can also be extended to real case studies.
NASA Astrophysics Data System (ADS)
Capocchiano, F.; Ravanelli, R.; Crespi, M.
2017-11-01
Within the construction sector, Building Information Models (BIMs) are used more and more, thanks to the several benefits that they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions; at the same time, range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process by extracting the geometrical information contained in the 3D models that are so easily collected through range cameras. In this work, a new algorithm to extract planimetries from the 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor(TM). The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy of 5-10 cm. It can potentially be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of BIM Volunteered Geographic Information (VGI) generation.
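A first step of such a planimetry extraction can be sketched as projecting the captured 3D points onto the floor plane and rasterizing an occupancy grid whose filled cells outline the room; this is a minimal illustration under assumed parameters, not the paper's algorithm:

```python
import numpy as np

def floor_plan(points, cell=0.05, min_pts=5):
    """Rasterize a 2D footprint from a range-camera point cloud.

    points: (N, 3) array; x, y are assumed parallel to the floor.
    Cells of size `cell` (meters) hit by at least `min_pts` points are
    marked occupied. Both parameter defaults are illustrative."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)   # count points per cell
    return grid >= min_pts
```

The `min_pts` threshold rejects isolated noise points, a common failing of consumer range cameras.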
Space infrared telescope facility wide field and diffraction limited array camera (IRAC)
NASA Technical Reports Server (NTRS)
Fazio, Giovanni G.
1988-01-01
The wide-field and diffraction limited array camera (IRAC) is capable of two-dimensional photometry in either a wide-field or diffraction-limited mode over the wavelength range from 2 to 30 microns, with a possible extension to 120 microns. A low-doped indium antimonide detector was developed for 1.8 to 5.0 microns, detectors were tested and optimized for the entire 1.8 to 30 micron range, beamsplitters were developed and tested for the 1.8 to 30 micron range, and tradeoff studies of the camera's optical system were performed. Data are presented on the performance of InSb, Si:In, Si:Ga, and Si:Sb array detectors bump-bonded to a multiplexed CMOS readout chip of the source-follower type at SIRTF operating backgrounds (equal to or less than 1 x 10 to the 8th ph/sq cm/sec) and temperature (4 to 12 K). Some results at higher temperatures are also presented for comparison to SIRTF temperature results. Data are also presented on the performance of IRAC beamsplitters at room temperature at both 0 and 45 deg angle of incidence and on the performance of the all-reflecting optical system baselined for the camera.
Optical attenuation mechanism upgrades, MOBLAS, and TLRS systems
NASA Technical Reports Server (NTRS)
Eichinger, Richard; Johnson, Toni; Malitson, Paul; Oldham, Thomas; Stewart, Loyal
1993-01-01
This poster presentation describes the Optical Attenuation Mechanism (OAM) upgrades to the MOBLAS and TLRS Crustal Dynamics Satellite Laser Ranging (CDSLR) systems. The upgrades were made to prepare these systems to laser range to the TOPEX/POSEIDON spacecraft, to be launched in the summer of 1992. The OAM permits the laser receiver to operate over the expected large signal dynamic range from TOPEX/POSEIDON, and it reduces the number of pre- and post-calibrations for each satellite during multi-satellite tracking operations. It further simplifies the calibration bias corrections that had been made due to the pass-to-pass variation of the photomultiplier supply voltage and the transmit filter glass thickness. The upgrade incorporated improvements to the optical alignment capability of each CDSLR system through the addition of a CCD camera into the MOBLAS receive telescope and an alignment telescope onto the TLRS optical table. The OAM is stepper-motor and microprocessor based, and the system can be controlled either manually by a control switch panel or by computer via an EIA RS-232C serial interface. The OAM has a neutral density (ND) range of 0.0 to 4.0 and the positioning is absolute, referenced in steps of 0.1 ND. Both the fixed transmit filter and the daylight filter are solenoid actuated, with digital inputs and outputs to and from the OAM microprocessor. During automated operation, the operator has the option to override the remote control and control the OAM system via a local control switch panel.
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
An assessment of the utility of a non-metric digital camera for measuring standing trees
Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn
2000-01-01
Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...
Windy Mars: A Dynamic Planet as Seen by the HiRISE Camera
NASA Technical Reports Server (NTRS)
Bridges, N. T.; Geissler, P. E.; McEwen, A. S.; Thomson, B. J.; Chuang, F. C.; Herkenhoff, K. E.; Keszthelyi, L. P.; Martnez-Alonso, S.
2007-01-01
With a dynamic atmosphere and a large supply of particulate material, the surface of Mars is heavily influenced by wind-driven, or aeolian, processes. The High Resolution Imaging Science Experiment (HiRISE) camera on the Mars Reconnaissance Orbiter (MRO) provides a new view of Martian geology, with the ability to see decimeter-size features. Current sand movement, and evidence for recent bedform development, is observed. Dunes and ripples generally exhibit complex surfaces down to the limits of resolution. Yardangs have diverse textures, with some being massive at HiRISE scale, others having horizontal and cross-cutting layers of variable character, and some exhibiting blocky and polygonal morphologies. 'Reticulate' (fine polygonal texture) bedforms are ubiquitous in the thick mantle at the highest elevations.
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
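Associating asynchronous events with an individual tracer via a Kalman filter can be sketched with a constant-velocity model updated at each event's timestamp; the noise parameters here are illustrative, not taken from the paper:

```python
import numpy as np

class TrackKF:
    """Constant-velocity Kalman filter for one tracer, updated with
    asynchronous (t, x, y) brightness-change events.

    State is [x, y, vx, vy]; q and r are illustrative process and
    measurement noise scales."""
    def __init__(self, t0, xy0, q=1.0, r=0.5):
        self.t = t0
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.q, self.r = q, r

    def update(self, t, xy):
        dt = t - self.t
        F = np.eye(4); F[0, 2] = F[1, 3] = dt           # predict over dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(4)
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # measure position
        S = H @ self.P @ H.T + self.r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(xy) - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P
        self.t = t
```

Because events arrive at irregular instants, the prediction step uses the actual inter-event interval dt rather than a fixed frame period, which is the key difference from frame-based tracking.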
NASA Technical Reports Server (NTRS)
Steele, P.; Kirch, D.
1975-01-01
In 47 men with arteriographically defined coronary artery disease comparative studies of left ventricular ejection fraction and segmental wall motion were made with radionuclide data obtained from the image intensifier camera computer system and with contrast cineventriculography. The radionuclide data was digitized and the images corresponding to left ventricular end-diastole and end-systole were identified from the left ventricular time-activity curve. The left ventricular end-diastolic and end-systolic images were subtracted to form a silhouette difference image which described wall motion of the anterior and inferior left ventricular segments. The image intensifier camera allows manipulation of dynamically acquired radionuclide data because of the high count rate and consequently improved resolution of the left ventricular image.
Two-Way Communication Using RFID Equipment and Techniques
NASA Technical Reports Server (NTRS)
Jedry, Thomas; Archer, Eric
2007-01-01
Equipment and techniques used in radio-frequency identification (RFID) would be extended, according to a proposal, to enable short-range, two-way communication between electronic products and host computers. In one example of a typical contemplated application, the purpose of the short-range radio communication would be to transfer image data from a user's digital still or video camera to the user's computer for recording and/or processing. The concept is also applicable to consumer electronic products other than digital cameras (for example, cellular telephones, portable computers, or motion sensors in alarm systems), and to a variety of industrial and scientific sensors and other devices that generate data. Until now, RFID has been used to exchange small amounts of mostly static information for identifying and tracking assets. Information pertaining to an asset (typically, an object in inventory to be tracked) is contained in miniature electronic circuitry in an RFID tag attached to the object. Conventional RFID equipment and techniques enable a host computer to read data from and, in some cases, to write data to, RFID tags, but they do not enable such additional functions as sending commands to, or retrieving possibly large quantities of dynamic data from, RFID-tagged devices. The proposal would enable such additional functions. The figure schematically depicts an implementation of the proposal for a sensory device (e.g., a digital camera) that includes circuitry that converts sensory information to digital data. In addition to the basic sensory device, there would be a controller and a memory that would store the sensor data and/or data from the controller. The device would also be equipped with a conventional RFID chipset and antenna, which would communicate with a host computer via an RFID reader.
The controller would function partly as a communication interface, implementing two-way communication protocols at all levels (including RFID if needed) between the sensory device and the memory and between the host computer and the memory. The controller would perform power V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.
Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results from other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.
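The regression statistic used throughout this comparison, a coefficient of determination for a simple linear fit of survival on recruitment, can be computed as follows (standard formula, not the study's code):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the simple linear regression of
    y on x: 1 minus the ratio of residual to total sum of squares."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A value near 1 means camera-based recruitment explains most of the variation in radiotag-based survival; the study's 0.445-0.621 indicates partial, not full, agreement.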
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2015-12-01
The ability to detect and monitor precursory events, thermal signatures, and ongoing volcanic activity in near-realtime is an invaluable tool. Volcanic hazards range from low-level lava effusion to large explosive eruptions easily capable of ejecting ash to aircraft cruise altitudes. Ground-based remote sensing is essential for detecting and monitoring this activity, but the required equipment is often expensive and difficult to maintain, which increases the risk to public safety and the likelihood of financial impact. Our investigation explores the use of 'off the shelf' cameras, ranging from computer webcams to low-light security cameras, to monitor volcanic incandescent activity in near-realtime. These cameras are ideal as they operate in the visible and near-infrared (NIR) portions of the electromagnetic spectrum, are relatively cheap to purchase, consume little power, are easily replaced, and can provide telemetered, near-realtime data. We focus on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate each image according to pixel brightness, in order to automatically detect and identify increases in potentially hazardous activity. The cameras used here range in price from $0 to $1,000, and the script is written in Python, an open-source programming language, to reduce the overall cost to potential users and increase the accessibility of these tools, particularly in developing nations. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures to be correlated to pixel brightness. Data collected from several volcanoes, (1) Stromboli, Italy, (2) Shiveluch, Russia, (3) Fuego, Guatemala, and (4) Popocatépetl, México, along with campaign data from Stromboli (June 2013) and laboratory tests, are presented here.
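Since the detection script described above is written in Python, its brightness-evaluation core might look like the following minimal sketch; the threshold values are illustrative and would in practice be tuned per camera and scene:

```python
import numpy as np

def incandescence_alert(frame, value_thresh=200, frac_thresh=0.001):
    """Flag a webcam frame as potentially showing incandescent activity.

    Triggers when the fraction of pixels brighter than value_thresh
    (0-255 scale) exceeds frac_thresh. Both defaults are illustrative.
    Accepts grayscale (H, W) or color (H, W, 3) arrays."""
    gray = np.asarray(frame, dtype=float)
    if gray.ndim == 3:
        gray = gray.mean(axis=2)          # crude luminance
    frac = np.mean(gray > value_thresh)
    return frac > frac_thresh, frac
```

In a deployed script, frames would be fetched from the streaming webcam URL and the returned fraction logged, so that a sustained rise, not a single noisy frame, raises the alert.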
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, in-car systems, and smart spaces. These cameras provide data every day that must be analyzed effectively. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. Analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; we instead focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This task is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent research results addressing topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
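As background to the initialization problem, depth only becomes observable once a feature has been seen from two distinct camera positions. A standard two-view construction (the midpoint of the common perpendicular of the two bearing rays) can be sketched as below; this is a textbook device, not the paper's own two-step scheme.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Recover a 3-D point from two bearing-only observations.

    c1, c2: camera centres; d1, d2: line-of-sight directions.
    Returns the midpoint of the shortest segment joining the two rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Closest-approach conditions: the residual is orthogonal to both rays.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

With noise-free bearings the midpoint coincides with the true point; with noise it gives a sensible depth hypothesis for filter initialization.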
NASA Astrophysics Data System (ADS)
Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.
2017-07-01
Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.
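The uniformity figures can be read as a relative 1σ deviation of the reconstructed source profile; a minimal sketch of that metric follows (the paper's exact definition may differ).

```python
import numpy as np

def uniformity(profile):
    """Relative 1-sigma non-uniformity of a reconstructed profile,
    i.e. std/mean in percent. Illustrative definition only."""
    profile = np.asarray(profile, dtype=float)
    return 100.0 * profile.std() / profile.mean()
```

Applied to the binned intensity along a reconstructed line source, a value of 3 would correspond to the 3% (1σ) figure quoted above.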
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is known. However, the fill factor is kept an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors across the images and for each camera. PMID:28335459
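The fusion of differently exposed images into a radiance map can be illustrated with the classic hat-weighted scheme of Debevec and Malik, assuming a linear camera response for simplicity; this is a generic sketch, not the paper's virtual-response method.

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Fuse exposure-bracketed images into a radiance map.

    Uses a hat-shaped weight that trusts mid-range pixel values and
    assumes a linear response (radiance ~ value / exposure time).
    """
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = np.minimum(im, 255.0 - im)   # hat weight, peaks at mid-gray
        num += w * (im / t)              # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)
```

For a true radiance of 10 observed at exposure times 1 and 2 (values 10 and 20), the fused map recovers 10.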
Gas-phase lifetimes of nucleobase analogues by picosecond pump-ionization and streak techniques.
Blaser, Susan; Frey, Hans-Martin; Heid, Cornelia G; Leutwyler, Samuel
2014-01-01
The picosecond (ps) timescale is relevant for the investigation of many molecular dynamical processes such as fluorescence, nonradiative relaxation, intramolecular vibrational relaxation, molecular rotation and intermolecular energy transfer, to name a few. While investigations of ultrafast (femtosecond) processes of biological molecules, e.g. nucleobases and their analogues, in the gas phase are available, there are few investigations on the ps timescale. We have constructed a ps pump-ionization setup and a ps streak camera fluorescence apparatus for the determination of lifetimes of supersonic jet-cooled and isolated molecules and clusters. The ps pump-ionization setup was used to determine the lifetimes of the nucleobase analogue 2-aminopurine (2AP) and of two 2AP˙(H2O)n water cluster isomers with n=1 and 2. Their lifetimes lie between 150 ps and 3 ns and are strongly cluster-size dependent. The ps streak camera setup was used to determine accurate fluorescence lifetimes of the uracil analogue 2-pyridone (2PY), its self-dimer (2PY)2, two isomers of its trimer (2PY)3 and its tetramer (2PY)4, which lie in the 7-12 ns range.
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast-convergence algorithm for image processing. Altogether, MISHELF (from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel-domain diffraction patterns in a single camera snapshot by illuminating the sample with three coherent wavelengths at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range) phase-retrieved (twin-image eliminated) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method becomes an alternative instrument that improves some capabilities of existing lensless microscopes.
NASA Technical Reports Server (NTRS)
Barker, Ed; Maley, Paul; Mulrooney, Mark; Beaulieu, Kevin
2009-01-01
In September 2008, a joint ESA/NASA multi-instrument airborne observing campaign was conducted over the Southern Pacific ocean. The objective was the acquisition of data to support detailed atmospheric re-entry analysis for the first flight of the European Automated Transfer Vehicle (ATV)-1. Skilled observers were deployed aboard two aircraft which were flown at 12.8 km altitude within visible range of the ATV-1 re-entry zone. The observers operated a suite of instruments with low-light-level detection sensitivity including still cameras, high speed and 30 fps video cameras, and spectrographs. The collected data has provided valuable information regarding the dynamic time evolution of the ATV-1 re-entry fragmentation. Specifically, the data has satisfied the primary mission objective of recording the explosion of ATV-1's primary fuel tank and thereby validating predictions regarding the tanks demise and the altitude of its occurrence. Furthermore, the data contains the brightness and trajectories of several hundred ATV-1 fragments. It is the analysis of these properties, as recorded by the particular instrument set sponsored by NASA/Johnson Space Center, which we present here.
NASA Astrophysics Data System (ADS)
Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun
2018-01-01
The dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains. Currently, no digital measurement system exists to solve this problem. This paper develops an optoelectronic measurement system using monocular digital cameras, and presents research on the measurement theory, visual target design, calibration algorithm design, software programming and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software and a terminal computer. The system has such advantages as a large measurement scale, a high degree of automation, strong anti-interference ability, noise rejection and real-time measurement. In this paper, we resolve key technologies such as the transformation, storage and processing of the cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. These experiments verify the rationality of the system design and the correctness, precision and effectiveness of the relevant methods.
Upgrades and Modifications of the NASA Ames HFFAF Ballistic Range
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Wilder, Michael C.; Cornelison, Charles J.; Perez, Alfredo J.
2017-01-01
The NASA Ames Hypervelocity Free Flight Aerodynamics Facility ballistic range is described. The various configurations of the shadowgraph stations are presented. This includes the original stations with film and configurations with two different types of digital cameras. Resolution tests for the 3 shadowgraph station configurations are described. The advantages of the digital cameras are discussed, including the immediate availability of the shadowgraphs. The final shadowgraph station configuration is a mix of 26 Nikon cameras and 6 PI-MAX2 cameras. Two types of trigger light sheet stations are described: visible and IR. The two gunpowders used for the NASA Ames 6.251.50 light gas guns are presented: the Hercules HC-33-FS powder (no longer available) and the St. Marks Powder WC 886 powder. The results from eight proof shots for the two powders are presented. Both muzzle velocities and piston velocities are 5-9% lower for the new St. Marks WC 886 powder than for the old Hercules HC-33-FS powder. The experimental and CFD (computational) piston and muzzle velocities are in good agreement. Shadowgraph-reading software that employs template-matching pattern recognition to locate the ballistic-range model is described. Templates are generated from a 3D solid model of the ballistic-range model. The accuracy of the approach is assessed using a set of computer-generated test images.
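The template-matching step can be illustrated with plain normalized cross-correlation; this is a generic sketch, not the NASA shadowgraph-reading software.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation.

    Returns the (row, col) of the best-matching window's top-left
    corner. Brute-force search, for illustration only.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

A production version would compute the correlation in the Fourier domain, but the score being maximized is the same.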
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. At the same time, automatic traffic sign recognition from an equipped mobile platform has recently become a challenging issue for both intelligent transportation and municipal database collection. However, several inevitable problems are common to all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from algorithms that detect, recognize and localize traffic signs by fusing shape, color and object information from both range and intensity images. For the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, improvements of 83% in the RMS range error and 72% in the RMS coordinate residuals of the PMD camera, over those achieved with basic calibration, are realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, together with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 cm.
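The GPS/INS integration idea behind the EKF can be sketched with a toy one-dimensional Kalman filter in which INS accelerations drive the prediction and GPS positions provide the correction. Noise parameters are illustrative; the paper uses a full EKF over the complete navigation state.

```python
import numpy as np

def kf_gps_ins(accels, gps_pos, dt, r_gps=4.0, q=0.01):
    """1-D position/velocity Kalman filter fusing INS and GPS."""
    x = np.zeros(2)                       # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])     # acceleration input matrix
    H = np.array([[1.0, 0.0]])            # GPS measures position only
    Q = q * np.eye(2)
    est = []
    for a, z in zip(accels, gps_pos):
        x = F @ x + B * a                 # predict with INS acceleration
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r_gps           # innovation covariance
        K = (P @ H.T) / S                 # Kalman gain
        x = x + (K * (z - H @ x)).ravel() # correct with GPS position
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

With a constant-velocity trajectory and consistent measurements the estimate converges to the true position.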
Pirie, Chris G; Pizzirani, Stefano
2011-12-01
To describe a digital single lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor which mounts between a dSLR camera body and lens. Posterior segment viewing and imaging were performed with the aid of an indirect lens ranging from 28-90D. Coaxial illumination for viewing was provided by a single white light emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers with their azimuths perpendicular to one another. High-quality, high-resolution, reflection-free digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail as a conventional fundus camera. A wide range of magnifications (1.2-4X) and fields of view (31-95 degrees, horizontal) were obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained and the adaptor proved to be versatile, portable and of low cost.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³ and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
NASA Astrophysics Data System (ADS)
Ou, Yangwei; Zhang, Hongbo; Li, Bin
2018-04-01
The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, the optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. The LIDAR (Light Detection And Ranging) or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limitation on the field of view (FOV) of cameras, the visibility of spacecraft and the installation of cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate the position and velocity. The results show that the navigation accuracy can be enhanced by using more deputies and that the installation of cameras significantly affects the navigation performance.
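The measurement model combining the camera LOS angles with the range can be sketched as follows; axis and angle conventions here are assumptions, not taken from the paper.

```python
import numpy as np

def los_measurement(rel):
    """Map a relative position vector to (azimuth, elevation, range)."""
    x, y, z = rel
    r = np.linalg.norm(rel)
    az = np.arctan2(y, x)        # azimuth in the x-y plane
    el = np.arcsin(z / r)        # elevation above the x-y plane
    return az, el, r

def los_inverse(az, el, r):
    """Reconstruct the relative position vector from (az, el, range)."""
    return r * np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
```

In an EKF, `los_measurement` would serve as the measurement function h(x) and its Jacobian would be evaluated at the predicted relative state.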
Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles
Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.
2017-01-01
Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
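A common final step in vision-based system identification is picking the dominant structural frequency from the spectrum of the extracted displacement record. A minimal sketch follows; this is a simplification, not the paper's cross-correlation method.

```python
import numpy as np

def dominant_frequency(displacement, fs):
    """Return the frequency (Hz) of the largest spectral peak.

    displacement: 1-D displacement time history; fs: sampling rate (Hz).
    """
    displacement = displacement - np.mean(displacement)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(displacement))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

For a real structure one would window the signal and interpolate around the peak, but the peak-picking idea is the same.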
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
Experimental results in autonomous landing approaches by dynamic machine vision
NASA Astrophysics Data System (ADS)
Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.
1994-07-01
The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control applications are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing direction control.
Strategies for Multi-Modal Analysis
NASA Astrophysics Data System (ADS)
Hexemer, Alexander; Wang, Cheng; Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sethian, James; Camera Team
This section on soft materials is dedicated to discussing the extraction of the chemical distribution and spatial arrangement of constituent elements and functional groups at multiple length scales and, thus, the examination of collective dynamics, transport, and electronic ordering phenomena. Traditional measures of structure in soft materials have relied heavily on scattering- and imaging-based techniques due to their capacity to measure nanoscale dimensions and to monitor structure under conditions of dynamic stress loading. Special attention will focus on the application of resonant x-ray scattering, contrast-varied neutron scattering, analytical transmission electron microscopy, and their combinations. This session aims to bring together experts in the scattering and electron microscopy fields to discuss recent advances in selectively characterizing the structural architectures of complex soft materials, which often have multiple components with a wide range of length scales and multiple functionalities, and thus hopes to foster novel ideas for deciphering a higher level of structural complexity in soft materials in the future. CAMERA, Early Career Award.
Comparison of Brownian-dynamics-based estimates of polymer tension with direct force measurements.
Arsenault, Mark E; Purohit, Prashant K; Goldman, Yale E; Shuman, Henry; Bau, Haim H
2010-11-01
With the aid of Brownian dynamics models, it is possible to estimate polymer tension by monitoring polymers' transverse thermal fluctuations. To assess the precision of the approach, Brownian-dynamics-based tension estimates were compared with the force applied to rhodamine-phalloidin-labeled actin filaments bound to polymer beads and suspended between two optical traps. The transverse thermal fluctuations of each filament were monitored with a CCD camera, and the images were analyzed to obtain the filament's transverse displacement variance as a function of position along the filament, the filament's tension, and the camera's exposure time. A linear Brownian dynamics model was used to estimate the filament's tension. The estimated force agreed with the applied trap force within 30% (when the tension was <0.1 pN) and 70% (when the tension was <1 pN). In addition, the paper presents concise asymptotic expressions for the mechanical compliance of a system consisting of a filament attached tangentially to bead handles (dumbbell system). The techniques described here can be used for noncontact estimates of polymers' and fibers' tension.
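In the tension-dominated textbook limit (bending stiffness and camera exposure effects neglected, unlike the paper's full model), the transverse variance of a filament pinned at both ends satisfies <y(x)^2> = kB*T*x*(L-x)/(F*L), which can be inverted for the tension:

```python
import numpy as np

KB_T = 4.11e-21  # thermal energy at ~25 C, joules

def tension_from_variance(var, x, L):
    """Estimate tension F (N) from the transverse displacement variance
    at position x along a filament of length L (SI units).

    Uses the tension-dominated, equilibrium result; the paper's model
    additionally accounts for finite camera exposure time.
    """
    return KB_T * x * (L - x) / (L * var)
```

For a 10 μm filament under 0.5 pN, the predicted midpoint variance is ~2e-14 m², i.e. fluctuations of order 0.14 μm, which is resolvable by a CCD camera.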
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
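The reprojection into perspective-free synthetic images amounts to applying a homography estimated from known as-built dimensions. A four-point direct linear transform can be sketched as below; this is a generic construction, not the authors' code.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (direct linear transform from four correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2-D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With the corners of a facade identified in a video frame and its true rectangular dimensions as `dst`, warping every frame through H yields the perspective-corrected synthetic images on which optical flow is run.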
4D Animation Reconstruction from Multi-Camera Coordinates Transformation
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Chou, C. M.
2016-06-01
Reservoir dredging is important for extending a reservoir's service life. The most effective and economical approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to build a cofferdam to hold back the water, construct the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will instead install an Elephant-trunk Steel Pipe (ETSP) underwater to connect to the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques were adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis coded targets were fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations were computed by space resection, from which the 3D coordinates of the coded targets were measured. Two approaches for computing the motion parameters are proposed: performing a 3D conformal transformation on the camera coordinates, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation achieves sub-mm simulation accuracy, while relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.
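The 3D conformal (seven-parameter Helmert) transformation mentioned above, scale + rotation + translation, has a well-known closed-form SVD solution (Umeyama's method). The sketch below is a generic implementation of that solution under assumed names, not the authors' code:

```python
import numpy as np

def conformal_3d(src, dst):
    """Closed-form 7-parameter (Helmert) fit dst ~ s*R@src + t via SVD (Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)                        # cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A**2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# synthetic check: recover a known scale, rotation and translation
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0,           0,          1]])
s_true, t_true = 2.0, np.array([1.0, -2.0, 3.0])
dst = s_true * src @ R_true.T + t_true
s, R, t = conformal_3d(src, dst)
```

Applied to the camera coordinates at successive epochs, the recovered (s, R, t) give the rigid-body motion parameters of the ETSP model between frames.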
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system including structural stereopsis on the front end and robust estimation in digital photogrammetry on the back end for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
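The epipolar geometry referred to above can, once point correspondences are available, be recovered with the classical eight-point algorithm for the fundamental matrix. This is a hedged sketch of that standard technique, not the paper's robust photogrammetric estimator; the two-view synthetic setup is an assumption for illustration.

```python
import numpy as np

def eight_point(x1, x2):
    """Unnormalized eight-point estimate of the fundamental matrix, rank-2 enforced."""
    A = np.column_stack([
        x2[:, 0]*x1[:, 0], x2[:, 0]*x1[:, 1], x2[:, 0],
        x2[:, 1]*x1[:, 0], x2[:, 1]*x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                 # null-space solution of x2^T F x1 = 0
    U, S, Vt = np.linalg.svd(F)              # project onto rank 2
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

# synthetic two-view setup: camera 1 at the origin, camera 2 rotated and translated
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 12), rng.uniform(-1, 1, 12), rng.uniform(4, 6, 12)])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
X2 = X @ R.T + np.array([1.0, 0.0, 0.0])
x1, x2 = X[:, :2] / X[:, 2:], X2[:, :2] / X2[:, 2:]   # normalized image points
F = eight_point(x1, x2)
x1h = np.column_stack([x1, np.ones(12)])
x2h = np.column_stack([x2, np.ones(12)])
residual = np.abs(np.einsum('ij,jk,ik->i', x2h, F, x1h)).max()
```

With noisy real correspondences one would add Hartley-style coordinate normalization and a robust (e.g. RANSAC) wrapper; here the noise-free data makes the plain solve exact.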
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, transitioning smoothly to continuous imaging at higher counting rates. The hope is thus to combine the good background-rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology, which have resulted in increased quantum efficiency, lower noise, and frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results.
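At low rates, the counting mode amounts to thresholding each frame and labeling connected above-threshold blobs as individual neutron hits. The sketch below shows that generic threshold-and-label step with a simple flood fill; it is an assumed illustration of the principle, not the project's actual processing chain, and the synthetic frame is fabricated for the test.

```python
import numpy as np

def count_events(frame, thresh):
    """Count connected above-threshold blobs (4-connectivity) as individual hits."""
    mask = frame > thresh
    seen = np.zeros(mask.shape, dtype=bool)
    n = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        n += 1                               # new event: flood-fill its pixels
        stack = [(i, j)]
        while stack:
            a, b = stack.pop()
            if not (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]):
                continue
            if not mask[a, b] or seen[a, b]:
                continue
            seen[a, b] = True
            stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return n

# synthetic frame: three well-separated scintillation spots over a dim background
frame = np.full((32, 32), 0.1)
frame[5:7, 5:7] = 10.0
frame[20, 20] = 10.0
frame[10:12, 25] = 10.0
n_events = count_events(frame, 1.0)
```

The smooth transition to imaging mode comes from the same data: once blobs start to overlap at high rates, one simply integrates the frames instead of counting them.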