Sample records for camera response function

  1. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions which estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.

  2. Ranking TEM cameras by their response to electron shot noise

    PubMed Central

    Grob, Patricia; Bean, Derek; Typke, Dieter; Li, Xueming; Nogales, Eva; Glaeser, Robert M.

    2013-01-01

    We demonstrate two ways in which the Fourier transforms of images that consist solely of randomly distributed electrons (shot noise) can be used to compare the relative performance of different electronic cameras. The principle is to determine how closely the Fourier transform of a given image does, or does not, approach that of an image produced by an ideal camera, i.e. one for which single-electron events are modeled as Kronecker delta functions located at the same pixels where the electrons were incident on the camera. Experimentally, the average width of the single-electron response is characterized by fitting a single Lorentzian function to the azimuthally averaged amplitude of the Fourier transform. The reciprocal of the spatial frequency at which the Lorentzian function falls to a value of 0.5 provides an estimate of the number of pixels at which the corresponding line-spread function falls to a value of 1/e. In addition, the excess noise due to stochastic variations in the magnitude of the response of the camera (for single-electron events) is characterized by the amount to which the appropriately normalized power spectrum does, or does not, exceed the total number of electrons in the image. These simple measurements provide an easy way to evaluate the relative performance of different cameras. To illustrate this point we present data for three different types of scintillator-coupled camera plus a silicon-pixel (direct detection) camera. PMID:23747527
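
    The Lorentzian-fit step described above is easy to prototype. The sketch below is illustrative only, assuming a single shot-noise frame in a hypothetical file shot_noise_frame.npy; with this parameterization the fitted width parameter is exactly the spatial frequency at which the Lorentzian falls to half its zero-frequency value.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def azimuthal_average(spectrum):
        """Average a centered 2-D Fourier amplitude over rings of constant radius."""
        ny, nx = spectrum.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
        sums = np.bincount(r.ravel(), weights=spectrum.ravel())
        counts = np.maximum(np.bincount(r.ravel()), 1)
        return sums / counts

    def lorentzian(f, amplitude, width):
        """Single Lorentzian in spatial frequency f; equals amplitude/2 at f = width."""
        return amplitude / (1.0 + (f / width) ** 2)

    # A shot-noise-only exposure (sparse single-electron events); hypothetical file.
    image = np.load("shot_noise_frame.npy")
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    profile = azimuthal_average(amp)
    freqs = np.arange(profile.size) / image.shape[0]   # cycles per pixel (approx.)

    popt, _ = curve_fit(lorentzian, freqs[1:], profile[1:], p0=[profile[1], 0.1])
    # Reciprocal of the half-amplitude frequency estimates the width (in pixels)
    # at which the corresponding line-spread function falls to 1/e.
    print("single-electron response width ~", 1.0 / popt[1], "pixels")
    ```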

  3. Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.

    PubMed

    Kopp, O; Markert, S; Tornow, R P

    2002-01-01

    To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the camera's integration and readout procedure.
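
    As a concrete illustration of the sensitivity and linearity computation, the sketch below (with made-up measurement values) fits a straight line to PV(I) and reports its slope and correlation coefficient.

    ```python
    import numpy as np

    # Measured pixel values PV at a series of known light intensities I.
    # Illustrative data; in practice these come from the camera under test.
    intensity = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])       # relative units
    pixel_value = np.array([2.0, 52.1, 101.8, 152.3, 201.9, 251.0])

    # Sensitivity = slope of the PV(I) function (least-squares line fit).
    slope, offset = np.polyfit(intensity, pixel_value, 1)

    # Linearity estimated from the correlation coefficient of PV vs. I.
    r = np.corrcoef(intensity, pixel_value)[0, 1]

    print(f"sensitivity = {slope:.1f} PV per unit intensity, linearity r = {r:.5f}")
    ```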

  4. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.

  5. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images, so that the panoramas reflect the objective luminance more faithfully; this overcomes the limitation of stitching methods that produce realistic-looking seams only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  6. A Transplantable Compensation Scheme for the Effect of the Radiance from the Interior of a Camera on the Accuracy of Temperature Measurement

    NASA Astrophysics Data System (ADS)

    Dong, Shidu; Yang, Xiaofan; He, Bo; Liu, Guojin

    2006-11-01

    Radiance coming from the interior of an uncooled infrared camera has a significant effect on the measured value of the temperature of the object. This paper presents a three-phase compensation scheme for coping with this effect. The first phase acquires the calibration data and forms the calibration function by least-squares fitting. Likewise, the second phase obtains the compensation data and builds the compensation function by fitting. With the aid of these functions, the third phase determines the temperature of the object of concern at any given ambient temperature. Acquiring the compensation data of a camera is known to be very time-consuming. For the purpose of obtaining the compensation data at a reasonable time cost, we propose a transplantable scheme. The idea of this scheme is to calculate the ratio between the central pixel's responsivity of the child camera to the radiance from the interior and that of the mother camera, and then to determine the compensation data of the child camera using this ratio and the compensation data of the mother camera. Experimental results show that both the child camera and the mother camera can measure the temperature of the object with an error of no more than 2°C.

  7. Directional Unfolded Source Term (DUST) for Compton Cameras.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean

    2018-03-01

    A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.

  8. Absolute calibration of optical streak cameras on picosecond time scales using supercontinuum generation

    DOE PAGES

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...

    2017-08-17

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at their fastest sweep speeds. A stable, pulsed white-light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is lower than the static sensitivity by up to an order of magnitude; we attribute this to the lower dynamic response of the phosphor.

  9. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning and AGC.

  10. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analyzed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, and the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
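
    The linear thermal model is straightforward to fit once per-image focal lengths and focal-plane temperatures are available. The sketch below uses illustrative numbers chosen to match the reported slope of roughly 0.0107 mm per degree; it is not MESSENGER flight data.

    ```python
    import numpy as np

    # Focal lengths (mm) estimated from star-field calibration frames, with the
    # focal-plane temperature (deg C) recorded for each frame. Values are illustrative.
    temperature = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])
    focal_length = np.array([549.71, 549.82, 549.93, 550.03, 550.14])

    # Fit the linear thermal model f(T) = A0 + A1 * T by least squares.
    A1, A0 = np.polyfit(temperature, focal_length, 1)
    print(f"f(T) = {A0:.3f} mm + {A1:.5f} mm/degC * T")

    # Predict the focal length at an observed focal-plane temperature:
    print("f(25 degC) =", A0 + A1 * 25.0, "mm")
    ```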

  11. A spectral reflectance estimation technique using multispectral data from the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique is formulated for constructing spectral reflectance curve estimates from multispectral data obtained with the Viking lander camera. The multispectral data are limited to six spectral channels in the wavelength range from 0.4 to 1.1 micrometers, and most of these channels exhibit appreciable out-of-band response. The output of each channel is expressed as a linear (integral) function of the (known) solar irradiance, atmospheric transmittance, and camera spectral responsivity, and the (unknown) spectral reflectance. This produces six equations which are used to determine the coefficients in a representation of the spectral reflectance as a linear combination of known basis functions. Natural cubic spline reflectance estimates are produced for a variety of materials that can reasonably be expected to occur on Mars. In each case the dominant reflectance features are accurately reproduced, but small-period features are lost due to the limited number of channels. This technique may be a valuable aid in selecting the number of spectral channels and their responsivity shapes when designing a multispectral imaging system.
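
    The estimation step reduces to a small linear system. The sketch below is a toy version with Gaussian channel responsivities and triangular basis functions standing in for the paper's natural cubic splines; all spectra and channel outputs are illustrative placeholders.

    ```python
    import numpy as np

    # Wavelength grid (micrometers) spanning the camera's 0.4-1.1 um range.
    wl = np.linspace(0.4, 1.1, 141)

    # Assumed known quantities (illustrative placeholders):
    # irradiance_transmittance: solar irradiance times atmospheric transmittance.
    # channel_resp: (6, len(wl)) spectral responsivities, incl. out-of-band response.
    irradiance_transmittance = np.ones_like(wl)
    channel_resp = np.stack([np.exp(-0.5 * ((wl - c) / 0.06) ** 2)
                             for c in (0.45, 0.55, 0.65, 0.75, 0.90, 1.05)])

    # Six basis functions for the reflectance expansion r(wl) = sum_j c_j B_j(wl).
    basis = np.stack([np.maximum(0.0, 1 - np.abs((wl - c) / 0.15))
                      for c in (0.45, 0.55, 0.65, 0.75, 0.90, 1.05)])

    # Each channel output is a linear (integral) functional of the reflectance,
    # so the model reduces to d = M c with M_ij = integral(S * R_i * B_j).
    M = np.trapz(channel_resp[:, None, :] * basis[None, :, :]
                 * irradiance_transmittance, wl, axis=2)

    d = np.array([0.21, 0.33, 0.40, 0.44, 0.47, 0.45])  # measured channel outputs
    coeffs = np.linalg.solve(M, d)                      # six equations, six unknowns
    reflectance_estimate = coeffs @ basis               # r(wl) on the grid
    ```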

  13. In-flight performance of the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Greenfield, P.; Paresce, F.; Baxter, D.; Hodge, P.; Hook, R.; Jakobsen, P.; Jedrzejewski, R.; Nota, A.; Sparks, W. B.; Towers, N.

    1991-01-01

    An overview of the Faint Object Camera and its performance to date is presented. In particular, the detector's efficiency, the spatial uniformity of response, distortion characteristics, detector and sky background, detector linearity, spectrography, and operation are discussed. The effect of the severe spherical aberration of the telescope's primary mirror on the camera's point spread function is reviewed, as well as the impact it has on the camera's general performance. The scientific implications of the performance and the spherical aberration are outlined, with emphasis on possible remedies for spherical aberration, hardware remedies, and stellar population studies.

  14. Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inanici, Mehlika; Galvin, Jim

    2004-12-30

    The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software, and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
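
    A minimal sketch of the fusion step follows, assuming the response has already been reduced to a 256-entry lookup table g mapping pixel value to log exposure (as a Debevec-style calibration such as Photosphere's would provide); the hat weighting that suppresses near-clipped pixels is a common choice, not necessarily the one used by that software.

    ```python
    import numpy as np

    def radiance_map(images, exposure_times, g):
        """Fuse exposure-bracketed 8-bit frames into a log-radiance map, given a
        256-entry response table g mapping pixel value -> log exposure. A hat
        function downweights near-clipped pixels."""
        z = np.stack(images).astype(int)                  # (n_frames, H, W)
        w = np.minimum(z, 255 - z).astype(float)          # hat weighting
        log_t = np.log(np.asarray(exposure_times, dtype=float))[:, None, None]
        num = np.sum(w * (g[z] - log_t), axis=0)
        den = np.maximum(np.sum(w, axis=0), 1e-6)
        return num / den                                  # ln(luminance), up to scale
    ```

    A per-pixel calibration factor (from a luminance meter reading of a reference patch) would then convert the relative radiance exp(result) into absolute luminance.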

  15. Field camera measurements of gradient and shim impulse responses using frequency sweeps.

    PubMed

    Vannesjo, S Johanna; Dietrich, Benjamin E; Pavan, Matteo; Brunner, David O; Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2014-08-01

    Applications of dynamic shimming require high field fidelity, and characterizing the shim field dynamics is therefore necessary. Modeling the system as linear and time-invariant, this work aimed to measure the impulse response function with optimal sensitivity. Frequency-swept pulses as inputs are analyzed theoretically, showing that the sweep speed is a key factor for the measurement sensitivity. By adjusting the sweep speed it is possible to achieve any prescribed noise profile in the measured system response. Impulse response functions were obtained for the third-order shim system of a 7 Tesla whole-body MR scanner. Measurements of the shim fields were made with a dynamic field camera, also yielding cross-term responses. The measured shim impulse response functions revealed system characteristics such as response bandwidth, eddy currents and specific resonances, possibly of mechanical origin. Field predictions based on the shim characterization were shown to agree well with directly measured fields, also in the cross-terms. Frequency sweeps provide a flexible tool for shim or gradient system characterization. This may prove useful for applications involving dynamic shimming by yielding accurate estimates of the shim fields and a basis for setting shim pre-emphasis. Copyright © 2013 Wiley Periodicals, Inc.
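
    Under the linear time-invariant assumption, the impulse response follows from dividing the spectrum of the field measured by the field camera by the spectrum of the swept input. The sketch below is a generic regularized deconvolution, not the authors' exact estimator.

    ```python
    import numpy as np

    def impulse_response(input_sweep, measured_field, dt, eps=1e-6):
        """Estimate an LTI impulse response from a frequency-swept input and the
        measured field response, by spectral division H(f) = Y(f)/X(f).
        `eps` regularizes frequencies where the sweep carries little energy."""
        X = np.fft.rfft(input_sweep)
        Y = np.fft.rfft(measured_field)
        H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # regularized deconvolution
        h = np.fft.irfft(H, n=len(input_sweep))       # time-domain impulse response
        freqs = np.fft.rfftfreq(len(input_sweep), dt)
        return freqs, H, h
    ```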

  16. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is 1, the negative log-likelihood function is convex; the optimal solution can therefore be found by convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its products with a vector through the Hessian matrix. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
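
    For the threshold-1 case the per-pixel likelihood has a closed-form maximizer, which makes the convexity result easy to see. The sketch below assumes independent Poisson photon counts across a stack of binary exposures; it is a simplified i.i.d. version of the reconstruction, without the paper's filter-bank machinery for spatial oversampling.

    ```python
    import numpy as np

    def mle_intensity(binary_frames):
        """Per-pixel maximum-likelihood intensity estimate from a stack of binary
        frames (shape (K, H, W)), for threshold T = 1: a pixel fires iff at least
        one photon arrives. With Poisson counts, P(fire) = 1 - exp(-lam), and
        minimizing the convex negative log-likelihood gives the closed form below."""
        k = binary_frames.shape[0]                 # number of binary exposures
        n1 = binary_frames.sum(axis=0)             # fires per pixel
        p = np.clip(n1 / k, 0.0, 1.0 - 1e-9)       # avoid log(0) at saturation
        return -np.log1p(-p)                       # lam_hat = -ln(1 - n1/k)
    ```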

  17. New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo

    1994-11-01

    An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the super-HARP camera is very effective for underwater use.

  18. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  19. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    NASA Astrophysics Data System (ADS)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners, and includes the Touch-And-Go Camera System (TAGCAMS), a three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB, in order to characterize the cameras' in-flight performance. One specific area of investigation was bad pixel mapping: a recent phase of the mission, the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable”, possibly under-responsive, pixels using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. These analyses will provide a broader understanding of the camera system's functionality, which will in turn aid the fly-down to the asteroid by allowing selection of a suitable landing and sampling location.

  20. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Currently, high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: it fails to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, image sequences with different exposures are captured with the camera array, and a derivative optical flow method based on color gradients is used to estimate the deviation between images and align them. Then, a high dynamic range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.

  1. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  2. Radiometric calibration of wide-field camera system with an application in astronomy

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in astronomical imaging applications. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
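
    For reference, OpenCV ships a Debevec-style CRF estimator of the kind surveyed here. The sketch below shows the baseline pipeline on a bracketed exposure stack (file names and exposure times are illustrative); the astronomical-data optimizations proposed in the paper are not included.

    ```python
    import cv2
    import numpy as np

    # Exposure-bracketed frames of a static scene (file names are illustrative).
    files = ["t_0005.jpg", "t_0020.jpg", "t_0080.jpg", "t_0320.jpg"]
    images = [cv2.imread(f) for f in files]
    times = np.array([0.005, 0.02, 0.08, 0.32], dtype=np.float32)

    # Debevec-style CRF estimation as implemented in OpenCV.
    calibrate = cv2.createCalibrateDebevec()
    crf = calibrate.process(images, times)          # (256, 1, 3) response curve

    # The recovered CRF can then drive HDR radiance-map reconstruction:
    merge = cv2.createMergeDebevec()
    hdr = merge.process(images, times, crf)
    ```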

  3. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  4. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    PubMed

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We have experimentally identified how image features (intensity and high-frequency content) affect the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that yield reliable PRNU and comparatively lower weights to regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
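
    The PRNU pipeline can be sketched as follows. A Gaussian filter stands in for the wavelet de-noising filters normally used, and the weighting here is only a placeholder for the image-feature-dependent weights the paper derives.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(image, sigma=1.0):
        """PRNU-bearing residual: original minus de-noised image. A Gaussian
        filter stands in for the wavelet de-noising filters typically used."""
        return image - gaussian_filter(image, sigma)

    def fingerprint(flat_images):
        """Camera fingerprint: average residual over many (ideally flat) images."""
        return np.mean([noise_residual(im.astype(float)) for im in flat_images],
                       axis=0)

    def weighted_correlation(residual, fp, weights):
        """Correlation between a query residual and a fingerprint, with per-pixel
        weights that downweight regions (e.g. saturated or highly textured) whose
        PRNU is unreliable -- the kind of weighting the paper proposes."""
        r = (residual - residual.mean()) * weights
        f = (fp - fp.mean()) * weights
        return np.sum(r * f) / np.sqrt(np.sum(r * r) * np.sum(f * f))
    ```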

  5. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  6. Human-Robot Emergency Response - Experimental Platform and Preliminary Dataset

    DTIC Science & Technology

    2014-07-28

    Only a fragment of this record is available: the excerpt describes human tracking using the back-projection and CamShift functions in OpenCV [13], where for each image obtained from the cameras a back projection of a histogram model of a human is first calculated.
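
    The back-projection/CamShift combination referenced in the excerpt is standard OpenCV usage, sketched below for a single frame (file names and the initial search window are illustrative).

    ```python
    import cv2
    import numpy as np

    # Hue histogram of the person, built once from a manually selected region.
    roi = cv2.imread("person_roi.png")                   # illustrative file name
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    frame = cv2.imread("camera_frame.png")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Back projection: per-pixel likelihood that the pixel belongs to the model.
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    # CamShift shifts/scales an initial window to the mode of the back projection.
    window = (50, 50, 80, 200)                           # x, y, w, h initial guess
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    track_box, window = cv2.CamShift(backproj, window, criteria)
    ```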

  7. Optimized lighting method of applying shaped-function signal for increasing the dynamic range of LED-multispectral imaging system

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling

    2018-02-01

    This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light emitting diode (LED)-based multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity region of the camera is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under the active light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in LED-based multispectral imaging was taken as an example. Experiments proved that both the gray-scale resolution and the accuracy of the information in the images acquired by the proposed method were significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.

  9. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTFs) and noise power spectra (NPS); it is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity at different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200; only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor; the captured image is color-matrix preprocessed, Fourier transformed, and then post-processed. For the NPS, a uniform image is displayed on the monitor and the image is likewise pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.

  10. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because of the limited approximation capability of fixed basis functions, the performance of reflectance estimation obtained by traditional linear models is not optimal. We propose an approach based on a regularized local linear model. Our approach performs efficiently, and knowledge of the spectral power distribution of the illuminant and the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
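
    One plausible reading of a regularized local linear model is ridge regression fitted on the nearest training samples in camera-response space, as sketched below; the paper's exact formulation may differ.

    ```python
    import numpy as np

    def estimate_reflectance(query, train_resp, train_refl, k=30, lam=1e-3):
        """Local linear reflectance estimate: take the k training samples whose
        camera responses are closest to the query, then fit a ridge-regularized
        affine map from responses to spectra on that neighborhood only.
        query: (n_channels,); train_resp: (N, n_channels); train_refl: (N, n_wl)."""
        d = np.linalg.norm(train_resp - query, axis=1)
        idx = np.argsort(d)[:k]
        X = np.hstack([train_resp[idx], np.ones((k, 1))])   # affine local model
        Y = train_refl[idx]
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
        return np.append(query, 1.0) @ W
    ```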

  11. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were established, along with procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  12. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  13. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.

  14. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  15. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat panel detector, two CCD cameras and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise per pixel, as a function of the exposure. All systems have a linear response with respect to exposure, and with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with both CCD cameras or with the electron-beam-tube-based video camera. This is expected, because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast detail phantom, located at the iso-center. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron-beam-tube-based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank order filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper'-type noise, which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).

  16. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    NASA Astrophysics Data System (ADS)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the suggestion, realization and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the still-camera automatic working regime and uses a static two-dimensional spatially continuous light-reflection random target with white-noise properties. The theoretical basis for the random-target method is also presented, exploiting the proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed on the basis of MATLAB 6.5. The presented examples and other results of the performed measurements demonstrate sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.

  17. Portal imaging with flat-panel detector and CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

    1997-07-01

    This paper provides a comparison of imaging parameters of two portal imaging systems at 6 MV: a flat panel detector and a CCD-camera-based portal imaging system. Measurements were made of the signal and noise, and consequently of signal-to-noise per pixel, as a function of the exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with the CCD-camera-based portal imaging system. This is expected, because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The paper also presents data on the screen's photon gain (the number of light photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuations in the screen's photon gain). Images of a Las Vegas-type aluminum contrast detail phantom, located at the iso-center, were generated at an exposure of 1 MU. The CCD-camera-based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth, while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper'-type noise, which interferes severely with attempts to determine accurate estimates of the image noise.

  18. Impact of intense x-ray pulses on a NaI(Tl)-based gamma camera

    NASA Astrophysics Data System (ADS)

    Koppert, W. J. C.; van der Velden, S.; Steenbergen, J. H. L.; de Jong, H. W. A. M.

    2018-03-01

    In SPECT/CT systems, x-ray and γ-ray imaging is performed sequentially. Simultaneous acquisition may have advantages, for instance in interventional settings; however, this may expose a gamma camera to relatively high x-ray doses and deteriorate its functioning. We studied the NaI(Tl) response to x-ray pulses with a photodiode, a PMT and a gamma camera, respectively. First, we exposed a NaI(Tl)-photodiode assembly to x-ray pulses to investigate potential crystal afterglow. Next, we exposed a NaI(Tl)-PMT assembly to 10 ms LED pulses (mimicking x-ray pulses) and measured the response to flashing LED probe pulses (mimicking γ-pulses). We then exposed the assembly to x-ray pulses, with detector entrance doses of up to 9 nGy/pulse, and analysed the response for γ-pulse variations. Finally, we studied the response of a Siemens Diacam gamma camera to γ-rays while exposed to x-ray pulses. X-ray exposure of the crystal, read out with a photodiode, revealed a 15% afterglow fraction after 3 ms. The NaI(Tl)-PMT assembly showed disturbances up to 10 ms after 10 ms LED exposure. After x-ray exposure, however, responses showed elevated baselines with a 60 ms decay time. Both for x-ray and LED exposure, and after baseline subtraction, probe-pulse analysis revealed disturbed pulse height measurements shortly after exposure. X-ray exposure of the Diacam corroborated the elementary experiments: up to 50 ms after an x-ray pulse, no events are registered, followed by apparent energy elevations up to 100 ms after exposure. Limiting the dose to 0.02 nGy/pulse prevents these detrimental effects. Conventional gamma cameras thus exhibit substantial dead time and mis-registration of photon energies up to 100 ms after intense x-ray pulses, due to PMT limitations and to afterglow in the crystal. Using PMTs with modified circuitry, we show that deteriorative afterglow effects can be reduced without noticeable effects on the PMT performance, up to x-ray pulse doses of 1 nGy.

  19. Camera characterization for all-sky polarization measurements during the 2017 solar eclipse

    NASA Astrophysics Data System (ADS)

    Hashimoto, Taiga; Dahl, Laura M.; Laurie, Seth A.; Shaw, Joseph A.

    2017-08-01

    A solar eclipse provides a rare opportunity to observe skylight polarization during conditions that are fundamentally different than what we see every day. On 21 August 2017 we will measure the skylight polarization during a total solar eclipse in Rexburg, Idaho, USA. Previous research has shown that during totality the sky polarization pattern is altered significantly to become nominally symmetric about the zenith. However, there are still questions remaining about the details of how surface reflectance near the eclipse observation site and optical properties of aerosols in the atmosphere influence the totality sky polarization pattern. We will study how skylight polarization in a solar eclipse changes through each phase and how surface and atmospheric features affect the measured polarization signatures. To accomplish this, fully characterizing the cameras and fisheye lenses is critical. This paper reports measurements that include finding the camera sensitivity and its relationship to the required short exposure times, measuring the camera's spectral response function, mapping the angles of each camera pixel with the fisheye lens, and taking test measurements during daytime and twilight conditions. The daytime polarimetric images were compared to images from an existing all-sky polarization imager and a polarimetric radiative transfer model.

  20. Effect of indocyanine green angiography using infrared fundus camera on subsequent dark adaptation and electroretinogram.

    PubMed

    Wen, Feng; Yu, Minzhong; Wu, Dezheng; Ma, Juanmei; Wu, Lezheng

    2002-07-01

    To observe the effect of indocyanine green angiography (ICGA) with infrared fundus camera on subsequent dark adaptation and the Ganzfeld electroretinogram (ERG), the ERGs of 38 eyes with different retinal diseases were recorded before and after ICGA during a 40-min dark adaptation period. ICGA was performed with Topcon 50IA retina camera. Ganzfeld ERG was recorded with Neuropack II evoked response recorder. The results showed that ICGA did not affect the latencies and the amplitudes in ERG of rod response, cone response and mixed maximum response (p>0.05). It suggests that ICGA using infrared fundus camera could be performed prior to the recording of the Ganzfeld ERG.

  1. A Planar Two-Dimensional Superconducting Bolometer Array for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Staguhn, Johannes G.; Chervenak, James A.; Chen, Tina C.; Moseley, S. Harvey; Wollack, Edward J.; Devlin, Mark J.; Dicker, Simon R.; Supanich, Mark

    2004-01-01

    In order to provide high sensitivity rapid imaging at 3.3 mm (90 GHz) for the Green Bank Telescope - the world's largest steerable aperture - a camera is being built by the University of Pennsylvania, NASA/GSFC, and NRAO. The heart of this camera is an 8x8 close-packed, Nyquist-sampled detector array. We have designed and are fabricating a functional superconducting bolometer array system using a monolithic planar architecture. Read out by SQUID multiplexers, the superconducting transition edge sensors will provide fast, linear, sensitive response for high performance imaging. This will provide the first ever superconducting bolometer array on a facility instrument.

  2. Study of plant phototropic responses to different LEDs illumination in microgravity

    NASA Astrophysics Data System (ADS)

    Zyablova, Natalya; Berkovich, Yuliy A.; Skripnikov, Alexander; Nikitin, Vladimir

    2012-07-01

    The purpose of the experiment planned for the Russian BION-M #1 (2012) biosatellite is to study the phototropic responses of Physcomitrella patens (Hedw.) B.S.G. to different light stimuli in microgravity. The moss was chosen as a small-size higher plant. The experimental design involves five lightproof culture flasks with moss gametophores fixed inside a cylindrical container (diameter 120 mm; height 240 mm). The plants in each flask are illuminated laterally by one of the following LEDs: white, blue (475 nm), red (625 nm), far red (730 nm), or infrared (950 nm). Gametophore growth and bending are captured periodically by means of five analogue video cameras and a recorder. The programmable command module controls the power supply of each camera and each light source, the commutation of the cameras, and the functioning of the video recorder. Every 20 minutes the recorder is sequentially connected to one of the cameras, resulting in a clip containing 5 sets of frames in a row. After landing, time-lapse films are automatically created; as a result we will have five time-lapse films covering the transformations in each of the five culture flasks. On-ground experiments demonstrated that white light induced stronger phototropic bending of the gametophores than red and blue stimuli. The comparison of time-lapse recordings in the experiments will provide useful information for optimizing lighting assemblies for space plant growth facilities.

  3. High Dynamic Range Imaging Using Multiple Exposures

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of the method, which outperforms other methods in terms of imaging quality.
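
    As an illustration of the fusion step, the following is a minimal sketch of weighted radiance-map merging, assuming the CRF has already been recovered and inverted. The function name `merge_hdr`, the hat-shaped weighting, and the 8-bit input range are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def merge_hdr(images, exposure_times, inv_crf):
    """Fuse LDR exposures into an HDR radiance map (illustrative sketch).

    images: list of uint8 arrays with identical shape
    exposure_times: exposure times in seconds, one per image
    inv_crf: callable mapping pixel values in [0, 255] to relative irradiance
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weights = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # Hat weighting de-emphasizes under- and over-exposed pixels.
        w = 1.0 - 2.0 * np.abs(img.astype(np.float64) / 255.0 - 0.5)
        acc += w * inv_crf(img) / t
        weights += w
    return acc / np.maximum(weights, 1e-8)
```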

  4. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  5. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  6. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  7. Response, Emergency Staging, Communications, Uniform Management, and Evacuation (R.E.S.C.U.M.E.) : report on functional and performance requirements, and high-level data and communication needs.

    DOT National Transportation Integrated Search

    1995-06-01

    Intelligent Vehicle Initiative (IVI) abstract: The goal of the TravTek camera car study was to furnish a detailed evaluation of driving and navigation performance, system usability, and safety for the TravTek system. To achieve this goal, an instrume...

  8. Structural Dynamics Analysis and Research for FEA Modeling Method of a Light High Resolution CCD Camera

    NASA Astrophysics Data System (ADS)

    Sun, Jiwen; Wei, Ling; Fu, Danying

    2002-01-01

    The camera features high resolution and a wide swath. In order to ensure that its high optical precision survives the rigorous dynamic loads of launch, the structure must be of high rigidity; therefore, a careful study of the dynamic features of the camera structure was performed. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamics of the camera were then analyzed, for the first time in China, with the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analyses of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, in order to establish the most reasonable general model; 2) modal analyses of the camera for several cases, yielding the inherent frequencies and mode shapes and confirming the rationality of the structural design; 3) static analysis of the camera under self-gravity and overloads, yielding the corresponding deformation and stress distributions; and 4) response calculations for sinusoidal vibration of the camera, yielding the corresponding response curves and the maximum acceleration responses with their frequencies. The FEA modeling technique proved accurate and efficient. Based on the sensitivity results, the dynamic design and engineering optimization of the critical structure of the camera are discussed, providing fundamental technology for the design of forthcoming space optical instruments.

  9. High dynamic range CMOS (HDRC) imagers for safety systems

    NASA Astrophysics Data System (ADS)

    Strobel, Markus; Döttling, Dietmar

    2013-04-01

    The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The key property of high dynamic range (HDR) image acquisition is detailed through mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters are discussed, including the pixel design for the global shutter readout. The second part outlines the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors, SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring, enabling new and more flexible solutions compared to existing safety guards.

  10. Using hacked point and shoot cameras for time-lapse snow cover monitoring in an Alpine valley

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.; Diebold, M.; Mutzner, R.; Golay, J. R.; Parlange, M. B.

    2012-04-01

    In Alpine environments, monitoring snow cover is essential to gain insight into hydrological processes and the water balance. Although measurement techniques based on LIDAR are available, their cost is often a restricting factor. In this research, an experiment was performed using a distributed array of cheap consumer cameras to study the spatio-temporal evolution of the snowpack. Two experiments are planned. The first involves the measurement of aeolian snow transport around a hill, to validate a snow saltation model. The second monitors snowmelt during the melting season, which can then be combined with data from a wireless network of meteorological stations and discharge measurements at the outlet of the catchment. The poster describes the hardware and software setup, based on an external timer circuit and CHDK, the Canon Hack Development Kit. The latter is a flexible and actively developed software package, released under a GPL license. It was developed by hackers who reverse-engineered the camera firmware and added extra functionality such as raw image output, fuller control of the camera, external triggering, motion detection, and scripting. These features make it a great tool for the geosciences. Other possible applications include aerial stereo photography and monitoring vegetation response. We are interested in sharing experiences and brainstorming about new applications. Bring your camera!

  11. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1993-01-01

    This investigation evaluated the voice-commanded camera control concept. For this particular task, total voice control of continuous and discrete camera functions was significantly slower than manual control. There was no significant difference between voice and manual input for several types of errors, and there was no clear trend in subjective preference of camera command input modality. Task performance, in terms of both accuracy and speed, was very similar across both levels of experience.

  12. The electromagnetic interference of mobile phones on the function of a γ-camera.

    PubMed

    Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid

    2014-03-01

    The aim of the present study was to evaluate whether or not the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of Tc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera per mobile phone ranged from 0% to 100%. EMI was mainly observed in the first seconds of ringing and was mitigated in the following frames. Mobile phones are portable sources of electromagnetic radiation, and they have the potential to interfere with the function of SPECT γ-cameras, leading to adverse effects on the quality of the acquired images.

  13. Cinematic camera emulation using two-dimensional color transforms

    NASA Astrophysics Data System (ADS)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, involving iterative adjustments of the set, actors, lighting, and camera configuration. Instead of using the professional motion capture device to establish a particular look, the use of a smaller-form-factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics of the two camera systems differ, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
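
    For context, the baseline 3x3 matrix transform that the 2D method is compared against can be fit by least squares over corresponding patch responses from the two cameras. This is a hedged sketch with assumed variable names, not the authors' two-dimensional formalism.

```python
import numpy as np

def fit_matrix_emulation(src_rgb, dst_rgb):
    """Fit a 3x3 matrix M so that dst_rgb ~ src_rgb @ M.T (least squares).

    src_rgb, dst_rgb: (N, 3) arrays of corresponding camera responses,
    e.g. mean raw RGB values of color-chart patches from each camera.
    """
    M, *_ = np.linalg.lstsq(src_rgb, dst_rgb, rcond=None)
    return M.T  # apply per pixel as M @ [r, g, b]
```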

  14. Image quality evaluation of color displays using a Foveon color camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro

    2007-03-01

    This paper presents preliminary data on the use of a color camera for the evaluation of Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with that of a monochrome LCD. The color camera is a C-MOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, even though it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response were very close to those found with the CS-200; only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as the noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency seems lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding the MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.

  15. SOFIA tracking image simulation

    NASA Astrophysics Data System (ADS)

    Taylor, Charles R.; Gross, Michael A. K.

    2016-09-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) tracking camera simulator is a component of the Telescope Assembly Simulator (TASim). TASim is a software simulation of the telescope optics, mounting, and control software. Currently in its fifth major version, TASim is relied upon for telescope operator training, mission planning and rehearsal, and mission control and science instrument software development and testing. TASim has recently been extended for hardware-in-the-loop operation in support of telescope and camera hardware development and control and tracking software improvements. All three SOFIA optical tracking cameras are simulated, including the Focal Plane Imager (FPI), which has recently been upgraded to the status of a science instrument that can be used on its own or in parallel with one of the seven infrared science instruments. The simulation includes tracking camera image simulation of starfields based on the UCAC4 catalog at real-time rates of 4-20 frames per second. For its role in training and planning, it is important for the tracker image simulation to provide images with a realistic appearance and response to changes in operating parameters. For its role in tracker software improvements, it is vital to have realistic signal and noise levels and precise star positions. The design of the software simulation for precise subpixel starfield rendering (including radial distortion), realistic point-spread function as a function of focus, tilt, and collimation, and streaking due to telescope motion will be described. The calibration of the simulation for light sensitivity, dark and bias signal, and noise will also be presented.

  16. Measurement of the timing behaviour of off-the-shelf cameras

    NASA Astrophysics Data System (ADS)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity of O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
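
    The overlap measurement itself reduces to summing pixel values as a function of the flash delay. Below is a minimal sketch, assuming a dictionary of recorded frames keyed by delay; the edges of the resulting normalized curve mark the trigger delay and the end of exposure. Names and data layout are assumptions for illustration.

```python
import numpy as np

def overlap_curve(frames):
    """Normalized summed pixel response versus light-source delay.

    frames: dict mapping delay in seconds -> recorded frame (2-D array)
    Returns sorted delays and the normalized overlap function.
    """
    delays = np.array(sorted(frames))
    s = np.array([frames[d].sum() for d in delays], dtype=np.float64)
    s -= s.min()
    return delays, s / s.max()
```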

  17. Selecting the right digital camera for telemedicine-choice for 2009.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  18. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  19. Performance of cardiac cadmium-zinc-telluride gamma camera imaging in coronary artery disease: a review from the cardiovascular committee of the European Association of Nuclear Medicine (EANM).

    PubMed

    Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus

    2016-12-01

    The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used, in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity than conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in lower patient radiation exposure compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature on procedures and clinical data for CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and 3) to determine the impact of CZT on radiation exposure.
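
    For orientation, the MLEM update mentioned above has the standard multiplicative form x_{k+1} = x_k * A^T(y / A x_k) / (A^T 1). The sketch below is a generic textbook implementation with a dense system matrix, not the dedicated three-dimensional algorithm of the CZT vendors.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Generic MLEM reconstruction for emission tomography.

    A: (M, N) system matrix (detector response, incl. collimator model)
    y: (M,) measured counts
    Returns the (N,) reconstructed activity estimate.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)           # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x
```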

  20. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a 2-GHz CPU, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  1. The effects of self-focused attention, performance demand, and dispositional sexual self-consciousness on sexual arousal of sexually functional and dysfunctional men.

    PubMed

    van Lankveld, Jacques J D M; van den Hout, Marcel A; Schouten, Erik G W

    2004-08-01

    Sexually functional (N=26) and sexually dysfunctional heterosexual men with psychogenic erectile disorder (N=23) viewed two sexually explicit videos. Performance demand was manipulated through verbal instruction that a substantial genital response was to be expected from the videos. Self-focused attention was manipulated by introducing a camera pointed at the participant. Dispositional self-consciousness was assessed by questionnaire. Performance demand was found to independently inhibit the genital response. No main effect of self-focus was found. Self-focus inhibited genital response in men scoring high on general and sexual self-consciousness traits, whereas it enhanced penile tumescence in low self-conscious men. Inhibition effects were found in both volunteers and patients. No interaction effects of performance demand and self-focus were found. Subjective sexual arousal in sexually functional men was highest in the self-focus condition. In sexually dysfunctional men, subjective sexual response proved dependent on locus of attention as well as presentation order.

  2. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  3. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  4. Astronomical Polarimetry with the RIT Polarization Imaging Camera

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry V.; Ninkov, Zoran; Brock, Neal

    2018-06-01

    In the last decade, imaging polarimeters based on micropolarizer arrays have been developed for use in terrestrial remote sensing and metrology applications. Micropolarizer-based sensors are dramatically smaller and more mechanically robust than other polarimeters with similar spectral response and snapshot capability. To determine the suitability of these new polarimeters for astronomical applications, we developed the RIT Polarization Imaging Camera (RITPIC) to investigate the performance of these devices, with special attention to the low signal-to-noise regime. We characterized the device performance in the lab by determining the relative throughput, efficiency, and orientation of every pixel as a function of wavelength. Using the resulting pixel response model, we developed demodulation procedures for aperture photometry and imaging polarimetry observing modes. We found that, using the current calibration, RITPIC is capable of detecting polarization signals as small as ∼0.3%. The relative ease of data collection, calibration, and analysis provided by these sensors suggests that they may become an important tool for a number of astronomical targets.

  5. Improved spatial resolution of luminescence images acquired with a silicon line scanning camera

    NASA Astrophysics Data System (ADS)

    Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.

    2018-04-01

    Luminescence imaging is currently used to provide spatially resolved defect information in high-volume silicon solar cell production. One option for obtaining the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function, extending methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, enabling effective detection of spatially resolved defects in manufacturing.
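
    Once the point spread function has been measured and normalized, a common way to apply such a correction is frequency-domain (Wiener) deconvolution. The snippet below is a generic sketch with an assumed regularization constant; it stands in for, rather than reproduces, the authors' exact pipeline.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """FFT-based Wiener deconvolution with a measured PSF.

    image: 2-D luminescence image
    psf:   2-D PSF, same shape as the image, centred, normalized to sum to 1
    nsr:   scalar standing in for the noise-to-signal power ratio
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```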

  6. Performance measurement of commercial electronic still picture cameras

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te

    1998-06-01

    Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed-pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal of a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
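
    As a simple illustration of reading dynamic range off a measured OECF, the sketch below excludes patches lost in the noise floor or clipped at full scale and reports the remaining luminance ratio in stops. The thresholds and variable names are assumptions, not values from the paper.

```python
import numpy as np

def dynamic_range_stops(luminances, pixel_values, noise_floor=2.0, clip=253.0):
    """Estimate usable dynamic range (in stops) from an OECF measurement.

    luminances:   test-chart patch luminances (cd/m^2)
    pixel_values: mean 8-bit camera output per patch
    """
    lum = np.asarray(luminances, dtype=float)
    pv = np.asarray(pixel_values, dtype=float)
    usable = (pv > noise_floor) & (pv < clip)  # neither noise- nor clip-limited
    return np.log2(lum[usable].max() / lum[usable].min())
```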

  7. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, creating a barrier for institutes with limited funding and thereby hampering progress. An assessment is made of whether a low-cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038

  8. Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

    NASA Astrophysics Data System (ADS)

    Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Occlusion is one of the most difficult challenges in visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target, taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations of all targets being tracked. When it cannot adequately observe a target, the smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for that target is carried out with the assistance of that camera. In our framework, only the positions of the persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that it considerably improves the accuracy of a baseline tracker and outperforms state-of-the-art trackers.
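
    The per-target camera assignment can be pictured with a toy energy of the kind described above. The terms, weights, and function names here are invented for illustration and are far simpler than the paper's extensible energy function.

```python
def observation_energy(visible_fraction, distance, w_occ=1.0, w_dist=0.1):
    """Toy per-target observation energy: lower is better (illustrative only)."""
    return w_occ * (1.0 - visible_fraction) + w_dist * distance

def best_camera(energies):
    """Pick the camera id with the lowest observation energy for a target.

    energies: dict mapping camera id -> observation energy
    """
    return min(energies, key=energies.get)
```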

  9. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    PubMed

    Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (e.g., radar, pixel change) to reduce the occurrence of false-positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false-positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.

  11. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  12. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  13. Time-resolved measurements in diffuse reflectance: effects of the instrument response function of different detection systems on the depth sensitivity

    NASA Astrophysics Data System (ADS)

    Puszka, Agathe; Planat-Chrétien, Anne; Berger, Michel; Hervé, Lionel; Dinten, Jean-Marc

    2014-02-01

    We demonstrate the loss of depth sensitivity induced by the instrument response function in reflectance time-resolved diffuse optical tomography through a comparison of three detection systems: on the one hand, a photomultiplier tube (PMT) and a hybrid PMT, each coupled with a time-correlated single-photon counting card, and on the other hand, a high-rate intensified camera. We experimentally evaluate the depth sensitivity achieved with each detection module using an absorbing inclusion embedded in a turbid medium. Interfiber distances of 5, 10, and 15 mm are considered. Finally, we determine the maximal depth reached by each detection system using 3D tomographic reconstructions based on the Mellin-Laplace transform.

  14. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors when using the fast Fourier transform (FFT) in camera modulation transfer function (MTF) measurement, because tilt angle errors in the knife-edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
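
    For comparison, the conventional slanted-edge computation that the NUFFT variant improves upon can be sketched as follows, using uniform resampling of the projected ESF; the paper instead transforms the nonuniform samples directly, which is what removes the resampling error. Variable names are assumptions.

```python
import numpy as np

def slanted_edge_mtf(esf_positions, esf_values, n_bins=512):
    """Simplified slanted-edge MTF via uniform resampling of the ESF.

    esf_positions: projected sample distances from the edge (nonuniform)
    esf_values:    corresponding pixel values (same length)
    """
    order = np.argsort(esf_positions)
    grid = np.linspace(esf_positions.min(), esf_positions.max(), n_bins)
    esf = np.interp(grid, esf_positions[order], esf_values[order])
    lsf = np.gradient(esf, grid)      # line spread function
    lsf *= np.hanning(lsf.size)       # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```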

  15. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  16. Genetic mechanisms involved in the evolution of the cephalopod camera eye revealed by transcriptomic and developmental studies

    PubMed Central

    2011-01-01

    Background: Coleoid cephalopods (squids and octopuses) have evolved a camera eye, the structure of which is very similar to that found in vertebrates and which is considered a classic example of convergent evolution. Other molluscs, however, possess mirror, pin-hole, or compound eyes, all of which differ from the camera eye in the degree of complexity of the eye structures and neurons participating in the visual circuit. Therefore, genes expressed in the cephalopod eye after divergence from the common molluscan ancestor could be involved in eye evolution through association with the acquisition of new structural components. To clarify the genetic mechanisms that contributed to the evolution of the cephalopod camera eye, we applied comprehensive transcriptomic analysis and conducted developmental validation of candidate genes involved in coleoid cephalopod eye evolution. Results: We compared gene expression in the eyes of 6 molluscan (3 cephalopod and 3 non-cephalopod) species and selected 5,707 genes as cephalopod camera eye-specific candidate genes on the basis of homology searches against the 3 molluscan species without camera eyes. First, we confirmed the expression of these 5,707 genes in the cephalopod camera eye formation processes by developmental array analysis. Second, using molecular evolutionary (dN/dS) analysis to detect positive selection in the cephalopod lineage, we identified 156 of these genes in which functions appeared to have changed after the divergence of cephalopods from the molluscan ancestor and which contributed to structural and functional diversification. Third, we selected 1,571 genes, expressed in the camera eyes of both cephalopods and vertebrates, which could have independently acquired a function related to eye development at the expression level. Finally, as experimental validation, we identified three functionally novel cephalopod camera eye genes related to optic lobe formation in cephalopods by in situ hybridization analysis of embryonic pygmy squid. Conclusion: We identified 156 genes positively selected in the cephalopod lineage and 1,571 genes commonly found in the cephalopod and vertebrate camera eyes from the analysis of cephalopod camera eye specificity at the expression level. Experimental validation showed that the cephalopod camera eye-specific candidate genes include those expressed in the outer part of the optic lobes, which is unique to coleoid cephalopods. The results of this study suggest that changes in gene expression and in the primary structure of proteins (through positive selection) from those in the common molluscan ancestor could have contributed, at least in part, to cephalopod camera eye acquisition. PMID:21702923

  17. Helms in FGB/Zarya with cameras

    NASA Image and Video Library

    2001-06-08

    ISS002-E-6526 (8 June 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). The image was recorded with a digital still camera.

  18. Digital holographic interferometry for characterizing deformable mirrors in aero-optics

    NASA Astrophysics Data System (ADS)

    Trolinger, James D.; Hess, Cecil F.; Razavi, Payam; Furlong, Cosme

    2016-08-01

    Measuring and understanding the transient behavior of a surface with high spatial and temporal resolution is required in many areas of science. This paper describes the development and application of a high-speed, high-dynamic-range digital holographic interferometer for high-speed surface contouring with fractional-wavelength precision and high spatial resolution. The specific application under investigation here is the characterization of deformable mirrors (DMs) employed in aero-optics. The developed instrument was shown to be capable of contouring a deformable mirror with extremely high resolution at frequencies exceeding 40 kHz. We demonstrated two different procedures for characterizing the mechanical response of a surface to a wide variety of input forces, one that employs a high-speed digital camera and a second that employs a low-speed, low-cost digital camera. The latter is achieved by cycling the DM actuators with a step input, producing a transient that typically lasts up to a millisecond before reaching equilibrium. Recordings are made at increasing times after DM initiation, from zero to equilibrium, to analyze the transient. Because the wave functions are stored and reconstructable, they can be compared with each other to produce contours, including absolute, difference, and velocity contours. High-speed digital cameras recorded the wave functions during a single transient at rates exceeding 40 kHz. We concluded that either method is fully capable of characterizing a typical DM to the extent required by aero-optical engineers.

  19. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  1. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to this problem, but they assume the fill factor is known; in practice, it is kept as an industrial secret by most image sensor manufacturers because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single high-dynamic-range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with high stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images for each camera. PMID:28335459

  2. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements for the imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, as well as optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters such as best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor together with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  3. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second uses a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.

  4. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.

  5. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.
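
    The flavor of the focal-sweep argument can be conveyed with a toy geometric-optics model in which defocus is a Gaussian blur and the in-focus plane sweeps through the scene. This is an illustration only, not the paper's Fresnel-diffraction derivation; all names and the linear blur model are assumptions.

      import numpy as np

      def defocus_psf(radius, grid=31):
          # Gaussian stand-in for a defocus blur spot of the given radius (pixels).
          x = np.arange(grid) - grid // 2
          xx, yy = np.meshgrid(x, x)
          sigma = max(radius, 0.3)
          psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
          return psf / psf.sum()

      def focal_sweep_psf(depth, sweep=np.linspace(-1.0, 1.0, 41)):
          # During a focal sweep the in-focus plane passes through every depth,
          # so the time-averaged PSF becomes nearly independent of object depth.
          # Blur radius is modeled as proportional to the focus error |s - depth|.
          psfs = [defocus_psf(abs(s - depth) * 4.0) for s in sweep]
          return np.mean(psfs, axis=0)

      # Swept PSFs for two different depths are nearly identical:
      p1, p2 = focal_sweep_psf(0.0), focal_sweep_psf(0.5)
      print(np.abs(p1 - p2).max())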

  6. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
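
    A minimal sketch of the kind of diagnostic used here, turning inter-camera quaternions into small-angle attitude errors and examining their auto-covariance, might look as follows (the small-angle approximation and all names are assumptions, not code from the study):

      import numpy as np

      def small_angle_errors(q):
          # q: (n, 4) unit quaternions (w, x, y, z) between the two star cameras.
          # For small rotations, twice the vector part approximates the
          # attitude error in radians about each axis.
          return 2.0 * q[:, 1:]

      def autocovariance(x, max_lag):
          # Biased sample auto-covariance of a 1-D series for lags 0..max_lag.
          x = x - x.mean()
          n = len(x)
          return np.array([np.dot(x[: n - k], x[k:]) / n
                           for k in range(max_lag + 1)])

    A twice-per-orbital-revolution error of the kind described appears as an oscillation at the corresponding lag in the resulting auto-covariance series.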

  7. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One evaluated the function of the ultrahigh-sensitivity camera; the other tested the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality of the two cameras was very similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescence images under high illumination, in the field of laparoscopic surgery.

  8. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera is described, with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  9. Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network

    Treesearch

    Miguel L. Villarreal; Leila Gass; Laura Norman; Joel B. Sankey; Cynthia S. A. Wallace; Dennis McMacken; Jack L. Childs; Roy Petrakis

    2013-01-01

    Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus...

  10. Comparative Analysis of Gene Expression for Convergent Evolution of Camera Eye Between Octopus and Human

    PubMed Central

    Ogura, Atsushi; Ikeo, Kazuho; Gojobori, Takashi

    2004-01-01

    Although the camera eye of the octopus is very similar to that of humans, phylogenetic and embryological analyses have suggested that their camera eyes were acquired independently; this is a classic example of convergent evolution. To study the molecular basis of the convergent evolution of camera eyes, we conducted a comparative analysis of gene expression in octopus and human camera eyes. We sequenced 16,432 ESTs of the octopus eye, yielding 1052 nonredundant genes that have matches in the protein database. Comparing these 1052 genes with 13,303 already-known ESTs of the human eye, 729 (69.3%) genes were commonly expressed between the human and octopus eyes. In contrast, when we compared the octopus eye ESTs with human connective tissue ESTs, the expression similarity was quite low. To trace the evolutionary changes that are potentially responsible for camera eye formation, we also compared the octopus eye ESTs with the completed genome sequences of other organisms. We found that 1019 of the 1052 genes already existed in the common ancestor of bilaterians, and 875 genes were conserved between humans and octopuses. This suggests that a large number of conserved genes and their similar gene expression may be responsible for the convergent evolution of the camera eye. PMID:15289475

  11. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in Java) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes (1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox distributed as a free ImageJ plugin. The program can also save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control routine. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
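
    For example, the integral uniformity figure at the heart of a NEMA-style uniformity control can be sketched as follows. This is illustrative only: the program described above is an ImageJ/Java plugin, masking to the useful field of view is omitted here, and the function names are assumptions.

      import numpy as np
      from scipy.ndimage import convolve

      def integral_uniformity(flood):
          # Smooth a flood-field image with the usual nine-point weighted
          # kernel, then compare the extreme smoothed pixel values.
          kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
          s = convolve(flood.astype(float), kernel, mode="nearest")
          hi, lo = s.max(), s.min()
          return 100.0 * (hi - lo) / (hi + lo)   # percent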

  12. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) chain include color interpolation, white balance, adaptive binary processing, automatic gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method achieves good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed in Verilog HDL. The prototype chip has been verified on a low-cost programmable device. The real-time camera system achieves 1270 x 792 resolution in combination with extra components and can demonstrate each DSP function.
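
    The paper does not spell out its white-balance algorithm, so as a representative low-complexity choice, here is a gray-world sketch (an assumption for illustration, not the authors' method):

      import numpy as np

      def gray_world_awb(rgb):
          # Gray-world auto white balance: scale each channel so that its
          # mean matches the overall mean. Assumes an 8-bit (H, W, 3) image.
          means = rgb.reshape(-1, 3).mean(axis=0)
          gains = means.mean() / np.maximum(means, 1e-6)
          return np.clip(rgb * gains, 0, 255).astype(np.uint8)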

  13. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 for camera resolution in high-consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric of camera resolution and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
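
    As a minimal illustration of an MTF measurement, the sketch below estimates an MTF from an image of a straight vertical edge via the edge spread function (ESF) and line spread function (LSF). It is a simplified, non-slanted variant of the standard slanted-edge method; names are illustrative.

      import numpy as np

      def mtf_from_edge(edge_image):
          # Average rows to get the ESF, differentiate to get the LSF,
          # then take the normalized magnitude of its Fourier transform.
          esf = edge_image.mean(axis=0)
          lsf = np.gradient(esf)
          lsf = lsf * np.hanning(len(lsf))      # taper to reduce spectral leakage
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]
          freqs = np.fft.rfftfreq(len(lsf))     # cycles/pixel
          return freqs, mtf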

  14. The Photogrammetry Cube

    NASA Technical Reports Server (NTRS)

    2008-01-01

    We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs; the ability to process any number of reference targets and any number of camera images; user-friendly editing features, including zoom in/out, translate, and load/unload; routines to help mark reference points with a Find function while comparing them with the reference point database file; and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV using, for example, a survey theodolite or a Faroarm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.

  15. Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.

    2008-10-01

    98 SPEAR sensors from eV Products have been evaluated for use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. With a view to possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 degrees Celsius. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built from these sensors was simulated. To obtain realistic energy spectra, a suitable detector response function was implemented. To investigate the angular resolution of the camera, a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and for realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10 degrees.

  16. Algebraic Approach for Recovering Topology in Distributed Camera Networks

    DTIC Science & Technology

    2009-01-14

    not valid for camera networks. Spatial sampling of the plenoptic function [2] from a network of cameras is rarely i.i.d. (independent and identically...coverage can be used to track and compare paths in a wireless camera network without any metric calibration information. In particular, these results can...edition, 2000. [14] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In

  17. Transient full-field vibration measurement using spectroscopical stereo photogrammetry.

    PubMed

    Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan

    2010-12-20

    In contrast to other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to provide the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and is more accurate than traditional camera calibration. The test results show an accuracy of 0.02 mm.

  18. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role today. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. In-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements was analyzed in dependence on the distance of points from the image center. This test provides a curve of measurement precision as a function of the photo radius. A large number of camera types were tested with well-distributed point measurements in image space. The tests demonstrate a functional relationship between accuracy and radial distance and provide a method to check and enhance the geometric capability of cameras with respect to these results.
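
    A simple way to obtain such a precision-versus-radius curve is to bin point residuals by radial distance and fit a smooth function. The sketch below is an illustration, not the authors' variance-component estimator; the names and the quadratic fit are assumptions.

      import numpy as np

      def precision_vs_radius(xy, residuals, n_bins=10):
          # Bin image-point residuals by radial distance from the principal
          # point, report the RMS residual per bin, and fit a quadratic.
          r = np.hypot(xy[:, 0], xy[:, 1])
          edges = np.linspace(0.0, r.max(), n_bins + 1)
          centers = 0.5 * (edges[:-1] + edges[1:])
          rms = np.array([np.sqrt(np.mean(residuals[(r >= lo) & (r < hi)] ** 2))
                          if np.any((r >= lo) & (r < hi)) else np.nan
                          for lo, hi in zip(edges[:-1], edges[1:])])
          ok = ~np.isnan(rms)
          coeffs = np.polyfit(centers[ok], rms[ok], 2)
          return centers, rms, coeffs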

  19. Exploring the imaging properties of thin lenses for cryogenic infrared cameras

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Verdet, Sebastien; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Grulois, Tatiana; Matallah, Noura

    2016-05-01

    Designing a cryogenic camera is a good strategy for miniaturizing and simplifying an infrared camera with a cooled detector. Indeed, integrating the optics inside the cold shield makes it simple to athermalize the design, guarantees a cold pupil, and relaxes the constraint of a long back focal length in short-focal-length systems. In this way, cameras made of a single lens or two lenses are viable systems with good optical features and good stability in image correction. However, this involves a relatively significant additional optical mass inside the dewar and thus increases the cool-down time of the camera. ONERA is currently exploring a minimalist strategy consisting of giving an imaging function to the thin optical plates that are already found in conventional dewars. In this way, we can build a cryogenic camera with the same cool-down time as a traditional dewar without an imaging function. Two examples are presented: the first is a camera using a dual-band infrared detector, made of a lens outside the dewar and a lens inside the cold shield, the latter carrying the main optical power of the system. We were able to design a cold plano-convex lens less than 1 mm thick. The second example is an evolution of a former cryogenic camera called SOIE. We replaced the cold meniscus with a plano-convex Fresnel lens, decreasing the optical thermal mass by 66%. The performance of both cameras is compared.

  20. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has operated successfully on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to a 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure changes in camera response. Once every two months, the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination levels, which are compared to MISR raw digital numbers to determine the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
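
    The gain determination amounts to fitting camera output against the photodiode-measured radiance; a minimal sketch (the linear model with offset and the names are assumptions, not MISR's Level 1 code):

      import numpy as np

      def radiometric_gain(dn, radiance):
          # Least-squares fit DN = g * L + offset for one spectral channel,
          # pairing camera raw digital numbers with OBC photodiode radiances.
          g, offset = np.polyfit(radiance, dn, 1)
          return g, offset

    Tracking g across the bimonthly panel deployments reveals the response drift described above; updating g in processing compensates for it.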

  1. Distributed Compression in Camera Sensor Networks

    DTIC Science & Technology

    2006-02-13

    complicated in this context. This effort will make use of the correlation structure of the data given by the plenoptic function in the case of multi-camera...systems. In many cases the structure of the plenoptic function can be estimated without requiring inter-sensor communications, but by using some a...priori global geometrical information. Once the structure of the plenoptic function has been predicted, it is possible to develop specific distributed

  2. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
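
    Of the three reconstruction options mentioned, the Center-of-Gravity algorithm is simple enough to sketch directly. This is a minimal illustration with assumed names; the Maximum Likelihood and Least Squares variants additionally require the reconstructed light response functions.

      import numpy as np

      def anger_centroid(signals, pmt_xy):
          # Center-of-Gravity reconstruction: the event position is the
          # signal-weighted mean of the PMT positions; the summed signal
          # serves as an energy estimate.
          signals = np.asarray(signals, float)
          energy = signals.sum()
          pos = (signals[:, None] * np.asarray(pmt_xy, float)).sum(axis=0) / energy
          return pos, energy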

  3. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 × 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to those used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization, over a wide range of linear and circular polarization combinations, to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improving the polarimetric modeling of the splitting prism and by implementing a pixel-by-pixel calibration.
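
    As background to the calibration, an idealized four-channel polarimetric inversion can be sketched as follows. The rows assume ideal analyzers and ignore the prism and filter Mueller corrections the paper applies; all names are assumptions.

      import numpy as np

      # Rows: ideal analyzers at 0, 90, and 45 degrees linear, plus right-circular.
      # For an ideal analyzer the measured intensity is 0.5 * (row . stokes).
      A = 0.5 * np.array([[1,  1,  0, 0],
                          [1, -1,  0, 0],
                          [1,  0,  1, 0],
                          [1,  0,  0, 1]], float)

      def stokes_from_intensities(i4):
          # Invert the 4x4 measurement matrix to recover (S0, S1, S2, S3);
          # a real system replaces A with its calibrated Mueller-based matrix.
          return np.linalg.solve(A, np.asarray(i4, float))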

  4. Camera traps can be heard and seen by animals.

    PubMed

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured the ultrasonic (n = 5) and infrared illumination (n = 7) outputs of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  5. Response function of single crystal synthetic diamond detectors to 1-4 MeV neutrons for spectroscopy of D plasmas

    NASA Astrophysics Data System (ADS)

    Rebai, M.; Giacomelli, L.; Milocco, A.; Nocente, M.; Rigamonti, D.; Tardocchi, M.; Camera, F.; Cazzaniga, C.; Chen, Z. J.; Du, T. F.; Fan, T. S.; Giaz, A.; Hu, Z. M.; Marchi, T.; Peng, X. Y.; Gorini, G.

    2016-11-01

    A Single-crystal Diamond (SD) detector prototype was installed at Joint European Torus (JET) in 2013 and the achieved results have shown its spectroscopic capability of measuring 2.5 MeV neutrons from deuterium plasmas. This paper presents measurements of the SD response function to monoenergetic neutrons, which is a key point for the development of a neutron spectrometer based on SDs and compares them with Monte Carlo simulations. The analysis procedure allows for a good reconstruction of the experimental results. The good pulse height energy resolution (equivalent FWHM of 80 keV at 2.5 MeV), gain stability, insensitivity to magnetic field, and compact size make SDs attractive as compact neutron spectrometers of high flux deuterium plasmas, such as for instance those needed for the ITER neutron camera.

  6. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  7. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress in human tracking techniques over camera networks.

  8. High-resolution digital dosimetric system for spatial characterization of radiation fields using a thermoluminescent CaF2:Dy crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atari, N.A.; Svensson, G.K.

    1986-05-01

    A high-resolution digital dosimetric system has been developed for the spatial characterization of radiation fields. The system comprises the following: a 0.5-mm-thick, 25-mm-diam CaF2:Dy thermoluminescent crystal; an intensified charge-coupled-device video camera; a video cassette recorder; and a computerized image processing subsystem. The optically flat single crystal is used as a radiation imaging device, and the subsequent thermally stimulated phosphorescence is viewed by the intensified camera for further processing and analysis. Parameters governing the performance characteristics of the system were measured. A spatial resolution limit of 31 ± 2 µm (1σ), corresponding to 16 ± 1 line pairs/mm measured at the 4% level of the modulation transfer function, has been achieved. The full width at half maximum of the line spread function, measured independently by the slit method or derived from the edge response function, was found to be 69 ± 4 µm (1σ). The high resolving power, speed of readout, good precision, wide dynamic range, and large image storage capacity make the system suitable for digital mapping of the relative distribution of absorbed doses for various small radiation fields and the edges of larger fields.

  9. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements of microbolometer focal plane arrays (FPA), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influence of changing ambient conditions and the non-uniform response of the sensors makes it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we use the FPA temperature, together with the microbolometer responses at different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be worked out from calibration measurements under controlled ambient conditions with a uniform blackbody. All correction parameters are determined after the calibration process and then used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. Our method was evaluated on real IR images obtained by a 384×288-pixel uncooled long-wave infrared (LWIR) camera operated under changing ambient conditions. The results show that our method can exclude the influence of changing ambient conditions and ensure stable camera performance.
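
    The paper extends calibration-based NUC by folding in the FPA temperature; the classic two-point correction it builds on can be sketched as follows (a minimal baseline illustration; the names and the per-pixel linear model are assumptions):

      import numpy as np

      def two_point_nuc(cold, hot, t_cold, t_hot):
          # Per-pixel gain and offset from two uniform blackbody frames,
          # so that every pixel maps linearly onto the same radiometric scale.
          gain = (t_hot - t_cold) / np.maximum(hot - cold, 1e-6)
          offset = t_cold - gain * cold
          return gain, offset

      def correct(raw, gain, offset):
          # Apply the correction to a raw frame.
          return gain * raw + offset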

  10. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of a vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows the M2052 manganese-copper alloy suppresses image motion below 125 Hz, the vibration frequency range of satellite platforms. The camera optical system has a higher MTF with the M2052 vibration suppression than without it.
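
    For context, two standard vibration-MTF results often used in such analyses can be evaluated directly. These are textbook formulas for linear smear and for high-frequency sinusoidal vibration, offered as background, not the VMTF model derived in the paper; the names are assumptions.

      import numpy as np
      from scipy.special import j0

      def mtf_linear_smear(f, d):
          # Linear image motion of extent d (pixels) during the exposure:
          # MTF(f) = |sinc(d * f)|, with f in cycles/pixel (np.sinc is normalized).
          return np.abs(np.sinc(d * f))

      def mtf_sinusoidal_vibration(f, a):
          # Sinusoidal vibration of amplitude a (pixels) with many vibration
          # periods per exposure: MTF(f) = |J0(2 * pi * a * f)|.
          return np.abs(j0(2.0 * np.pi * a * f))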

  11. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in US patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in primary care settings. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of low-cost cameras for DPN imaging include lower resolution (active pixels), frame rate, and thermal sensitivity. We integrated two FLIR Lepton modules (80 × 60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) into one unit; the right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160 × 120 active pixels at 8 frames/second. We compared the temperature measurements of feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in temperature measurements between the cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.

  12. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    PubMed

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study tested the efficacy of biodecontamination of the camera cases; usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudintermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then used by a surgeon in clinical procedures in either headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of a zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost-effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  13. LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve

    2010-04-01

    Performing a good non-uniformity correction is a key part of achieving optimal performance from an infrared scene projector (IRSP). Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large-format MWIR cameras are readily available and have been used successfully to perform NUC; however, cooled large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of the processes developed to mitigate these issues follows.

  14. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active depth-from-defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a coherent, conditioned optical radiation pattern onto the scene; the pattern maintains its sharpness over multiple axial distances, allowing an increased DFD working-distance range. The imager module responsible for the actual DFD operation deploys an electronically controlled variable-focus lens (ECVFL) as a smart optic to enable a motionless imager design. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited to camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications of the proposed system include autofocus in modern digital cameras.

  15. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

    Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov) and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications, and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products - while current digital cameras typically have integrated GPS, it does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board the ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g., Landsat, Google Earth, etc. The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time- and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, for instance for the timely use of data in disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements, including cameras, a central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers), reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side- and downward-facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z positions affixed to the interior of the ISS to determine the camera pose corresponding to each image frame.
This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - are transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual image frame. This new capability represents a significant advance in geolocation over the manual feature-matching approach for both nadir- and off-nadir-viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data are added to the Gateway to Astronaut Photography of Earth online database in both GeoTIFF and Keyhole Markup Language (KML) formats. The integration of the GeoRef software component of GeoCam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in its final prototyping phase and is on schedule for launch to the ISS in late 2016. Installation and routine use of the GeoCam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.

  16. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

    The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high-resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an air-bearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supply views of EVA operations to IVA and/or ground crews monitoring the EVA, and carry out independent visual inspections of areas of interest around the spacecraft. To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.

  17. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself - its optical system, image sensor, and electronics - limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF is stable and invariant to changes in ground targets, atmosphere, and environment, on orbit or on the ground, because it depends only on the camera itself; it is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging degradation imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case where the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
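
    The paper pairs the extracted IMTF with a constrained least-squares filter. The generic frequency-domain CLS sketch below shows the form such compensation can take; it is a textbook formulation, and the Laplacian smoothness constraint, the parameter gamma, and all names are assumptions rather than details taken from the paper.

      import numpy as np

      def cls_restore(image, mtf2d, gamma=0.01):
          # Constrained least-squares restoration in the frequency domain:
          # F = conj(H) / (|H|^2 + gamma * |P|^2) * G, where H is the camera
          # transfer function (e.g., the measured IMTF laid out as a full-size
          # 2-D array) and P is the Laplacian high-pass smoothness constraint.
          G = np.fft.fft2(image)
          H = mtf2d
          lap = np.zeros_like(image, dtype=float)
          lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
          P = np.fft.fft2(np.roll(lap, (-1, -1), axis=(0, 1)))  # center kernel at origin
          F = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
          return np.real(np.fft.ifft2(F))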

  18. Wearable Cameras Are Useful Tools to Investigate and Remediate Autobiographical Memory Impairment: A Systematic PRISMA Review.

    PubMed

    Allé, Mélissa C; Manning, Liliann; Potheegadoo, Jevita; Coutelle, Romain; Danion, Jean-Marie; Berna, Fabrice

    2017-03-01

    Autobiographical memory, central to human cognition and everyday functioning, enables past experienced events to be remembered. A variety of disorders affecting autobiographical memory are characterized by difficulty in retrieving specific, detailed memories of past personal events. Owing to the impact of autobiographical memory impairment on patients' daily lives, it is necessary to better understand these deficits and develop relevant methods to improve autobiographical memory. The primary objective of the present systematic PRISMA review was to give an overview of the first empirical evidence on the potential of wearable cameras for investigating autobiographical memory and for remediating autobiographical memory impairments. The peer-reviewed literature published since 2004 on the usefulness of wearable cameras in research protocols was explored in 3 databases (PUBMED, PsycINFO, and Google Scholar). Twenty-eight published studies were included that used a protocol involving a wearable camera, either to explore wearable camera functioning and impact on daily life, or to investigate autobiographical memory processing or remediate autobiographical memory impairment. This review analyzed the potential of wearable cameras for 1) investigating autobiographical memory processes in healthy volunteers without memory impairment and in clinical populations, and 2) remediating autobiographical memory in patients with various kinds of memory disorder. Mechanisms that may account for the efficacy of wearable cameras are also discussed. The review concludes by discussing certain limitations inherent in using cameras, and new research perspectives. Finally, ethical issues raised by this new technology are considered.

  19. Integrated inertial stellar attitude sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye M. (Inventor); Kourepenis, Anthony S. (Inventor); Wyman, Jr., William F. (Inventor)

    2007-01-01

    An integrated inertial stellar attitude sensor for an aerospace vehicle includes a star camera system, a gyroscope system, a controller system for synchronously integrating an output of said star camera system and an output of said gyroscope system into a stream of data, and a flight computer responsive to said stream of data for determining from the star camera system output and the gyroscope system output the attitude of the aerospace vehicle.

  1. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTF compensation (MTFC) algorithm for a space remote-sensing camera, implemented on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module computes the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image filtering and noise suppression module applies the restoration filter while effectively suppressing noise. The image processing algorithms were designed with System Generator to simplify the system design structure and the redesign process. The image gray-level gradient, point sharpness, edge contrast, and mid-to-high frequencies were enhanced. The SNR of the restored image is reduced by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.

  2. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  3. Camera Traps Can Be Heard and Seen by Animals

    PubMed Central

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured the ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  4. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  5. Response function of single crystal synthetic diamond detectors to 1-4 MeV neutrons for spectroscopy of D plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rebai, M., E-mail: marica.rebai@mib.infn.it; Nocente, M.; Rigamonti, D.

    2016-11-15

    A Single-crystal Diamond (SD) detector prototype was installed at the Joint European Torus (JET) in 2013, and the achieved results have shown its spectroscopic capability of measuring 2.5 MeV neutrons from deuterium plasmas. This paper presents measurements of the SD response function to monoenergetic neutrons, which is a key point for the development of a neutron spectrometer based on SDs, and compares them with Monte Carlo simulations. The analysis procedure allows for a good reconstruction of the experimental results. The good pulse height energy resolution (equivalent FWHM of 80 keV at 2.5 MeV), gain stability, insensitivity to magnetic field, and compact size make SDs attractive as compact neutron spectrometers of high-flux deuterium plasmas, such as those needed for the ITER neutron camera.

  6. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888

  7. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
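
    As background for what such a simulator must reproduce, here is the textbook four-phase AMCW depth estimator in Python; it is not the paper's simulation pipeline, and the modulation frequency and sample convention are assumptions.

        import numpy as np

        C = 299_792_458.0   # speed of light, m/s
        F_MOD = 30e6        # assumed modulation frequency, Hz

        def tof_depth(a0, a1, a2, a3):
            """Depth from correlation samples at 0/90/180/270 degree offsets."""
            phase = np.arctan2(a1 - a3, a0 - a2)    # wrapped phase
            phase = np.mod(phase, 2 * np.pi)        # map to [0, 2*pi)
            return C * phase / (4 * np.pi * F_MOD)  # half the round-trip distance

        # A pixel 2 m away: simulate the four samples, then recover the depth.
        true_phase = 4 * np.pi * F_MOD * 2.0 / C
        samples = [np.cos(true_phase - k * np.pi / 2) for k in range(4)]
        print(tof_depth(*samples))                  # ~2.0

    Multipath interference corrupts exactly this estimate: indirect bounces add extra phasors to the four samples, which is why the simulator models them down to the per-pixel charge level.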

  8. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    PubMed

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  9. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    PubMed Central

    Shen, Bailey Y.

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient. PMID:28396802

  10. Capturing the plenoptic function in a swipe

    NASA Astrophysics Data System (ADS)

    Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi

    2016-09-01

    Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.

  11. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  12. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  13. Raspberry Pi camera with intervalometer used as crescograph

    NASA Astrophysics Data System (ADS)

    Albert, Stefan; Surducan, Vasile

    2017-12-01

    An intervalometer is an attachment or built-in facility on a camera that operates the shutter regularly at set intervals over a period of time. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open-source firmware add-on allows intervalometer implementation on Canon cameras only, and finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is practically impossible. For experiments requiring several cameras (used to measure growth in plants - crescographs - but also for coarse evaluation of the water content of leaves), the cost of the equipment is often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a WiFi adapter (for downloading pictures stored on the SD card), and some freely available software, we implemented two low-budget intervalometer cameras. The shutter interval, the number of pictures to be taken, the image resolution, and some other parameters can be fully programmed. The cameras were in continuous use for three months (July-October 2017) in a relevant outdoor environment, proving the functionality of the concept.
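
    As a flavor of how little code such an intervalometer needs, here is a minimal sketch using the legacy picamera library on a Raspberry Pi; the interval, frame count, resolution, and output path are illustrative assumptions, not the authors' exact settings.

        import time
        from picamera import PiCamera

        INTERVAL_S = 300            # one frame every 5 minutes
        NUM_FRAMES = 288            # one day of frames
        camera = PiCamera(resolution=(2592, 1944))
        time.sleep(2)               # let exposure and white balance settle

        try:
            # capture_continuous substitutes {counter} into each filename
            for i, path in enumerate(camera.capture_continuous(
                    '/home/pi/frames/img{counter:05d}.jpg')):
                print('saved', path)
                if i + 1 >= NUM_FRAMES:
                    break
                time.sleep(INTERVAL_S)
        finally:
            camera.close()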

  14. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    PubMed Central

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) cameras are deployed in 3D space while the surveillance area is restricted to a 2D ground plane; (2) the minimal number of cameras is deployed to achieve maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution requirements. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function, as sketched below. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
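
    The following short Python sketch illustrates the kind of cost function such a binary PSO can minimize: a bitmask selects candidate 3D camera poses, coverage is scored over a 2D ground grid, and a regularization term penalizes the camera count so number and configuration are optimized together. The toy visibility test, weights, and grids are illustrative assumptions, not the paper's model.

        import numpy as np

        def covers(cam, cell, max_range=10.0):
            # Toy visibility test: within range of the camera's ground projection.
            # A real model would also test FOV and minimum resolution.
            return np.hypot(cam[0] - cell[0], cam[1] - cell[1]) <= max_range

        def cost(bitmask, candidates, grid, lam=0.05):
            chosen = [c for c, b in zip(candidates, bitmask) if b]
            covered = sum(any(covers(c, g) for c in chosen) for g in grid)
            coverage_ratio = covered / len(grid)
            return (1.0 - coverage_ratio) + lam * len(chosen)  # to be minimized

        candidates = [(x, y, 5.0) for x in range(0, 40, 10) for y in range(0, 40, 10)]
        grid = [(x, y) for x in range(0, 40, 4) for y in range(0, 40, 4)]
        bitmask = np.random.default_rng(1).integers(0, 2, len(candidates))
        print(cost(bitmask, candidates, grid))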

  15. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  16. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates the various pulse sequences for the TDI-CCD, the video processor, and the imaging data output, acting as the timing coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co., Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor were examined in detail, and the timing generator was designed accordingly. An FPGA was chosen as the hardware design platform, and the schedule generator was described in VHDL. The designed generator successfully passed functional simulation with EDA software and was fitted into an XC2VP20-FF1152 (an FPGA product made by Xilinx). The experiments indicate that the new method improves the integration level of the system. High reliability, stability, and low power consumption of the scientific-grade CCD camera system are achieved, and the design and experiment cycle is sharply shortened.

  17. Control and protection of outdoor embedded camera for astronomy

    NASA Astrophysics Data System (ADS)

    Rigaud, F.; Jegouzo, I.; Gaudemard, J.; Vaubaillon, J.

    2012-09-01

    The purpose of the CABERNET - Podet-Met (CAmera BEtter Resolution NETwork, Pole sur la Dynamique de l'Environnement Terrestre - Meteor) project is the automated observation, by triangulation with three cameras, of meteor showers, in order to calculate meteoroid trajectories and velocities. The scientific goal is to search for the parent body, comet or asteroid, of each observed meteor. Installing outdoor cameras to perform astronomical measurements for several years with high reliability requires a very specific design for the camera box. This contribution shows how we fulfilled the various functions of these boxes, such as cooling of the CCD, heating to melt snow and ice, and protection against moisture, lightning, and sunlight. We present the principal and secondary functions, the product breakdown structure, the evaluation grid of criteria for the technical solutions, and the adopted technology products and their implementation in multifunction subsets for miniaturization purposes. In managing this project, we aimed at the lowest manpower and development time for every part. In the appendix, we present measurements of the image quality evolution during CCD cooling, and some pictures of the prototype.

  18. Recent technology and usage of plastic lenses in image taking objectives

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko

    2005-09-01

    Recently, plastic lenses produced by injection molding have been widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change despite the use of several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, lenses must now be assembled with high accuracy. In order to satisfy these requirements, we have developed a compact 16x zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially; for mobile phone cameras, productivity considerations are therefore more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, exploiting the fact that a plastic lens can be given a mechanically functional shape in its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. This camera module is therefore manufactured without optical adjustment on an automated assembly line, achieving both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.

  19. Extended spectrum SWIR camera with user-accessible Dewar

    NASA Astrophysics Data System (ADS)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight, and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field-programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  20. Technology development: Future use of NASA's large format camera is uncertain

    NASA Astrophysics Data System (ADS)

    Rey, Charles F.; Fliegel, Ilene H.; Rohner, Karl A.

    1990-06-01

    The Large Format Camera, developed as a project to verify an engineering concept or design, has been flown only once, in 1984, on the shuttle Challenger. Since this flight, the camera has been in storage. NASA had expected that, following the camera's successful demonstration, other government agencies or private companies with special interests in photographic applications would absorb the costs for further flights using the Large Format Camera. But, because shuttle transportation costs for the Large Format Camera were estimated at approximately $20 million (in 1987 dollars) per flight and the market for selling Large Format Camera products was limited, NASA was not successful in interesting other agencies or private companies in paying the costs. Using the camera on the space station does not appear to be a realistic alternative; using it aboard NASA's Earth Resources Research (ER-2) aircraft may be feasible. Until the final disposition of the camera is decided, NASA has taken actions to protect it from environmental deterioration. The General Accounting Office (GAO) recommends that the NASA Administrator should consider, first, using the camera on an aircraft such as the ER-2. NASA plans to solicit the private sector for expressions of interest in such use of the camera, at no cost to the government, and will be guided by the private sector response. Second, GAO recommends that if aircraft use is determined to be infeasible, NASA should consider transferring the camera to a museum, such as the National Air and Space Museum.

  1. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
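
    The classification stage the abstract describes can be sketched in a few lines of Python with scikit-learn; the distortion features here are synthetic stand-ins (in the paper they come from aberration measurements), so treat this as an illustration of the SVM step only.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_cameras, n_images = 5, 200
        # Assumed 2-D feature per image, e.g. first- and second-order radial
        # distortion coefficients, clustered around each camera's fingerprint.
        centers = rng.normal(scale=2.0, size=(n_cameras, 2))
        X = np.vstack([c + 0.3 * rng.normal(size=(n_images, 2)) for c in centers])
        y = np.repeat(np.arange(n_cameras), n_images)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X_tr, y_tr)
        print('identification accuracy:', clf.score(X_te, y_te))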

  2. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of any possibility to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  3. Television camera as a scientific instrument

    NASA Technical Reports Server (NTRS)

    Smokler, M. I.

    1970-01-01

    A rigorous calibration program, coupled with a sophisticated data-processing program that introduced compensation for system response to correct photometry, geometric linearity, and resolution, converted a television camera into a quantitative measuring instrument. The output data take the form of both numeric printout records and photographs.

  4. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  5. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  6. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  7. The NOAO NEWFIRM Data Handling System

    NASA Astrophysics Data System (ADS)

    Zárate, N.; Fitzpatrick, M.

    2008-08-01

    The NOAO Extremely Wide-Field IR Mosaic (NEWFIRM) is a new 1-2.4 micron IR camera that is now being commissioned for the 4m Mayall telescope at Kitt Peak. The focal plane consists of a 2x2 mosaic of 2048x2048 arrays offering a field of view of 27.6' on a side. The use of dual MONSOON array controllers permits very fast readout, and a scripting interface allows for highly efficient observing modes. We describe the Data Handling System (DHS) for the NEWFIRM camera, which is designed to meet the performance requirements of the instrument as well as the observing environment in which it operates. It is responsible for receiving the data stream from the detector and instrument software, rectifying the image geometry, presenting a real-time display of the image to the user, final assembly of a science-grade image with complete headers, as well as triggering automated pipeline and archival functions. The DHS uses an event-based messaging system to control multiple processes on a distributed network of machines. The asynchronous nature of this processing means the DHS operates independently of the camera readout, and the design of the system is inherently scalable to larger focal planes that use a greater number of array controllers. Current status and future plans for the DHS are also discussed.

  8. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.

  9. The First Year of Croatian Meteor Network

    NASA Astrophysics Data System (ADS)

    Andreic, Zeljko; Segon, Damir

    2010-08-01

    The idea and a short history of the Croatian Meteor Network (CMN) are described. Based on the use of cheap surveillance cameras, standard PC-TV cards, and old PCs, the network allows schools, amateur societies, and individuals to participate in a photographic meteor patrol program. The network has a strong educational component, and many cameras are located at or around teaching facilities. Data obtained by these cameras are collected and processed by the scientific team of the network. Currently 14 cameras are operable, covering a large part of the Croatian sky; data gathering is fully functional, and the data reduction software is in the testing phase.

  10. Thermal Imaging with Novel Infrared Focal Plane Arrays and Quantitative Analysis of Thermal Imagery

    NASA Technical Reports Server (NTRS)

    Gunapala, S. D.; Rafol, S. B.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Soibel, A.; Ting, D. Z.; Tidrow, Meimei

    2012-01-01

    We have developed a single long-wavelength infrared (LWIR) quantum well infrared photodetector (QWIP) camera for thermography. This camera has been used to measure the temperature profiles of patients. A pixel-coregistered, simultaneously reading mid-wavelength infrared (MWIR)/LWIR dual-band QWIP camera was developed to improve the accuracy of temperature measurements, especially for objects with unknown emissivity. Even the dual-band measurement can provide inaccurate results because emissivity is a function of wavelength. We have therefore been developing a four-band QWIP camera for accurate temperature measurement of remote objects.

  11. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new apparatus for human-computer interaction that has attracted much attention in recent years. Eye tracking technology obtains the subject's current direction of visual attention (gaze) by mechanical, electronic, optical, image processing, and other means of detection. The mapping function is one of the key technologies of the image processing stage and determines the accuracy of the whole eye tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera, and the screen being gazed at. First, according to the geometrical relationship among the eyes, the camera, and the screen, a framework for the mapping function between the pupil center and the screen coordinates is constructed. Second, in order to simplify the vector inversion in the mapping function, the coordinates of the eyes, the camera, and the screen are modeled by coaxial model systems. To verify the mapping function, a corresponding experiment was carried out, in which the model was compared with the traditional quadratic polynomial function. The results show that our approach improves the accuracy of gaze point determination; compared with other methods, this mapping function is simple and valid.
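
    For reference, the traditional quadratic-polynomial baseline that the paper compares against can be sketched as a least-squares regression from second-order pupil-center terms to screen coordinates; the nine-point calibration data below are synthetic assumptions.

        import numpy as np

        def design(p):
            """Second-order design matrix [1, x, y, xy, x^2, y^2]."""
            x, y = p[:, 0], p[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        rng = np.random.default_rng(0)
        pupil = rng.uniform(-1, 1, size=(9, 2))     # 9-point calibration grid
        true_W = rng.normal(size=(6, 2))
        screen = design(pupil) @ true_W             # synthetic calibration targets

        W, *_ = np.linalg.lstsq(design(pupil), screen, rcond=None)
        gaze = design(np.array([[0.1, -0.2]])) @ W  # map a new pupil center
        print(gaze)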

  12. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability, and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used four cameras, but all were aligned along major axes (two in x, one each in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability, and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.

  13. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  14. External Mask Based Depth and Light Field Camera

    DTIC Science & Technology

    2013-12-08

    [Fragmentary DTIC excerpt; only scattered passages of the report were recovered.] The report builds on the designs laid out in previous light field cameras; it cites a survey of plenoptic function sampling by Wetzstein et al., and discusses applications in which high-spatial-resolution depth and light fields serve as a rich source of information about the plenoptic function. Cited works include E. Adelson and J. Wang, "Single lens stereo with a plenoptic camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, and http://www.pelicanimaging.com/.

  15. A 90GHz Bolometer Camera Detector System for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest D.; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100 m Green Bank Telescope (GBT). This system will provide high-sensitivity (<1 mJy in 1 s) rapid imaging (15'x15' to 250 microJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8x8 close-packed, Nyquist-sampled array of superconducting transition edge sensor bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approximately 2 × 10^-17 W/√Hz, the TES bolometers will provide fast, linear, sensitive response for high performance imaging. The detectors are read out by an 8x8 time-domain SQUID multiplexer. A digital/analog electronics system has been designed to enable readout by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  16. A 90GHz Bolometer Camera Detector System for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100 m Green Bank Telescope (GBT). This system will provide high sensitivity (less than 1 mJy in 1 s) rapid imaging (15'x15' to 150 microJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8x8 close-packed, Nyquist-sampled array of superconducting transition edge sensor (TES) bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approximately 2 × 10^-17 W/√Hz, the TES bolometers will provide fast, linear, sensitive response for high performance imaging. The detectors are read out by an 8x8 time-domain SQUID multiplexer. A digital/analog electronics system has been designed to enable readout by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  17. A wearable device for emotional recognition using facial expression and physiological response.

    PubMed

    Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim

    2016-08-01

    This paper introduces a glasses-type wearable system to detect the user's emotions using facial expression and physiological responses. The system is designed to acquire facial expression through a built-in camera, and physiological responses such as photoplethysmogram (PPG) and electrodermal activity (EDA), in an unobtrusive way. We used video clips to induce emotions and test the suitability of the system in the experiment. The results showed several meaningful properties that associate emotions with the facial expressions and physiological responses captured by the developed wearable device. We expect that this wearable system with a built-in camera and physiological sensors may be a good solution for monitoring the user's emotional state in daily life.

  18. Clinical utility of scintimammography: From the Anger-camera to new dedicated devices

    NASA Astrophysics Data System (ADS)

    Schillaci, Orazio; Danieli, Roberta; Romano, Pasquale; Cossu, Elsa; Simonetti, Giovanni

    2006-12-01

    Scintimammography is a functional imaging technique which uses a radiation detection camera to detect radionuclide tracers in the patient's breasts. Tracers are designed to accumulate in tumours more than in healthy tissue: the most used are Tc-99m sestamibi and Tc-99m tetrofosmin. Scintimammography is useful in some clinical indications as an adjunct to mammography: it is recommended for those lesions where additional information is required to reach a definitive diagnosis. Patients with dubious mammograms may benefit from this test, as well as women with dense breasts or with implants. Scintimammography is a valuable diagnostic tool also in patients with locally advanced breast cancer for monitoring and predicting response to neoadjuvant chemotherapy. Nevertheless, using an Anger-camera this technique shows a high sensitivity only for cancers >1 cm. Since other modalities are increasingly employed for the early identification of small abnormalities, the issue of detecting small cancers is critical for the future development and clinical utility of breast imaging with radiopharmaceuticals. The use of high-resolution cameras dedicated to breast imaging is the best option to improve the detection of small cancers: they allow higher flexibility in patient positioning and the availability of mammography-like projections. Moreover, the detector can be placed directly in contact with the breast, allowing a mild compression with reduction of the breast's thickness, thus increasing the target-to-background ratio and the sensitivity. These new devices have the potential to increase the total number of breast scintigraphies performed, thereby enhancing the role of nuclear medicine in breast cancer imaging.

  19. Assessment of right ventricular function with nonimaging first pass ventriculography and comparison of results with gamma camera studies.

    PubMed

    Zhang, Z; Liu, X J; Liu, Y Z; Lu, P; Crawley, J C; Lahiri, A

    1990-08-01

    A new technique has been developed for measuring right ventricular function by nonimaging first-pass ventriculography. The right ventricular ejection fraction (RVEF) obtained by nonimaging first-pass ventriculography was compared with that obtained by gamma camera first-pass and equilibrium ventriculography. The data demonstrated that RVEFs obtained by the nonimaging nuclear cardiac probe and by gamma camera first-pass ventriculography in 15 subjects correlated well (r = 0.93). There was also a good correlation between RVEFs obtained by the nonimaging nuclear probe and by equilibrium gated blood pool studies in 33 subjects (r = 0.89). RVEF was significantly reduced in 15 patients with right ventricular and/or inferior myocardial infarction compared to normal subjects (28 +/- 9% vs. 45 +/- 9%). The data suggest that nonimaging probes may be used to assess right ventricular function accurately.

  20. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  1. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g., to sequester carbon, is largely driven by the phenological cycle of vegetation. Timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland; in our cameras they often appear in small quantities among the dominant species in the images. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network of the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and with field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for explaining variation in time series of satellite-based indices, and it will also benefit models describing ecosystem functioning at the species or plant functional type level. With the contribution of the LIFE+ financial instrument of the European Union (LIFE12 ENV/FI/000409 Monimet, http://monimet.fmi.fi)
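
    The green color fraction driving the spring dates is the standard green chromatic coordinate; a hedged Python sketch (the image array and birch region-of-interest mask are assumptions) shows how one value per image yields a seasonal greenness curve, with an analogous red fraction for autumn.

        import numpy as np

        def green_fraction(img, mask):
            """GCC = G / (R + G + B), averaged over the masked birch pixels."""
            r, g, b = (img[..., i][mask].astype(float) for i in range(3))
            return float(np.mean(g / np.maximum(r + g + b, 1e-9)))

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(480, 640, 3))   # stand-in RGB frame
        mask = np.zeros((480, 640), dtype=bool)
        mask[100:200, 200:300] = True                    # assumed birch ROI
        print(green_fraction(img, mask))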

  2. Design of a structural and functional hierarchy for planning and control of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Acar, Levent; Ozguner, Umit

    1989-01-01

    Hierarchical structures offer numerous advantages over conventional structures for the control of telerobotic systems. A hierarchically organized system can be controlled via undetailed task assignments and can easily adapt to changing circumstances. The distributed and modular structure of these systems also enables the fast response needed in most telerobotic applications. On the other hand, most of the hierarchical structures proposed in the literature are based on the functional properties of a system. These structures work best for a few given functions of a large class of systems; in telerobotic applications, all functions of a single system need to be explored. This approach requires a hierarchical organization based on the physical properties of a system, and such a hierarchical organization is introduced. The decomposition, organization, and control of the hierarchical structure are considered, and a system with two robot arms and a camera is presented.

  3. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  4. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  5. Method and apparatus for calibrating a display using an array of cameras

    NASA Technical Reports Server (NTRS)

    Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
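
    One piece of the idea can be sketched compactly: derive a per-pixel luminance correction from a camera capture of a flat-field test frame and pre-scale the input so the displayed result appears uniform (geometric pre-warp would follow a similar captured-grid-to-remap step). All arrays below are synthetic assumptions, not the patent's method.

        import numpy as np

        h, w = 480, 640
        yy, xx = np.mgrid[0:h, 0:w]
        # Camera capture of a flat white frame: brighter at screen center.
        capture = 255.0 - 0.1 * np.hypot(xx - w / 2, yy - h / 2)
        gain = capture.min() / capture        # attenuate the bright regions
        frame = np.full((h, w), 180.0)        # input video frame
        prewarped = np.clip(frame * gain, 0, 255)
        # After projection and re-capture, the corrected frame should be uniform.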

  6. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions, calibrated at different depths, are used to restore images in a short time; this generally applies to nonblind deconvolution methods and addresses the excessive processing time caused by the number of point spread functions. The optical software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained from CODE V can be used for processing real images from a single-lens camera, which provides a convenient and accurate alternative for obtaining the point spread functions of single-lens cameras.

  7. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient, and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures the surface motion of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized by a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  8. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenburg, Patrick; Lowell, Richard E.; McKelvey, Kevin

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
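
    A common way to write the detection component of such spatial capture-recapture models is a half-normal decay of encounter probability with distance from an animal's activity center, with a behavioral covariate shifting the encounter odds once a trap has been visited. The sketch below shows that generic form only; it is not necessarily the paper's exact parameterization.

    ```python
    import numpy as np

    def half_normal_p(dist, p0, sigma):
        """Baseline encounter probability decaying with distance
        between the activity center and the trap."""
        return p0 * np.exp(-dist**2 / (2.0 * sigma**2))

    def trap_response_p(dist, caught_before, p0, sigma, beta):
        """Trap-happy behavioral response: after an individual's first
        visit to a trap, its encounter odds are scaled by exp(beta)."""
        p = half_normal_p(dist, p0, sigma)
        logit = np.log(p / (1.0 - p)) + beta * caught_before
        return 1.0 / (1.0 + np.exp(-logit))
    ```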

  9. Texture-based measurement of spatial frequency response using the dead leaves target: extensions, and application to real camera systems

    NASA Astrophysics Data System (ADS)

    McElvain, Jon; Campbell, Scott P.; Miller, Jonathan; Jin, Elaine W.

    2010-01-01

    The dead leaves model was recently introduced as a method for measuring the spatial frequency response (SFR) of camera systems. The target consists of a series of overlapping opaque circles with a uniform gray level distribution and radii distributed as r^-3. Unlike the traditional knife-edge target, the SFR derived from the dead leaves target will be penalized for systems that employ aggressive noise reduction. Initial studies have shown that the dead leaves SFR correlates well with sharpness/texture blur preference, and thus the target can potentially be used as a surrogate for more expensive subjective image quality evaluations. In this paper, the dead leaves target is analyzed for measurement of camera system spatial frequency response. It was determined that the power spectral density (PSD) of the ideal dead leaves target does not exhibit simple power law dependence, and scale invariance is only loosely obeyed. An extension to the ideal dead leaves PSD model is proposed, including a correction term to account for system noise. With this extended model, the SFR of several camera systems with a variety of formats was measured, ranging from 3 to 10 megapixels; the effects of handshake motion blur are also analyzed via the dead leaves target.
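
    The noise correction described above can be illustrated compactly: if the measured radial power spectrum is modeled as the squared SFR times the ideal target spectrum plus a noise floor, the SFR follows by subtracting the noise estimate before taking the ratio. The sketch below shows only that relation, not the full extended PSD model; the inputs are assumed to be radially averaged spectra over common frequency bins.

    ```python
    import numpy as np

    def dead_leaves_sfr(psd_measured, psd_noise, psd_target):
        """Texture SFR from radially averaged power spectra.

        Model: PSD_measured(f) = SFR(f)^2 * PSD_target(f) + PSD_noise(f),
        so the noise floor is removed before taking the ratio.
        """
        signal = np.clip(psd_measured - psd_noise, 0.0, None)
        return np.sqrt(signal / psd_target)
    ```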

  10. User guide for the USGS aerial camera Report of Calibration.

    USGS Publications Warehouse

    Tayman, W.P.

    1984-01-01

    Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author

  11. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to conventional cameras (with photographic film). For example, the diffraction arising from the miniaturization of optical modules tends to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is thus to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors, and the corresponding images formed by the camera are acquired by a computer and processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy against various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
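
    For reference, the spread-function route to the MTF mentioned in the comparison reduces to a short computation: Fourier-transform the measured line-spread function and normalize its magnitude at DC. This is a generic sketch, not the paper's SLM-based procedure.

    ```python
    import numpy as np

    def mtf_from_lsf(lsf, dx=1.0):
        """MTF as the normalized FFT magnitude of a line-spread function.

        lsf : 1-D line-spread function sampled at pixel pitch dx
        Returns (spatial frequencies in cycles/pixel, MTF values).
        """
        lsf = np.asarray(lsf, dtype=float) - np.min(lsf)  # remove baseline
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                     # unity at DC
        freqs = np.fft.rfftfreq(len(lsf), d=dx)
        return freqs, mtf
    ```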

  12. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to partly compensate for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  13. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To keep the optimization from being trapped in a local optimum, LiDAR data were used to determine the initialization range, based on the solution of the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was better than 1.0 cm. Experimental and simulated results demonstrated that the proposed method is highly accurate and robust.
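
    A minimal PSO loop of the kind used here is sketched below. In this setting, `cost` would evaluate the Kruppa-equation residual for a candidate set of intrinsics, and the box bounds `lo`/`hi` would encode the LiDAR-informed initialization range; all names and hyperparameters are illustrative defaults, not the paper's values.

    ```python
    import numpy as np

    def pso_minimize(cost, lo, hi, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer over box bounds lo <= x <= hi."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
        v = np.zeros_like(x)                                  # velocities
        pbest = x.copy()
        pbest_f = np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)]                         # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, lo.size))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)]
        return g
    ```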

  14. Free LittleDog!: Towards Completely Untethered Operation of the LittleDog Quadruped

    DTIC Science & Technology

    2007-08-01

    helpful Intel Open Source Computer Vision ( OpenCV ) library [4] wherever possible rather than reimplementing many of the standard algorithms, however...correspondences between image points and world points, and feeding these to a camera calibration function, such as that provided by OpenCV , allows one to solve... OpenCV calibration function to that used for intrinsic calibration solves for Tboard→camerai . The position of the camera 37 Figure 5.3: Snapshot of
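
    For context, the OpenCV calibration workflow this snippet alludes to looks roughly like the following: detect board corners in each image, pair them with the board's known world coordinates, and let cv2.calibrateCamera solve for the intrinsics and per-view extrinsics. A minimal sketch; `image_files`, the corner pattern, and the square size are assumed placeholders.

    ```python
    import cv2
    import numpy as np

    pattern = (9, 6)   # inner corners of the checkerboard (assumed)
    square = 0.025     # square size in meters (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts = [], []
    for fname in image_files:  # list of calibration image paths (assumed)
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Solve for the intrinsic matrix K, distortion coefficients, and the
    # rotation/translation of the board in each view.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    ```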

  15. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
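
    A minimal sketch of a linear correction built from the three ingredients named above (a nonzero dark image, a base correction image in the linear range, and the base image's mean digital level as the reference) might look as follows; the paper's exact algorithm may differ, and the names are illustrative.

    ```python
    import numpy as np

    def correct_nonuniformity(raw, dark, base, ref_level=None, eps=1e-6):
        """Linear spatial nonuniformity correction.

        raw  : image to correct
        dark : dark image (nonzero, per the recommendation above)
        base : uniform-field image with its mean digital level in the
               camera's linear response range
        ref_level defaults to the mean level of the dark-subtracted base.
        """
        if ref_level is None:
            ref_level = (base - dark).mean()
        flat = (base - dark) / ref_level          # relative pixel sensitivity
        return (raw - dark) / np.maximum(flat, eps)
    ```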

  16. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images, and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This current paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729

  17. Water availability not fruitfall modulates the dry season distribution of frugivorous terrestrial vertebrates in a lowland Amazon forest

    PubMed Central

    Paredes, Omar Stalin Landázuri; Norris, Darren; de Oliveira, Tadeu Gomes; Michalski, Fernanda

    2017-01-01

    Terrestrial vertebrate frugivores constitute one of the major guilds in tropical forests. Previous studies show that the meso-scale distribution of this group is only weakly explained by variables such as altitude and tree basal area in lowland Amazon forests. For the first time we test whether seasonally limiting resources (water and fallen fruit) affect the dry season distribution of 25 species of terrestrial vertebrates. To examine the effects of the spatial availability of fruit and water on terrestrial vertebrates we used a standardized, regularly spaced arrangement of camera-traps within 25 km2 of lowland Amazon forest. Generalized linear models (GLMs) were then used to examine the influence of four variables (altitude, distance to large rivers, distance to nearest water, and presence vs absence of fruits) on the number of photos of five functional groups (all frugivores, small, medium, large and very large frugivores) and of seven of the most abundant frugivore species (Cuniculus paca, Dasyprocta leporina, Mazama americana, Mazama nemorivaga, Myoprocta acouchy, Pecari tajacu and Psophia crepitans). A total of 279 independent photos of 25 species were obtained from 900 camera-trap days. For most species and three functional groups, the variation in the number of photos per camera was significantly but weakly explained by the GLMs (deviance explained ranging from 6.2 to 48.8%). Generally, we found that water availability was more important than the presence of fallen fruit for the groups and species studied. Medium frugivores, large-bodied frugivores, and two of the more abundant species (C. paca and P. crepitans) were recorded more frequently closer to water bodies, while none of the functional groups or most abundant species showed any significant relationship with the presence of fallen fruit. Two functional groups and two of the seven most common frugivore species assessed in the GLMs showed significant results with species-specific responses to altitude. Our findings provide a more detailed understanding of how frugivorous vertebrates cope with periods of water and fruit scarcity in lowland Amazon forests. PMID:28301589

  18. Water availability not fruitfall modulates the dry season distribution of frugivorous terrestrial vertebrates in a lowland Amazon forest.

    PubMed

    Paredes, Omar Stalin Landázuri; Norris, Darren; Oliveira, Tadeu Gomes de; Michalski, Fernanda

    2017-01-01

    Terrestrial vertebrate frugivores constitute one of the major guilds in tropical forests. Previous studies show that the meso-scale distribution of this group is only weakly explained by variables such as altitude and tree basal area in lowland Amazon forests. For the first time we test whether seasonally limiting resources (water and fallen fruit) affect the dry season distribution of 25 species of terrestrial vertebrates. To examine the effects of the spatial availability of fruit and water on terrestrial vertebrates we used a standardized, regularly spaced arrangement of camera-traps within 25 km2 of lowland Amazon forest. Generalized linear models (GLMs) were then used to examine the influence of four variables (altitude, distance to large rivers, distance to nearest water, and presence vs absence of fruits) on the number of photos of five functional groups (all frugivores, small, medium, large and very large frugivores) and of seven of the most abundant frugivore species (Cuniculus paca, Dasyprocta leporina, Mazama americana, Mazama nemorivaga, Myoprocta acouchy, Pecari tajacu and Psophia crepitans). A total of 279 independent photos of 25 species were obtained from 900 camera-trap days. For most species and three functional groups, the variation in the number of photos per camera was significantly but weakly explained by the GLMs (deviance explained ranging from 6.2 to 48.8%). Generally, we found that water availability was more important than the presence of fallen fruit for the groups and species studied. Medium frugivores, large-bodied frugivores, and two of the more abundant species (C. paca and P. crepitans) were recorded more frequently closer to water bodies, while none of the functional groups or most abundant species showed any significant relationship with the presence of fallen fruit. Two functional groups and two of the seven most common frugivore species assessed in the GLMs showed significant results with species-specific responses to altitude. Our findings provide a more detailed understanding of how frugivorous vertebrates cope with periods of water and fruit scarcity in lowland Amazon forests.

  19. ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.

    NASA Astrophysics Data System (ADS)

    Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.

    1996-01-01

    The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.

  20. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
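
    The range computation can be illustrated for a single spot: with the laser mounted a known baseline from the camera and its beam parallel to the optical axis, similar triangles give Z = f * B / x, where x is the spot's pixel offset from the principal point. A minimal sketch under those assumptions (the patent's diffractive multi-spot geometry generalizes this):

    ```python
    def spot_range(baseline_m, focal_px, spot_col, principal_col):
        """Triangulated range to the surface where a laser spot lands.

        baseline_m    : known camera-to-laser spacing (meters)
        focal_px      : focal length expressed in pixels
        spot_col      : image column of the detected spot
        principal_col : image column of the optical axis
        """
        disparity = abs(spot_col - principal_col)  # pixel offset x
        return baseline_m * focal_px / max(disparity, 1e-9)
    ```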

  1. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

    The neutron and gamma radiation in large fusion reactors is responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful to good practice in the operation of the video systems.

  2. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  3. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  4. DC drive system for cine/pulse cameras

    NASA Technical Reports Server (NTRS)

    Gerlach, R. H.; Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.

    1977-01-01

    Camera-drive functions are separated mechanically into two groups which are driven by two separate dc brushless motors. First motor, a 90 deg stepper, drives rotating shutter; second electronically commutated motor drives claw and film transport. Shutter is made of one piece but has two openings for slow and fast exposures.

  5. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    NASA Astrophysics Data System (ADS)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with a film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with a film recording function, for meteor recording are presented with three examples: a 2014 Camelopardalid, shot with a Canon EOS C300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super 35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  6. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of the light rays it collects, allowing multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though the plenoptic camera was designed to process incoherent images, we found that it shows high potential in coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems to make intelligent analysis and corrections.

  7. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision of this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to the color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.

  8. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.

  9. Color constancy by characterization of illumination chromaticity

    NASA Astrophysics Data System (ADS)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly will result in invalid overall colour cast in the image that will be easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only illumination chromaticity is characterized instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of diagonal model of illumination change. Multiple databases are used in order to demonstrate the good performance of the algorithm in comparison to the state-of-the-art color constancy algorithms.

  10. Robot Tracer with Visual Camera

    NASA Astrophysics Data System (ADS)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions and can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movements are monitored and compared against a blueprint, so that the path the robot chooses can be tracked; this data is sent over the wireless network. For vision, the robot uses a high-resolution camera that makes it easier for the operator to control the robot and see the surrounding circumstances.

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaffney, Kelly

    Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  13. Fast simulation of yttrium-90 bremsstrahlung photons with GATE.

    PubMed

    Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan

    2010-06-01

    Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
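
    The prerecorded-PDF approach amounts to sampling photon properties from tabulated distributions instead of transporting electrons. A minimal sketch of one such step, inverse-CDF sampling of photon energies from a tabulated PDF, is shown below; it illustrates the idea, not the GATE implementation.

    ```python
    import numpy as np

    def sample_photon_energies(energy_grid, pdf_values, n, seed=None):
        """Draw n photon energies from a tabulated PDF by inverse-CDF
        sampling with linear interpolation between table points."""
        rng = np.random.default_rng(seed)
        cdf = np.cumsum(pdf_values)
        cdf = cdf / cdf[-1]                 # normalize to [0, 1]
        return np.interp(rng.random(n), cdf, energy_grid)
    ```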

  14. 50 CFR 217.75 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., during, and 2 hours after launch; (2) Ensure a remote camera system will be in place and operating in a..., whenever a new class of rocket is flown from the Kodiak Launch Complex, a real-time sound pressure and... camera system designed to detect pinniped responses to rocket launches for at least the first five...

  15. 50 CFR 217.75 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., during, and 2 hours after launch; (2) Ensure a remote camera system will be in place and operating in a..., whenever a new class of rocket is flown from the Kodiak Launch Complex, a real-time sound pressure and... camera system designed to detect pinniped responses to rocket launches for at least the first five...

  16. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution severalfold, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  17. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution severalfold, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.

  18. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.

  19. Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes

    USGS Publications Warehouse

    Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter

    2013-01-01

    Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
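
    For reference, the standard Beer-Lambert retrieval that the simulations are benchmarked against can be sketched for a two-filter UV camera (on-band near 310 nm where SO2 absorbs, off-band near 330 nm). Per the abstract, this simple model tends to understate the true SO2 load for distant or optically thick plumes. The filter wavelengths and band-weighted cross sections are assumptions of the sketch.

    ```python
    import numpy as np

    def so2_column(I_on, I0_on, I_off, I0_off, sigma_on, sigma_off):
        """SO2 column density from two-filter UV camera intensities.

        I0_*    : plume-free background intensities for each filter
        sigma_* : effective SO2 cross sections weighted over each band
        """
        apparent_absorbance = -np.log(I_on / I0_on) + np.log(I_off / I0_off)
        return apparent_absorbance / (sigma_on - sigma_off)
    ```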

  20. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometers diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometers FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken for each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures. They are compared and agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at 528 meters away, the angular size to VETA is less than 0.17 arcsec which is small compared to the on ground VETA angular resolution (0.5 arcsec, required and 0.22 arcsec, measured). Even so, the results show the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on ground and predicted in orbit VETA Point Response Function.

  1. A Robotic Platform for Corn Seedling Morphological Traits Characterization

    PubMed Central

    Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu

    2017-01-01

    Crop breeding plays an important role in modern agriculture, improving plant performance and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892

  2. A Robotic Platform for Corn Seedling Morphological Traits Characterization.

    PubMed

    Lu, Hang; Tang, Lie; Whitham, Steven A; Mei, Yu

    2017-09-12

    Crop breeding plays an important role in modern agriculture, improving plant performance and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping.
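
    The camera-to-arm step can be illustrated directly: compose the hand-eye (camera-to-end-effector) result with the arm's forward kinematics at each capture pose to get a 4x4 homogeneous transform, then map every ToF point cloud into the common arm-based frame before merging. A minimal sketch with illustrative names:

    ```python
    import numpy as np

    def to_arm_frame(points_cam, T_arm_from_cam):
        """Map an N x 3 ToF point cloud into the arm-based frame.

        T_arm_from_cam : 4x4 homogeneous transform for this viewpoint,
        composed from the hand-eye calibration and the arm pose.
        """
        n = points_cam.shape[0]
        homog = np.hstack([points_cam, np.ones((n, 1))])  # N x 4
        return (homog @ T_arm_from_cam.T)[:, :3]
    ```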

  3. Enhanced LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    LaVeigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian

    2011-06-01

    Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector (IRSP), and the best NUC is performed in the band of interest for the sensor being tested. While cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC, similar cooled, large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Some of these challenges were discussed in a previous paper. In this discussion, we report results from a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution were the main problems; these have been solved by implementing several compensation strategies as well as hardware to stabilize the camera. In addition, other processes have been developed to allow iterative improvement and to support changes of the post-NUC lookup table (LUT) without requiring re-collection of the pre-NUC data with the new LUT in use.
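
    As background, the textbook two-point NUC that such work builds on derives a per-pixel gain and offset from flat-field frames recorded at two source temperatures, mapping every pixel onto the mean response line. The sketch below shows that baseline correction, not the iterative compensation procedure described above.

    ```python
    import numpy as np

    def two_point_nuc(frame, flat_low, flat_high, eps=1e-6):
        """Two-point non-uniformity correction.

        flat_low, flat_high : flat-field frames at two source temperatures
        """
        gain = (flat_high.mean() - flat_low.mean()) \
            / np.maximum(flat_high - flat_low, eps)
        offset = flat_low.mean() - gain * flat_low
        return gain * frame + offset
    ```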

  4. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but may be nominally one second in duration. The system also provides a mapping of the geometric distortions, by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.

  5. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.

  6. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera

    NASA Astrophysics Data System (ADS)

    Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  7. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera.

    PubMed

    Perez, Carlos Cruz; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  8. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., mechanical, or other technologic form, that function together to aid the play of one or more Class II games... a particular game, player interface, shift, or other period. Count room. A secured room where the... validated directly by a voucher system. Dedicated camera. A video camera that continuously records a...

  9. Using focused plenoptic cameras for rich image capture.

    PubMed

    Georgiev, T; Lumsdaine, A; Chunev, G

    2011-01-01

    This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
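    The demultiplexing step implied here is simple in principle: with two filter types interleaved across the microlens array, alternate microimages belong to alternate captures. A minimal sketch under an assumed checkerboard layout (the authors' actual interleaving pattern and pitch may differ):

```python
import numpy as np

def demux_interleaved(raw, pitch):
    """Split a focused-plenoptic raw image into two microimage sets,
    assuming microlenses alternate between two filters in a
    checkerboard pattern (an illustrative assumption)."""
    rows, cols = raw.shape[0] // pitch, raw.shape[1] // pitch
    # Gather microimages into a (rows, cols, pitch, pitch) grid.
    grid = (raw[:rows * pitch, :cols * pitch]
            .reshape(rows, pitch, cols, pitch)
            .swapaxes(1, 2))
    checker = (np.add.outer(np.arange(rows), np.arange(cols)) % 2) == 0
    return grid[checker], grid[~checker]   # the two exposure sets

# Example with synthetic data: 80x80 sensor, 8-pixel microlens pitch.
raw = np.random.rand(80, 80)
expo_a, expo_b = demux_interleaved(raw, pitch=8)
print(expo_a.shape, expo_b.shape)  # (50, 8, 8) (50, 8, 8)
```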

  10. Soft x-ray streak camera for laser fusion applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stradling, G.L.

    This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction of laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  11. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems used as localizers, however, must be properly calibrated in changing scenarios, where there is often limited choice of deployment position. Automatic Identification System (AIS) data, which include position, course and vessel identity, and which are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to calibrate the cameras properly and then localize vessels not equipped with AIS transponders. In this paper we assume a pinhole model for camera geometry and propose perspective matrix computation using AIS positional data. Images obtained from calibrated cameras are then matched, and pixel associations are used to localize other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily-identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
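    A minimal sketch of the calibration step described here: estimating a 3x4 pinhole projection matrix from matched AIS positions (in a local metric frame) and pixel detections via the direct linear transform. The function names are ours, and a real deployment would add coordinate normalization and robust outlier rejection:

```python
import numpy as np

def dlt_projection_matrix(world_pts, pixel_pts):
    """Estimate a 3x4 pinhole projection matrix P from >= 6
    correspondences between 3D points (e.g. AIS vessel positions)
    and their pixel coordinates, via the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    return vt[-1].reshape(3, 4)   # smallest-singular-vector solution

def project(P, X):
    """Project a 3D point with the estimated matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]           # homogeneous -> pixel coordinates
```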

  12. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
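    For the rigid-transformation baseline mentioned above, the extrinsics between two RGB-D views can be recovered from matched 3D points (e.g. sphere-center detections of the calibration object) with the standard Kabsch/Procrustes solution. This sketch shows only that baseline, not the paper's full pipeline:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    from matched Nx3 point sets such as sphere-center detections
    in two RGB-D views (Kabsch/Procrustes)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection in the least-squares solution.
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```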

  13. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968

  14. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s; a smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the mean position (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean square weighted deviations based on the sums of the total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon PowerShot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operating at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix; this last conversion is performed so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtracting the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
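    The recipe lends itself to a compact classroom script. A sketch following the steps above (peak location for the mean, pixel-value-weighted deviations for sx and sy, subtraction, then the 0.07 threshold); the exact weighting details are our illustrative reading of the abstract:

```python
import numpy as np

def moonless_cloud_cover(img, threshold=0.07):
    """Model the moon as a 2D Gaussian and estimate cloud cover from
    the residual image. `img` is a grayscale sky image scaled to
    [0, 1]; a sketch, not the authors' exact code."""
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)  # peak = mean
    A = img[y0, x0]
    yy, xx = np.indices(img.shape)
    w = img / img.sum()                          # pixel-value weights
    sx = np.sqrt(np.sum(w * (xx - x0) ** 2))     # weighted deviations
    sy = np.sqrt(np.sum(w * (yy - y0) ** 2))
    moon = A * np.exp(-((xx - x0) ** 2 / (2 * sx ** 2)
                        + (yy - y0) ** 2 / (2 * sy ** 2)))
    residual = img - moon                        # "moonless" image
    return np.mean(residual > threshold)         # cloud-cover ratio
```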

  15. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef

    There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storages. Particularly, for today's and the next generation of nuclear facilities, increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO{sub 2}) range, whereas the most tolerant commercial or prototype products based on solid state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits the camera miniaturization and is not compatible with its future integration in remote-handling or robotic systems. The hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the technology used is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We'll present at the conference a comparative study of the radiation responses of these radiation-hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The strategy targeted to develop the complete radiation-hard camera electronics will also be presented. Another important element of the camera is the optical system that transports the image from the scene to the image sensor. This arrangement of glass-based lenses is affected by radiation through two mechanisms: radiation-induced absorption and radiation-induced refractive index changes. The first limits the signal-to-noise ratio of the image, whereas the second directly affects the resolution of the camera. We'll present at the conference a coupled simulation/experiment study of these effects for various commercial glasses, and a vulnerability study of typical optical systems at MGy doses. The last very important part of the camera is the illumination system, which can be based on various technologies of emitting devices such as LEDs, SLEDs or lasers. The most promising solutions for high radiation doses will be presented at the conference. In addition to this hardening-by-component approach, the global radiation tolerance of the camera can be drastically improved by working at the system level, combining innovative approaches, e.g. for the optical and illumination systems. We'll present at the conference the developed approach allowing the camera lifetime to be extended up to the MGy dose range. (authors)

  16. Evaluation of the poly(lactic-co-glycolic acid)/pluronic F127 for injection laryngoplasty in rabbits.

    PubMed

    Lee, Jin Ho; Kim, Dong Wook; Kim, Eun Na; Park, Seok-Won; Kim, Hee-Bok; Oh, Se Heang; Kwon, Seong Keun

    2014-11-01

    Poly(lactic-co-glycolic acid) (PLGA) is an aliphatic polyester and one of the most commonly used synthetic biodegradable polymers for tissue engineering. The objectives of this study were to evaluate the biocompatibility of PLGA/Pluronic F127 in the vocal fold. A randomized, prospective, controlled animal study. University laboratory. We used 18 New Zealand white rabbits, which were divided into 5% PLGA solution (n = 9) and 10% PLGA solution (n = 9) groups. The PLGA/Pluronic F127 solutions were injected into the rabbit vocal fold. Laryngoscopic exams were performed at 1, 4, and 8 weeks after implantation; then larynx specimens were sampled. High-speed video camera examination was performed for functional analysis of vocal mucosa vibration at 8 weeks after implantation. We also evaluated the amplitude of the mucosal wave from the laryngeal midline on the high-speed recordings. Histologic study of the larynx specimens was performed at 4 and 8 weeks. All animals survived until the scheduled period. Laryngoscopic analysis showed that both 5% and 10% PLGA/Pluronic F127 were maintained 8 weeks after injection without significant inflammatory response. On functional analysis, high-speed camera examination revealed regular and symmetric contact of the vocal fold mucosa, without distorted movement from the injected PLGA/Pluronic F127. Histologically, no significant inflammation was observed in the injected vocal fold. As a vocal fold injection material, PLGA/Pluronic F127 showed good biocompatibility without significant inflammatory response. Further experiments will follow to elucidate its role for drug or gene delivery into the vocal fold. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2014.

  17. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  18. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results only guarantee a minimum of the 2D projection errors on the image plane, not a minimum of the 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
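    A minimal single-view sketch of the back projection idea: pixels are cast back through a pinhole model onto the Z = 0 checkerboard plane and compared with the ideal board coordinates, and the resulting 3D residuals are minimized. The parameterization and names are ours; the paper's model (distortion terms, multiple views) is richer:

```python
import numpy as np
from scipy.optimize import least_squares

def back_project_to_board(params, pixels):
    """Back-project Nx2 pixels onto the Z = 0 board plane.
    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz] for one view
    (axis-angle rotation); a distortion-free illustrative model."""
    fx, fy, cx, cy = params[:4]
    rvec, t = params[4:7], params[7:10]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta                       # Rodrigues rotation
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    d_cam = np.stack([(pixels[:, 0] - cx) / fx,
                      (pixels[:, 1] - cy) / fy,
                      np.ones(len(pixels))], axis=1)
    d_board = d_cam @ R                        # rows are R^T @ d_cam
    origin = -R.T @ t                          # camera center, board frame
    s = -origin[2] / d_board[:, 2]             # intersect plane Z = 0
    return origin[:2] + s[:, None] * d_board[:, :2]

def residuals(params, pixels, board_xy):
    """3D (in-plane) reconstruction error against ideal board points."""
    return (back_project_to_board(params, pixels) - board_xy).ravel()

# Refinement step: result = least_squares(residuals, x0,
#                                         args=(pixels, board_xy))
```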

  19. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations where camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized by applying the proven fiber optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN) based control network. Such a system does not exist commercially at present, since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  20. The spatial resolution of a rotating gamma camera tomographic facility.

    PubMed

    Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R

    1983-12-01

    An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
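    The role of the convolution kernel's frequency filter can be illustrated directly. The sketch below applies a ramp-style filter to one projection with the cutoff set at K_N/2, mimicking the finding that the band between K_N/2 and K_N contributes little; the paper's four specific filter functions are not reproduced here:

```python
import numpy as np

def filter_projection(proj, cutoff_frac=0.5):
    """Apply a ramp (Ram-Lak style) convolution filter to one
    gamma-camera projection in the frequency domain, zeroing
    frequencies above cutoff_frac * Nyquist. Illustrative sketch."""
    n = len(proj)
    freqs = np.fft.fftfreq(n)           # cycles/sample; Nyquist = 0.5
    ramp = np.abs(freqs)
    ramp[np.abs(freqs) > cutoff_frac * 0.5] = 0.0
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

# Each filtered projection would then be back-projected across the
# reconstruction grid at its acquisition angle.
```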

  1. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function which remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  2. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increased numbers of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day". This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.

  3. Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.

    2016-03-01

    The supercontinuum laser light source has many advantages over other light sources, including broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR-CCD InGaAs camera detector (Goodrich Sensors Inc. high response camera SU320KTSW-1.7RT with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.

  4. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
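    The zero-point calibration these improvements feed into follows the standard photometric relation m = zp - 2.5 log10(flux). A sketch with hypothetical reference-star values, not MEO pipeline code:

```python
import numpy as np

def zero_point(synthetic_mags, instrumental_flux):
    """Per-frame photometric zero-point from reference stars, given
    synthetic bandpass magnitudes and background-subtracted,
    linearity-corrected instrumental fluxes (counts/s).
    Uses zp = m + 2.5 * log10(flux) per star."""
    zps = synthetic_mags + 2.5 * np.log10(instrumental_flux)
    return zps.mean(), zps.std(ddof=1)  # zero-point, star-to-star scatter

mags = np.array([4.8, 5.1, 3.9, 4.4])            # hypothetical stars
flux = np.array([1250.0, 980.0, 2900.0, 1800.0])  # hypothetical counts/s
zp, scatter = zero_point(mags, flux)
# A meteor's magnitude then follows as m = zp - 2.5*log10(meteor_flux).
```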

  5. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  6. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal- Oxide Semiconductor Active Pixel Sensor (CMOS), establishing an alternative to the Charged Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  7. A compact neutron scatter camera for field deployment

    DOE PAGES

    Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.

    2016-08-23

    Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg) battery operable neutron scatter camera designed for field deployment. Unlike most other systems, the configuration of the sixteen liquid-scintillator detection cells is arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.

  8. Refueling machine with relative positioning capability

    DOEpatents

    Challberg, R.C.; Jones, C.R.

    1998-12-15

    A refueling machine is disclosed having relative positioning capability for refueling a nuclear reactor. The refueling machine includes a pair of articulated arms mounted on a refueling bridge. Each arm supports a respective telescoping mast. Each telescoping mast is designed to flex laterally in response to application of a lateral thrust on the end of the mast. A pendant mounted on the end of the mast carries an air-actuated grapple, television cameras, ultrasonic transducers and waterjet thrusters. The ultrasonic transducers are used to detect the gross position of the grapple relative to the bail of a nuclear fuel assembly in the fuel core. The television cameras acquire an image of the bail which is compared to a pre-stored image in computer memory. The pendant can be rotated until the television image and the pre-stored image match within a predetermined tolerance. Similarly, the waterjet thrusters can be used to apply lateral thrust to the end of the flexible mast to place the grapple in a fine position relative to the bail as a function of the discrepancy between the television and pre-stored images. 11 figs.

  9. Refueling machine with relative positioning capability

    DOEpatents

    Challberg, Roy Clifford; Jones, Cecil Roy

    1998-01-01

    A refueling machine having relative positioning capability for refueling a nuclear reactor. The refueling machine includes a pair of articulated arms mounted on a refueling bridge. Each arm supports a respective telescoping mast. Each telescoping mast is designed to flex laterally in response to application of a lateral thrust on the end of the mast. A pendant mounted on the end of the mast carries an air-actuated grapple, television cameras, ultrasonic transducers and waterjet thrusters. The ultrasonic transducers are used to detect the gross position of the grapple relative to the bail of a nuclear fuel assembly in the fuel core. The television cameras acquire an image of the bail which is compared to a pre-stored image in computer memory. The pendant can be rotated until the television image and the pre-stored image match within a predetermined tolerance. Similarly, the waterjet thrusters can be used to apply lateral thrust to the end of the flexible mast to place the grapple in a fine position relative to the bail as a function of the discrepancy between the television and pre-stored images.

  10. Fluorometric Biosniffer Camera "Sniff-Cam" for Direct Imaging of Gaseous Ethanol in Breath and Transdermal Vapor.

    PubMed

    Arakawa, Takahiro; Sato, Toshiyuki; Iitani, Kenta; Toma, Koji; Mitsubayashi, Kohji

    2017-04-18

    Various volatile organic compounds can be found in human transpiration, breath and body odor. In this paper, a novel two-dimensional fluorometric imaging system, known as a "sniffer-cam", for ethanol vapor released from human breath and palm skin was constructed and validated. This imaging system measures ethanol vapor concentrations as intensities of fluorescence through an enzymatic reaction induced by alcohol dehydrogenase (ADH). The imaging system consists of multiple ultraviolet light emitting diode (UV-LED) excitation sheets, an ADH enzyme-immobilized mesh substrate and a high-sensitivity CCD camera. The system uses ADH for recognition of ethanol vapor: it measures ethanol vapor via the fluorescence of nicotinamide adenine dinucleotide (NADH), which is produced by the enzymatic reaction on the mesh. This NADH fluorometric imaging system achieved two-dimensional real-time imaging of the ethanol vapor distribution (0.5-200 ppm). The system showed rapid and accurate responses and visible measurements, which could enable real-time analysis of metabolic function in the near future.

  11. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-mode sense signals, a vision signal and a motion signal, and the equipment is accordingly fitted with a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a module-layered framework, which supports real-time collection (60 fps), processing and transmission via synchronous fusion of asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the motion-sensor algorithm. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. As the results show, overall energy consumption can be as low as 0.5 W.

  12. Handheld hyperspectral imager system for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-08-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  13. Hand-held hyperspectral imager for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-03-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  14. First results from the TOPSAT camera

    NASA Astrophysics Data System (ADS)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  15. Effects of red light camera enforcement on fatal crashes in large U.S. cities.

    PubMed

    Hu, Wen; McCartt, Anne T; Teoh, Eric R

    2011-08-01

    To estimate the effects of red light camera enforcement on per capita fatal crash rates at intersections with signal lights. From the 99 large U.S. cities with more than 200,000 residents in 2008, 14 cities were identified with red light camera enforcement programs for all of 2004-2008 but not at any time during 1992-1996, and 48 cities were identified without camera programs during either period. Analyses compared the citywide per capita rate of fatal red light running crashes and the citywide per capita rate of all fatal crashes at signalized intersections during the two study periods, and rate changes then were compared for cities with and without camera programs. Poisson regression was used to model crash rates as a function of red light camera enforcement, land area, and population density. The average annual rate of fatal red light running crashes declined for both study groups, but the decline was larger for cities with red light camera enforcement programs than for cities without camera programs (35% vs. 14%). The average annual rate of all fatal crashes at signalized intersections decreased by 14% for cities with camera programs and increased slightly (2%) for cities without cameras. After controlling for population density and land area, the rate of fatal red light running crashes during 2004-2008 for cities with camera programs was an estimated 24% lower than what would have been expected without cameras. The rate of all fatal crashes at signalized intersections during 2004-2008 for cities with camera programs was an estimated 17% lower than what would have been expected without cameras. Red light camera enforcement programs were associated with a statistically significant reduction in the citywide rate of fatal red light running crashes and a smaller but still significant reduction in the rate of all fatal crashes at signalized intersections. The study adds to the large body of evidence that red light camera enforcement can prevent the most serious crashes. Communities seeking to reduce crashes at intersections should consider this evidence. Copyright © 2011 Elsevier Ltd. All rights reserved.
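    The modeling step described here can be sketched with a generalized linear model: a Poisson regression of fatal crash counts with a log-population offset (giving per capita rates) on a camera-program indicator, land area, and population density. The numbers below are invented for illustration; the study's actual data and covariate coding may differ:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical city-level data (the real study used 62 large U.S. cities).
df = pd.DataFrame({
    "crashes":    [12, 30, 8, 22, 15, 40],     # fatal red light running crashes
    "population": [3.1e5, 9.0e5, 2.4e5, 7.7e5, 4.0e5, 1.2e6],
    "cameras":    [1, 0, 1, 0, 1, 0],          # camera program indicator
    "land_area":  [150.0, 310.0, 95.0, 260.0, 180.0, 400.0],  # sq. miles
})
df["density"] = df["population"] / df["land_area"]

X = sm.add_constant(df[["cameras", "land_area", "density"]])
model = sm.GLM(df["crashes"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["population"]))  # models per capita rates
fit = model.fit()
# exp(coefficient on `cameras`) estimates the rate ratio for camera cities.
print(fit.summary())
```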

  16. Integration of near-surface remote sensing and eddy covariance measurements: new insights on managed ecosystem structure and functioning

    NASA Astrophysics Data System (ADS)

    Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.

    2009-12-01

    Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights on temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks causing earlier green-up and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and land management perspective.
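    The "simple greenness index" in question is commonly computed as the green chromatic coordinate, gcc = G/(R+G+B), averaged over a canopy region of interest; the sketch below assumes that form, which may differ in detail from the study's definition:

```python
import numpy as np

def green_chromatic_coordinate(img_rgb, roi=None):
    """Greenness index from an oblique canopy image: the green
    chromatic coordinate gcc = G / (R + G + B), averaged over an
    optional (r0, r1, c0, c1) region of interest."""
    img = np.asarray(img_rgb, dtype=float)
    if roi is not None:
        r0, r1, c0, c1 = roi
        img = img[r0:r1, c0:c1]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    gcc = g / np.maximum(r + g + b, 1e-9)   # guard against black pixels
    return float(gcc.mean())

# Tracking gcc through the image time series yields the green-up and
# flowering signals discussed above.
```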

  17. Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.

    2012-07-01

    The plenoptic camera has been proposed as an alternative wavefront sensor adequate for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the Light Field function, which is the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, like depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to the case of a telescope by using a lenslet array of the same f-number placed at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature and pyramid wavefront sensors, in the sense that all of those could be considered particular cases of the plenoptic wavefront sensor, because the information needed as the starting point for those sensors can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented in the paper, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.

  18. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  19. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... acceptable. (b) Game play standards. (1) The functions of seller and payout verifier shall be segregated... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  20. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Game play standards. (1) The functions of seller and payout verifier shall be segregated. Employees who... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  1. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  2. A Digital Approach to Learning Petrology

    NASA Astrophysics Data System (ADS)

    Reid, M. R.

    2011-12-01

    In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (~$200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used for helping students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual student's needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed. In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.

  3. The In-flight Spectroscopic Performance of the Swift XRT CCD Camera During 2006-2007

    NASA Technical Reports Server (NTRS)

    Godet, O.; Beardmore, A.P.; Abbey, A.F.; Osborne, J.P.; Page, K.L.; Evans, P.; Starling, R.; Wells, A.A.; Angelini, L.; Burrows, D.N.; et al.

    2007-01-01

    The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

  4. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    High-speed imaging facilities are important and necessary for building time-resolved measurement systems with multi-framing capability. A framing camera which satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a lens of long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each of 1024×1024 pixels, can be captured simultaneously with our camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.

  5. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  6. Design framework for a spectral mask for a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Berkner, Kathrin; Shroff, Sapna A.

    2012-01-01

    Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs capturing directional ray information enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, sampling of the spectral dimension of the plenoptic function is achieved. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.

  7. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, slow-scan CCD cameras need not be evaluated. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging of very dim specimens is required, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for dimmer samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  8. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors, respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed-distribution determination, characterization of meteor showers and sporadic sources, and informing the public about bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The two-camera configuration detected 5,470 meteors over its two years of operation, and the eight-camera network has detected 3,423 meteors in its first five months of operation (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20-degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis, and every morning the server automatically generates an e-mail and a web page containing an analysis of the previous night's events. The current status of the networks is described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, are discussed.

  9. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    DOE PAGES

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...

    2014-06-11

    We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation, including alpha, neutron, beta, and fission-fragment particles; the detector's response to this broad range of ionizing radiation prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified, and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particle types; additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.
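
    The event-by-event processing the abstract describes can be sketched in a few lines: segment individual scintillation flashes from a frame, then estimate each event's sub-pixel centroid and summed energy. This is a minimal stand-in, assuming a simple fixed threshold and a synthetic frame; the real system runs comparable logic on GPU hardware in real time.

```python
import numpy as np
from scipy import ndimage

def detect_events(frame, threshold):
    labels, n = ndimage.label(frame > threshold)            # segment flashes
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(frame, labels, idx)  # sub-pixel (row, col)
    energies = ndimage.sum(frame, labels, idx)              # summed counts per event
    return centroids, energies

rng = np.random.default_rng(0)
frame = rng.poisson(2.0, (256, 256)).astype(float)          # background
frame[100:103, 50:53] += 80.0                               # one synthetic event
print(detect_events(frame, threshold=20.0))
```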

  10. Getting the Picture: Using the Digital Camera as a Tool to Support Reflective Practice and Responsive Care

    ERIC Educational Resources Information Center

    Luckenbill, Julia

    2012-01-01

    Many early childhood educators use cameras to share the charming things that children do and the artwork they make. Programs often bind these photographs into portfolios and give them to children and their families as mementos at the end of the year. In the author's classrooms, they use photography on a daily basis to document children's…

  11. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    ERIC Educational Resources Information Center

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  12. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. As an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras; accurate camera calibration and orientation procedures are therefore prerequisites for extracting precise and reliable 3D metric information from images. The lack of accurate interior orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal-point offset, and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but sensor orientation and calibration are generally performed with a perspective geometric model by means of bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for the Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intent is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal-length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object-point discrepancies and standard errors. The results show the robustness of the SVM approach in correcting image coordinates by modelling the total distortion in the on-the-job calibration process using a limited number of images.
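
    As a rough illustration of the approach (not the authors' code), an RBF-kernel support vector regressor can be trained to predict distortion as a function of normalized image coordinates, here on synthetic radially distorted data; the prediction is then subtracted to correct measured coordinates. In practice one regressor per coordinate component would be used.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, (500, 2))                 # normalized image coordinates
r2 = (xy ** 2).sum(axis=1)
dx = 0.05 * r2 * xy[:, 0] + rng.normal(0, 1e-4, 500)  # synthetic radial distortion in x

model = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(xy, dx)
x_corrected = xy[:, 0] - model.predict(xy)            # distortion-corrected x coordinate
```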

  13. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear-array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, combined with push-broom imaging, is one method of enlarging the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; each single image strip obtained by each TDI-CCD detector can then be re-projected to the virtual detector of the big-virtual-camera coordinate system, using forward projection and backward projection, to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinates on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.

  14. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
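
    The Bland-Altman agreement analysis named above is straightforward to reproduce. The sketch below, with simulated stand-in values for the 150 patients, computes the mean difference (bias) and 95% limits of agreement between paired measurements from the two cameras.

```python
import numpy as np

def bland_altman(a, b):
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)   # bias and 95% limits

rng = np.random.default_rng(2)
standard = rng.normal(60.0, 10.0, 150)        # e.g. per-patient functional values
czt = standard + rng.normal(0.5, 2.0, 150)    # CZT values with small bias and scatter
print(bland_altman(czt, standard))
```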

  15. Comparison of the temperature accuracy between smart phone based and high-end thermal cameras using a temperature gradient phantom

    NASA Astrophysics Data System (ADS)

    Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.

    2017-03-01

    Recently, low-cost smartphone-based thermal cameras have been considered for use in a clinical setting to monitor physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled microbolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom was developed, based on thermistor heating at both ends of a black-coated metal strip, to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study, three FLIR ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated for absolute temperature and for temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks are performed for quality control.

  16. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation, in real time, of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
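
    The digital analogue of the optical processing in this patent is a 2-D cross-correlation, which can be computed via the FFT convolution theorem; the location of the correlation peak then encodes the stereo disparity and hence target range. A minimal sketch with a simulated 7-pixel shift (the sign of the recovered offset depends on the chosen convention):

```python
import numpy as np

def cross_correlate(left, right):
    F = np.fft.fft2(left)
    G = np.fft.fft2(right)
    cc = np.fft.ifft2(F * np.conj(G)).real       # circular cross-correlation
    return np.fft.fftshift(cc)                   # center the zero-offset bin

rng = np.random.default_rng(3)
left = rng.normal(size=(128, 128))
right = np.roll(left, 7, axis=1)                 # simulate a 7-pixel disparity
surface = cross_correlate(left, right)
peak = np.unravel_index(np.argmax(surface), surface.shape)
disparity = peak[1] - 64                         # offset from the centered zero bin
```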

  17. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

    The CCD Toolkit (1) is a software tool for testing CCD cameras which allows important characteristics of a camera to be measured: readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed to ensure future compatibility with different operating systems, Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
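
    Two of the listed measurements, readout noise and total gain, are commonly obtained from pairs of bias and flat-field frames via the standard photon-transfer relations. The sketch below (not the CCD Toolkit itself) uses simulated frames; with real data, the FITS images would be loaded instead.

```python
import numpy as np

def read_noise_adu(bias1, bias2):
    return np.std(bias1 - bias2) / np.sqrt(2)     # read noise in ADU

def gain_e_per_adu(flat1, flat2, bias1, bias2):
    signal = 0.5 * ((flat1 - bias1).mean() + (flat2 - bias2).mean())
    var = np.var(flat1 - flat2) / 2               # shot + read variance (ADU^2)
    return signal / (var - read_noise_adu(bias1, bias2) ** 2)

rng = np.random.default_rng(4)
b1, b2 = rng.normal(100.0, 5.0, (2, 512, 512))    # simulated bias frames
e1, e2 = rng.poisson(20000, (2, 512, 512))        # photoelectrons
f1, f2 = b1 + e1 / 2.0, b2 + e2 / 2.0             # flats at a true gain of 2 e-/ADU
print(read_noise_adu(b1, b2), gain_e_per_adu(f1, f2, b1, b2))
```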

  18. Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.

    PubMed

    Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S

    2001-04-01

    Higher count-rate gamma cameras than those currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design, coupled with conventional detector electronics and the traditional Anger positioning algorithm, hinders higher count-rate imaging because gamma-ray signals pile up in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, recovering pileup events can significantly increase count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileup. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design, called high-yield-pileup-event-recovery (HYPER) electronics, for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, or dead-time loss. At these counting rates, multiple-pileup events (≥3 events piling together) were the predominant occurrence, and the HYPER circuit functioned well to resolve and recover them. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to improve imaging count rates several times beyond what we have shown. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging, and the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.

  19. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods are proposed for estimating the spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating the spectral responses of the camera. The spectral distribution of sky radiance is represented as a polynomial in wavelength, with coefficients obtained from digital RGB counts by a linear transformation. The spectral distribution of radiance measured this way is consistent with that obtained by spectrometer and by radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and for studying the relation between irradiance at the ground and cloud distribution.
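
    The paper's representation can be sketched as follows: the spectrum is a polynomial in wavelength, and a linear map takes the three RGB counts to the polynomial coefficients. Here the map is learned by least squares from assumed training pairs (rgb_train of shape (n, 3), spectra_train of shape (n, 64)); the basis degree and wavelength grid are illustrative choices, not the authors' exact derivation.

```python
import numpy as np

wl = np.linspace(430.0, 680.0, 64)                   # nm, the validated range
x = (wl - wl.mean()) / np.ptp(wl)
basis = np.vander(x, 5)                              # degree-4 polynomial basis, (64, 5)

def learn_transform(rgb_train, spectra_train):
    # per-sample polynomial coefficients c such that basis @ c ~= spectrum
    c, *_ = np.linalg.lstsq(basis, spectra_train.T, rcond=None)   # (5, n)
    # linear map M from RGB counts to coefficients: rgb @ M ~= c
    M, *_ = np.linalg.lstsq(rgb_train, c.T, rcond=None)           # (3, 5)
    return M

def estimate_spectrum(rgb, M):
    return basis @ (rgb @ M)        # estimated radiance on the wl grid
```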

  20. The Wide Angle Camera of the ROSETTA Mission

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Verani, S.; Bertini, I.; Lazzarin, M.; Rampazzi, F.; Cremonese, G.; Ragazzoni, R.; Marzari, F.; Angrilli, F.; Bianchini, G. A.; Debei, S.; Dececco, M.; Guizzo, G.; Parzianello, G.; Ramous, P.; Saggin, B.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.; Nicolosi, G.; Pelizzo, M. G.; Tondello, G.; Brunello, P.; Peron, F.

    This paper gives a brief description of the Wide Angle Camera (WAC), built by the Centro Servizi e Attività Spaziali (CISAS) of the University of Padova for the ESA ROSETTA mission to comet 46P/Wirtanen and asteroids 4979 Otawara and 140 Siwa. The WAC is part of the OSIRIS imaging system, which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS also had the responsibility of building the shutter and the front-cover mechanism for the NAC. The flight model of the WAC was delivered in December 2001 and has already been integrated on ROSETTA.

  1. Mars Exploration Rover Navigation Camera in-flight calibration

    NASA Astrophysics Data System (ADS)

    Soderblom, Jason M.; Bell, James F.; Johnson, Jeffrey R.; Joseph, Jonathan; Wolff, Michael J.

    2008-06-01

    The Navigation Camera (Navcam) instruments on the Mars Exploration Rover (MER) spacecraft support both tactical operations and scientific observations where color information is not necessary: large-scale morphology, atmospheric monitoring including cloud observations and dust devil movies, and context imaging for both the thermal emission spectrometer and the in situ instruments on the Instrument Deployment Device. The Navcams are a panchromatic stereoscopic imaging system built using identical charge-coupled device (CCD) detectors and nearly identical electronics boards to the other cameras on the MER spacecraft. Previous calibration efforts focused primarily on a detailed geometric calibration, in line with the principal function of the Navcams: providing data for the MER navigation team. This paper provides a detailed description of a new Navcam calibration pipeline developed to provide an absolute radiometric calibration that we estimate to have an absolute accuracy of 10% and a relative precision of 2.5%. Our calibration pipeline includes steps to model and remove the bias offset, the dark-current charge that accumulates in both the active and readout regions of the CCD, and the shutter smear. It also corrects pixel-to-pixel responsivity variations using flat-field images, and converts from raw instrument-corrected digital-number values per second to units of radiance (W m-2 nm-1 sr-1), or to radiance factor (I/F). We also describe initial results of two applications in which radiance-calibrated Navcam data provide unique information for surface photometric and atmospheric aerosol studies.
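
    The calibration chain described above reduces, schematically, to a sequence of subtractions and normalizations. The sketch below follows the step order in the abstract, with every input (bias model, dark rate, smear estimate, flat field, responsivity constant) treated as an assumed precomputed array; the actual MER pipeline models each term in far more detail.

```python
import numpy as np

def calibrate(raw, t_exp, bias, dark_rate, smear, flat, resp):
    img = raw.astype(float) - bias      # remove bias offset
    img -= dark_rate * t_exp            # dark charge (active + readout regions)
    img -= smear                        # shutter smear estimate
    img /= flat                         # pixel-to-pixel responsivity (flat field)
    return img / (t_exp * resp)         # DN/s -> W m^-2 nm^-1 sr^-1
```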

  2. Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation

    NASA Astrophysics Data System (ADS)

    Bourree, Loig E.

    2014-05-01

    Identification of potential threats in low-light conditions through imaging is commonly achieved with closed-circuit television (CCTV) and surveillance cameras by combining the extended near-infrared (NIR) response (800-1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems used for long-range observation often require high-power lasers in order to put sufficient photons on target to acquire detailed images at night. While such systems may adequately identify targets at long range, the NIR illumination needed to achieve this can easily be detected and therefore may not be suitable for covert applications. To reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced, increasing the photon integration time and thus improving the signal-to-noise ratio of the image; however, this can hinder the camera's ability to image moving objects with high fidelity. To address these drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically for low-light-level imaging. By combining this CIS with field-programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.

  3. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Little research has addressed the influence of imaging-system resolution on the results of this increasingly popular method. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial-resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties increase. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as more typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.

  4. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is keenly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star-tracker problem, and many target-location algorithms have been developed; it is widely accepted that least-squares ellipse fitting is the most accurate. However, problems remain for efficient digital close-range photogrammetry: reconfirming the subpixel target-location algorithms for consumer-grade digital cameras, establishing the relationship between the number of edge points along the target boundary and accuracy, and providing an indicator for estimating the accuracy of normal digital close-range photogrammetry with consumer-grade cameras. With this motive, this paper empirically tests several subpixel target-location algorithms and investigates an accuracy indicator, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
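
    The simplest of the subpixel target-location algorithms under test, the intensity-weighted centroid, is sketched below; least-squares ellipse fitting of edge points, which the cited literature identifies as most accurate, would replace it in a full implementation. The example window is synthetic.

```python
import numpy as np

def weighted_centroid(window):
    w = window - window.min()                   # suppress the background pedestal
    rows, cols = np.indices(w.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# example: a blurred circular target centered near (10.3, 11.7) in the window
y, x = np.indices((21, 21))
target = np.exp(-((y - 10.3) ** 2 + (x - 11.7) ** 2) / 8.0)
print(weighted_centroid(target))                # close to (10.3, 11.7)
```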

  5. Retrieval System for Calcined Waste for the Idaho Cleanup Project - 12104

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastman, Randy L.; Johnston, Beau A.; Lower, Danielle E.

    This paper describes the conceptual approach to retrieving radioactive calcine waste, hereafter called calcine, from stainless steel storage bins contained within concrete vaults. The retrieval system will allow evacuation of the granular solids (calcine) from the storage bins through stationary vacuum nozzles. The nozzles will use air jets for calcine fluidization and will be able to rotate and direct the fluidization or displacement of the calcine within the bin. Each bin will have a single retrieval system installed prior to operation to prevent worker exposure to the high radiation fields. The addition of an articulated camera arm will allow operations monitoring, and the arm will be equipped with contingency tools to aid in calcine removal. Possible challenges (calcine bridging and rat-holing) associated with calcine retrieval and transport, including potential solutions for bin pressurization, calcine fluidization, and waste confinement, are also addressed. The Calcine Disposition Project has the responsibility to retrieve, treat, and package HLW calcine. The calcine retrieval system has been designed to incorporate the functions and technical characteristics established by the retrieval-system functional analysis; by adequately implementing the highest-ranking technical characteristics into the design, the system will be able to satisfy the functional requirements. The retrieval-system conceptual design provides the means for removing bulk calcine from the bins of the CSSF vaults. Top-down vacuum retrieval coupled with an articulating camera arm will allow a robust, contained process capable of evacuating bulk calcine from the bins and transporting it to the processing facility. The system is designed to fluidize, vacuum, transport, and direct the calcine from its current location to the CSSF roof-top transport lines. An articulating camera arm, deployed through an adjacent access riser, will work in conjunction with the retrieval nozzle to aid in calcine fluidization, remote viewing, breaking of clumped calcine, and recovery from off-normal conditions. As the design of the retrieval system progresses from conceptual to preliminary, increasing attention will be directed toward detailed design and proof-of-concept testing.

  6. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

    The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high-resolution photographic prints and transparencies. The images displayed will usually be remotely sensed LANDSAT scenes.

  7. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

    A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems, covering a range of speeds from 100 to 14 million pictures per second. Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques, including image processing and illumination. A list of cameras (many of them French) available in 1980 is presented.

  8. Miniaturized unified imaging system using bio-inspired fluidic lens

    NASA Astrophysics Data System (ADS)

    Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa

    2008-08-01

    Miniaturized imaging systems have become ubiquitous, found in an ever-increasing number of devices such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems has not differed significantly from that of conventional cameras, and the only established method of focusing is to vary the lens distance. The variable-shape crystalline lens found in animal eyes, on the other hand, offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focus, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image-quality difference between central and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color-dispersion correction. A design for the world's smallest surgical camera with 3X optical zoom is also demonstrated using the hybrid-lens approach.

  9. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded-aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture whose pattern is designed for rapid acquisition of high-resolution images while preserving high spatial frequencies in defocused regions. The pattern is obtained by minimizing an objective function that computes the expected value of the perceptual deblurring error; the imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and consequently ALPR performance. The captured images can be analyzed directly by the ALPR software up to a specific depth, which is 13 m in our case, versus 11 m for the circular aperture. Moreover, since deblurring images captured by our aperture yields fewer artifacts than deblurring those captured by the circular aperture, images can first be deblurred and then analyzed by the ALPR software; in this way, the DoF and the recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m, whereas the conventional aperture is limited to 11 m.

  10. Forensic use of photo response non-uniformity of imaging sensors and a counter method.

    PubMed

    Dirik, Ahmet Emir; Karaküçük, Ahmet

    2014-01-13

    Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity (PRNU) noise has been used in source camera identification (SCI). However, this technique can also be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
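
    PRNU-based source camera identification can be sketched in a few steps: compute noise residuals (image minus a denoised version), average the residuals from one camera into a fingerprint, and attribute a query image by the normalized correlation of its residual with that fingerprint. A Gaussian filter stands in here for the wavelet denoiser usual in the literature; inputs are assumed to be grayscale float arrays.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=1.5):
    return img - gaussian_filter(img, sigma)      # noise residual

def fingerprint(images):
    return np.mean([residual(i) for i in images], axis=0)

def correlation(query, fp):
    r, f = residual(query).ravel(), fp.ravel()
    r, f = r - r.mean(), f - f.mean()
    return (r @ f) / np.sqrt((r @ r) * (f @ f))   # normalized correlation score
```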

  11. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant speed, or increasing or decreasing. Other parameters include pan, tilt, slide, raise or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  12. Where can I find spectral response information for the MISR bands?

    Atmospheric Science Data Center

    2014-12-08

    ... green, red, and near-infrared, respectively. Variations in spectral response from one camera to another, for the same band, are minor and ... response information for both the in-band region of each filter as well as for the total band.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranson, W.F.; Schaeffel, J.A.; Murphree, E.A.

    The response of prestressed and preheated plates subjected to an exponentially decaying blast load was experimentally determined. A grid was reflected from the front surface of the plate, and the response was recorded with a high-speed camera. The camera used in this analysis was a rotating-drum camera operating at 20,000 frames per second, with a maximum of 224 frames at 39-microsecond separation. In-plane tension loads were applied to the plate by means of air cylinders; the maximum biaxial load applied to the plate was 500 pounds. Plate preheating was obtained with resistance heaters located in the specimen plate holder, with a maximum capability of 500 F. Data analysis was restricted to the maximum conditions at the center of the plate. Strains were determined from the photographic data, and the stresses were calculated from the strain data. Results were obtained from zero preload up to 480 pounds in-plane tension and a plate temperature of 490 F. The blast load ranged from 6 to 23 psi.

  14. Image processing and data reduction of Apollo low light level photographs

    NASA Technical Reports Server (NTRS)

    Alvord, G. C.

    1975-01-01

    The removal of lens-induced vignetting from a selected sample of the Apollo low-light-level photographs is discussed. The methods used were developed earlier. A study of the effect of noise on vignetting removal and of the comparability of the Apollo 35mm Nikon lens vignetting was also undertaken. The vignetting removal was successful to about 10% photometry, and noise has a severe effect on the useful photometric output data. Separate vignetting functions must be used for different flights, since the vignetting function varies from camera to camera in size and shape.
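
    Vignetting removal of the kind the report applies amounts to dividing each photograph by the lens's vignetting function, normalized to unity at its maximum. The sketch below uses an assumed cos^4-type radial falloff purely as a stand-in; as the abstract stresses, each camera requires its own measured function.

```python
import numpy as np

def devignette(img, vignette):
    return img / (vignette / vignette.max())      # flat-field division

h, w = 512, 512
y, x = np.indices((h, w))
r = np.hypot(y - h / 2, x - w / 2) / (h / 2)      # normalized radius
vignette = np.cos(np.arctan(0.5 * r)) ** 4        # assumed cos^4 falloff model
```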

  15. Development of Digital SLR Camera: PENTAX K-7

    NASA Astrophysics Data System (ADS)

    Kawauchi, Hiraku

    The DSLR "PENTAX K-7" comes in an easy-to-carry, minimal yet functional small form factor, a long-inherited identity of the PENTAX brand. Despite its compact body, this camera has up-to-date, enhanced fundamental features such as a high-quality viewfinder, an enhanced shutter mechanism, extended continuous-shooting capabilities, reliable exposure control, and fine-tuned AF systems, as well as a string of new technologies such as movie-recording capability and an automatic leveling function. The main focus of this article is to reveal the ideas behind the concept of this product and its distinguished features.

  16. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging

    DOE PAGES

    Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...

    2016-02-26

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross-talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied, which imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing-camera head to prevent these artifacts.

  17. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    PubMed

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.

  18. The status of MUSIC: the multiwavelength sub-millimeter inductance camera

    NASA Astrophysics Data System (ADS)

    Sayers, Jack; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Golwala, Sunil R.; Hollister, Matthew I.; Lam, Albert; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David A.; Mroczkowski, Anthony K.; Noroozian, Omid; Nguyen, Hien Trong; Schlaerth, James A.; Siegel, Seth R.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2014-08-01

    The Multiwavelength Sub/millimeter Inductance Camera (MUSIC) is a four-band photometric imaging camera operating from the Caltech Submillimeter Observatory (CSO). MUSIC is designed to utilize 2304 microwave kinetic inductance detectors (MKIDs), with 576 MKIDs for each observing band centered on 150, 230, 290, and 350 GHz. MUSIC's field of view (FOV) is 14' square, and the point-spread functions (PSFs) in the four observing bands have 45'', 31'', 25'', and 22'' full-widths at half maximum (FWHM). The camera was installed in April 2012 with 25% of its nominal detector count in each band, and has subsequently completed three short sets of engineering observations and one longer duration set of early science observations. Recent results from on-sky characterization of the instrument during these observing runs are presented, including achieved map- based sensitivities from deep integrations, along with results from lab-based measurements made during the same period. In addition, recent upgrades to MUSIC, which are expected to significantly improve the sensitivity of the camera, are described.

  19. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera that simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real image sequences.

  20. Performance evaluation for pinhole collimators of small gamma camera by MTF and NNPS analysis: Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il

    2009-06-01

    Presently, gamma camera systems are widely used in various medical diagnostic, industrial, and environmental fields, so quantitative and effective evaluation of their imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and the normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators, using Monte Carlo simulation. We simulated the system with a cylinder source and a disk source, and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and compared with full-width at half-maximum (FWHM), sensitivity, and differential uniformity. We found that MTF and NNPS are effective, novel standards for evaluating the imaging performance of gamma cameras in place of the conventional NEMA standards.
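
    A minimal NNPS estimate of the kind used in such studies can be computed from uniform-exposure images: average the 2-D periodograms of mean-subtracted frames and normalize by pixel area and the squared mean signal. The normalization below is one common convention, not necessarily the authors' exact definition.

```python
import numpy as np

def nnps(flats, pixel_pitch):
    """flats: stack of uniform-exposure frames, shape (m, n, n);
    pixel_pitch: pixel spacing (e.g. mm). Returns the 2-D NNPS."""
    n = flats.shape[-1]
    ps = np.zeros((n, n))
    for f in flats:
        ps += np.abs(np.fft.fft2(f - f.mean())) ** 2   # periodogram per frame
    ps /= len(flats)
    return ps * pixel_pitch ** 2 / (n * n * flats.mean() ** 2)
```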

  1. Objective evaluation of slanted edge charts

    NASA Astrophysics Data System (ADS)

    Hornung, Harvey

    2015-01-01

    Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures a degradation or difference between the captured image and the chart itself. The Spatial Frequency Response (SFR) method, part of the ISO 12233 standard, is now very commonly used in the imaging industry as a convenient way to measure a camera's modulation transfer function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it provides useful information on aliasing and can provide modulation for frequencies between half-Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white, for instance), whereas a sine chart requires precise handling of shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted-edge chart using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer; since the signal is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner must be calibrated in sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner, and then the true chart MTF is computed. This article compares measured MTFs from commercial charts and from charts printed on printers, compares how the contrast of the edge (using different shades of gray) affects the chart MTF, and concludes on the distance range and camera resolution over which the chart can reliably measure the camera MTF.
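
    The core of the slanted-edge SFR computation the article builds on can be sketched compactly: project pixels onto the edge normal to build a super-resolved edge-spread function, differentiate to a line-spread function, window, and take the FFT magnitude. The 4x oversampling and the assumption of a known edge angle are simplifications; full ISO 12233 implementations also estimate the angle and apply correction factors.

```python
import numpy as np

def sfr(roi, edge_angle_deg, oversample=4):
    rows, cols = np.indices(roi.shape)
    t = np.tan(np.deg2rad(edge_angle_deg))
    dist = (cols - t * rows).ravel()                 # distance along edge normal
    bins = np.round((dist - dist.min()) * oversample).astype(int)
    counts = np.bincount(bins)
    esf = np.bincount(bins, weights=roi.ravel()) / np.maximum(counts, 1)
    lsf = np.gradient(esf) * np.hanning(len(esf))    # windowed line-spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                    # normalize to DC
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf
```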

  2. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

    Smartphones are widely used at present, and most have cameras and various sensors such as a gyroscope, accelerometer, and magnetometer. Indoor navigation based on the smartphone is very important and valuable. A new indoor integrated navigation method suited to the features of the smartphone and of indoor navigation is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera, and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity updates, and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer, and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data-acquisition method is introduced, which uses the Android camera class to record images and timestamps from the smartphone camera. Two kinds of sensor data-acquisition methods are introduced and compared. The first method records sensor data and time with the Android SensorManager. The second method implements the open, close, data-receiving, and saving functions in C, and calls the sensor functions from Java through the JNI interface. Data-acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools), and the NDK (Native Development Kit); the software can record camera data, sensor data, and time simultaneously. Data-acquisition experiments were performed with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but sometimes loses sensor data, while the second method has much better real-time performance and much less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method. The sensor data of the gyroscope, accelerometer, and magnetometer were recorded for about 30 minutes, and the bias stability and noise features of the sensors were analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data-acquisition method can be applied to outdoor navigation.

  3. Support for the Naval Research Laboratory Environmental Passive Microwave Remote Sensing Program.

    DTIC Science & Technology

    1983-04-29

    L. H. Gesell, Project Manager. ABSTRACT: This document summarizes the data acquisition, reduction, and ... film camera, and other environmental sensors. CSC gradually assumed the bulk of the responsibility for operating this equipment. This included running ... radiometers, and setting up and operating the strip-film camera and other environmental sensors. Also of significant importance to the missions was

  4. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  5. Estimating Species Richness and Modelling Habitat Preferences of Tropical Forest Mammals from Camera Trap Data

    PubMed Central

    Rovero, Francesco; Martin, Emanuel; Rosa, Melissa; Ahumada, Jorge A.; Spitale, Daniel

    2014-01-01

    Medium-to-large mammals within tropical forests represent a rich and functionally diversified component of this biome; however, they continue to be threatened by hunting and habitat loss. Assessing these communities implies studying species richness and composition and determining a state variable of species abundance in order to infer changes in species distribution and habitat associations. The Tropical Ecology, Assessment and Monitoring (TEAM) network fills a chronic gap in standardized data collection by implementing a systematic biodiversity-monitoring framework, including mammal communities, across several sites. In this study, we used TEAM camera trap data collected in the Udzungwa Mountains of Tanzania, an area of exceptional importance for mammal diversity, to present an example of a baseline assessment of species occupancy. We used 60 camera trap locations and accumulated 1,818 camera days in 2009. Sampling yielded 10,647 images of 26 species of mammals. We estimated that a minimum of 32 species are in fact present, matching available knowledge from other sources. Estimated species richness at camera sites did not vary with a suite of habitat covariates derived from remote sensing; however, detection probability varied with functional guild, with herbivores being more detectable than other guilds. Species-specific occupancy modelling revealed novel ecological knowledge for the 11 most-detected species, highlighting patterns such as 'montane forest dwellers', e.g. the endemic Sanje mangabey (Cercocebus sanjei), and 'lowland forest dwellers', e.g. the suni antelope (Neotragus moschatus). Our results show that the analysis of camera trap data accounting for imperfect detection can provide a solid ecological assessment of mammal communities that can be systematically replicated across sites. PMID:25054806

  6. TESS Lens-Bezel Assembly Modal Testing

    NASA Technical Reports Server (NTRS)

    Dilworth, Brandon J.; Karlicek, Alexandra

    2017-01-01

    The Transiting Exoplanet Survey Satellite (TESS) program, led by the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology (MIT), will be the first-ever spaceborne all-sky transit survey. MIT Lincoln Laboratory is responsible for the cameras, including the lens assemblies, detector assemblies, lens hoods, and camera mounts. TESS is scheduled to be launched in August of 2017, with the primary goal of detecting small planets with bright host stars in the solar neighborhood, so that detailed characterizations of the planets and their atmospheres can be performed. The TESS payload consists of four identical cameras and a data handling unit. Each camera consists of a lens assembly with seven optical elements and a detector assembly with four charge-coupled devices (CCDs) and their associated electronics. The optical prescription requires that several of the lenses be in close proximity to a neighboring element. A finite element model (FEM) was developed to estimate the relative deflections between each lens-bezel assembly under launch loads, to verify that there are adequate clearances preventing the lenses from making contact. Modal tests using non-contact response measurements were conducted to experimentally estimate the modal parameters of the lens-bezel assembly and to validate the initial FEM assumptions. Keywords: non-contact measurements, modal analysis, model validation.

  7. A study of possible ``reef effects'' caused by a long-term time-lapse camera in the deep North Pacific

    NASA Astrophysics Data System (ADS)

    Vardaro, M. F.; Parmley, D.; Smith, K. L.

    2007-08-01

    The aggregation response of fish populations following the addition of artificial structures to seafloor habitats has been well documented in shallow-water reefs and at deeper structures such as oil extraction platforms. A long-term time-lapse camera was deployed for 27 four-month deployment periods at 4100 m in the eastern North Pacific to study abyssal megafauna activity and surface-benthos connections. The unique time-series data set provided by this research presented an opportunity to examine how deep-sea benthopelagic fish and epibenthic megafauna populations were affected by an isolated artificial structure and whether animal surveys at this site were biased by aggregation behavior. Counts were taken of benthopelagic grenadiers, Coryphaenoides spp., observed per week as well as numbers of the epibenthic echinoid Echinocrepis rostrata. No significant correlation ( rs=-0.39; p=0.11) was found between the duration of deployment (in weeks) and the average number of Coryphaenoides observed at the site. There was also no evidence of associative behavior around the time-lapse camera by E. rostrata ( rs=-0.32; p=0.19). The results of our study suggest that abyssal fish and epibenthic megafauna do not aggregate around artificial structures and that long-term time-lapse camera studies should not be impacted by aggregation response behaviors.

  8. Intraocular and extraocular cameras for retinal prostheses: Effects of foveation by means of visual prosthesis simulation

    NASA Astrophysics Data System (ADS)

    McIntosh, Benjamin Patrick

    Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.

  9. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.

  10. Soft X-ray streak camera for laser fusion applications

    NASA Astrophysics Data System (ADS)

    Stradling, G. L.

    1981-04-01

    The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development is reviewed as well as laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  11. Heart Imaging System

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Johnson Space Center's device to test astronauts' heart function in microgravity has led to the MultiWire Gamma Camera, which images heart conditions six times faster than conventional devices. Dr. Jeffrey Lacy, who developed the technology as a NASA researcher, later formed Proportional Technologies, Inc. to develop a commercially viable process that would enable use of Tantalum-178 (Ta-178), a radiopharmaceutical. His company supplies the generator for the radioactive Ta-178 to Xenos Medical Systems, which markets the camera. Ta-178 can only be optimally imaged with the camera. Because the body is subjected to it for only nine minutes, the radiation dose is significantly reduced and the technique can be used more frequently. Ta-178 also enables the camera to be used on pediatric patients, who are rarely studied with conventional isotopes because of the high radiation dosage.

  12. MS Lonchakov and MS Phillips work with an IMAX film magazine bag in Zarya

    NASA Image and Video Library

    2001-04-23

    S100-E-5345 (23 April 2001) --- Cosmonaut Yuri V. Lonchakov, STS-100 mission specialist representing Rosaviakosmos, changes out a film magazine on an IMAX camera in the Functional Cargo Block (FGB) or Zarya aboard the International Space Station (ISS). Astronaut John L. Phillips, mission specialist, is in the background. The scene was recorded with a digital still camera.

  13. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  14. Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets

    NASA Astrophysics Data System (ADS)

    Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter

    2017-06-01

    The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is automatic target detection on the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.

  15. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  16. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  17. Bayesian inference in camera trapping studies for a class of spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba

    2009-01-01

    We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
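
    A concrete instance of the distance-based capture model described above is the half-normal encounter function, under which capture probability decays with the squared distance between a home-range center and a trap. The functional form and all parameter values below are common illustrative choices, not the specific model fitted to the Nagarahole data.

        import numpy as np

        def capture_prob(center, trap, lam0=0.5, sigma=1.5):
            """Half-normal encounter model: per-occasion probability that an
            individual centered at `center` is photographed at `trap`.
            lam0 (baseline encounter rate) and sigma (spatial scale, km)
            are illustrative values."""
            d2 = np.sum((np.asarray(center) - np.asarray(trap)) ** 2)
            return 1.0 - np.exp(-lam0 * np.exp(-d2 / (2.0 * sigma**2)))

        rng = np.random.default_rng(1)
        centers = rng.uniform(0, 10, size=(20, 2))   # 20 home-range centers
        traps = rng.uniform(0, 10, size=(30, 2))     # 30 camera locations
        caught = [[rng.random() < capture_prob(c, t) for t in traps]
                  for c in centers]                  # one sampling occasion
        print(sum(map(sum, caught)), "simulated detections")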

  18. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have been recently developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution. PMID:25965264
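
    A minimal version of spectral reconstruction from camera responses is a linear estimator fitted by least squares on training pairs of spectra and sensor responses; the metamerism discussed above appears as irreducible residual error in this underdetermined inverse. All matrices below are random stand-ins, and the noise-free linear camera model is an assumption made for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        B = 31                                    # bands, e.g. 400-700 nm
        S = np.abs(rng.normal(size=(3, B)))       # stand-in RGB sensitivities
        train_spec = np.abs(rng.normal(size=(200, B)))  # stand-in spectra
        train_rgb = train_spec @ S.T              # linear, noise-free camera

        # Least-squares reconstruction matrix: spectrum_est = rgb @ X.
        X, *_ = np.linalg.lstsq(train_rgb, train_spec, rcond=None)
        est = train_rgb @ X
        print("mean reconstruction error:", np.abs(est - train_spec).mean())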

  19. Design and evaluation of a filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Jobson, D. J.; Rowland, C. W.

    1974-01-01

    The facsimile camera is an optical-mechanical scanning device which was selected as the imaging system for the Viking '75 lander missions to Mars. A concept which uses an interference filter-photosensor array to integrate a spectrometric capability with the basic imagery function of this camera was proposed for possible application to future missions. This paper is concerned with the design and evaluation of critical electronic circuits and components that are required to implement this concept. The feasibility of obtaining spectroradiometric data is demonstrated, and the performance of a laboratory model is described in terms of spectral range, angular and spectral resolution, and noise-equivalent radiance.

  1. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
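
    A host-side sketch of the byte-coded command scheme might look as follows, assuming the pyserial package; the command byte values, port name, and baud rate are hypothetical, since the controller's actual ROM code table is not given in the record.

        import serial  # pyserial

        # Hypothetical one-byte codes mirroring the controller's ROM table.
        COMMANDS = {"pan_left": 0x10, "pan_right": 0x11,
                    "tilt_up": 0x20, "tilt_down": 0x21,
                    "zoom_in": 0x30, "focus_near": 0x40}

        def send_command(port, name):
            """Write one byte-coded command; the controller compares it with
            its prestored ROM codes and opens/closes the matching switch."""
            port.write(bytes([COMMANDS[name]]))

        with serial.Serial("/dev/ttyS0", 9600, timeout=1) as port:
            send_command(port, "pan_left")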

  2. Using oblique digital photography for alluvial sandbar monitoring and low-cost change detection

    USGS Publications Warehouse

    Tusso, Robert B.; Buscombe, Daniel D.; Grams, Paul E.

    2015-01-01

    The maintenance of alluvial sandbars is a longstanding management interest along the Colorado River in Grand Canyon. Resource managers are interested in both the long-term trend in sandbar condition and the short-term response to management actions, such as intentional controlled floods released from Glen Canyon Dam. Long-term monitoring is accomplished at a range of scales, by a combination of annual topographic survey at selected sites, daily collection of images from those sites using novel, autonomously operating, digital camera systems (hereafter referred to as 'remote cameras'), and quadrennial remote sensing of sandbars canyonwide. In this paper, we present results from the remote camera images for daily changes in sandbar topography.

  3. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
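
    The spatial/angular factorization described above translates directly into array indexing: the lenslet position gives the spatial coordinates (s, t) and the pixel position under that lenslet gives the angular coordinates (u, v). The sketch below assumes ideal square lenslets exactly n pixels wide and aligned to the sensor grid, which real systems only approximate.

        import numpy as np

        def to_light_field(raw, n):
            """Rearrange a plenoptic sensor image into L[s, t, u, v].

            raw: 2-D sensor image whose dimensions are multiples of n.
            n:   pixels per lenslet (assumes aligned, square lenslets).
            (s, t) index the lenslet = spatial sample; (u, v) index the
            pixel under that lenslet = angular sample (exit-pupil position)."""
            S, T = raw.shape[0] // n, raw.shape[1] // n
            return raw.reshape(S, n, T, n).transpose(0, 2, 1, 3)

        raw = np.zeros((480, 640))       # stand-in sensor image
        L = to_light_field(raw, 16)      # -> shape (30, 40, 16, 16)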

  4. View From Camera Not Used During Curiosity's First Six Months on Mars

    NASA Image and Video Library

    2017-12-08

    This view of Curiosity's left-front and left-center wheels and of marks made by wheels on the ground in the "Yellowknife Bay" area comes from one of six cameras used on Mars for the first time more than six months after the rover landed. The left Navigation Camera (Navcam) linked to Curiosity's B-side computer took this image during the 223rd Martian day, or sol, of Curiosity's work on Mars (March 22, 2013). The wheels are 20 inches (50 centimeters) in diameter. Curiosity carries a pair of main computers, redundant to each other, in order to have a backup available if one fails. Each of the computers, A-side and B-side, also has other redundant subsystems linked to just that computer. Curiosity operated on its A-side from before the August 2012 landing until Feb. 28, when engineers commanded a switch to the B-side in response to a memory glitch on the A-side. One set of activities after switching to the B-side computer has been to check the six engineering cameras that are hard-linked to that computer. The rover's science instruments, including five science cameras, can each be operated by either the A-side or B-side computer, whichever is active. However, each of Curiosity's 12 engineering cameras is linked to just one of the computers. The engineering cameras are the Navigation Camera (Navcam), the Front Hazard-Avoidance Camera (Front Hazcam) and Rear Hazard-Avoidance Camera (Rear Hazcam). Each of those three named cameras has four cameras as part of it: two stereo pairs of cameras, with one pair linked to each computer. Only the pairs linked to the active computer can be used, and the A-side computer was active from before landing, in August, until Feb. 28. All six of the B-side engineering cameras have been used during March 2013 and checked out OK. Image Credit: NASA/JPL-Caltech

  5. Development of a portable multispectral thermal infrared camera

    NASA Technical Reports Server (NTRS)

    Osterwisch, Frederick G.

    1991-01-01

    The purpose of this research and development effort was to design and build a prototype instrument designated the 'Thermal Infrared Multispectral Camera' (TIRC). The Phase 2 effort was a continuation of the Phase 1 feasibility study and preliminary design for such an instrument. The completed instrument, designated AA465, has application in the field of geologic remote sensing and exploration. The AA465 Thermal Infrared Camera (TIRC) System is a field-portable multispectral thermal infrared camera operating over the 8.0 - 13.0 micron wavelength range. Its primary function is to acquire two-dimensional thermal infrared images of user-selected scenes. Thermal infrared energy emitted by the scene is collected, dispersed into ten 0.5 micron wide channels, and then measured and recorded by the AA465 System. This multispectral information is presented in real time on a color display and used by the operator to identify spectral and spatial variations in the scene's emissivity and/or irradiance. This fundamental instrument capability has a wide variety of commercial and research applications. While ideally suited for two-man operation in the field, the AA465 System can be transported and operated effectively by a single user. Functionally, the instrument operates as if it were a single-exposure camera. System measurement sensitivity requirements dictate relatively long (several minutes) instrument exposure times. As such, the instrument is not suited for recording time-variant information. The AA465 was fabricated, assembled, tested, and documented during this Phase 2 work period. The detailed design and fabrication of the instrument were performed during the period of June 1989 to July 1990. The software development effort and instrument integration/test extended from July 1990 to February 1991. Software development included an operator interface/menu structure, instrument internal control functions, DSP image processing code, and a display algorithm coding program. The instrument was delivered to NASA in March 1991. Potential commercial and research uses for this instrument center on its primary application as a field geologist's exploration tool. Other applications have been suggested but not investigated in depth. These are measurements of process control in commercial materials processing and quality control functions which require information on surface heterogeneity.

  6. Modular telerobot control system for accident response

    NASA Astrophysics Data System (ADS)

    Anderson, Richard J. M.; Shirey, David L.

    1999-08-01

    The Accident Response Mobile Manipulator System (ARMMS) is a teleoperated emergency response vehicle that deploys two hydraulic manipulators, five cameras, and an array of sensors to the scene of an incident. It is operated from a remote base station that can be situated up to four kilometers away from the site. Recently, a modular telerobot control architecture called SMART was applied to ARMMS to improve the precision, safety, and operability of the manipulators on board. Using SMART, a prototype manipulator control system was developed in a couple of days, and an integrated working system was demonstrated within a couple of months. New capabilities such as camera-frame teleoperation, autonomous tool changeout and dual manipulator control have been incorporated. The final system incorporates twenty-two separate modules and implements seven different behavior modes. This paper describes the integration of SMART into the ARMMS system.

  7. Potential and Limitations of Low-Cost Unmanned Aerial Systems for Monitoring Altitudinal Vegetation Phenology in the Tropics

    NASA Astrophysics Data System (ADS)

    Silva, T. S. F.; Torres, R. S.; Morellato, P.

    2017-12-01

    Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and highly susceptible to climatic change. Phenological knowledge in the tropics is limited by lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but also offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capability for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10 am and 4 pm, at 120-150 m a.g.l., yielding 5-10 cm spatial resolution. To compensate for illumination changes caused by time of day, season and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects. We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.

  8. Multi-ion detection by one-shot optical sensors using a colour digital photographic camera.

    PubMed

    Lapresta-Fernández, Alejandro; Capitán-Vallvey, Luis Fermín

    2011-10-07

    The feasibility and performance of a procedure to evaluate previously developed one-shot optical sensors as single and selective analyte sensors for potassium, magnesium and hardness are presented. The procedure uses a conventional colour digital photographic camera as the detection system for simultaneous multianalyte detection. A 6.0 megapixel camera was used, and the procedure describes how it is possible to quantify potassium, magnesium and hardness simultaneously from the images captured, using multianalyte one-shot sensors based on ionophore-chromoionophore chemistry and employing the colour information computed from a defined region of interest on the sensing membrane. One of the colour channels in the red, green, blue (RGB) colour space is used to build the analytical parameter, the effective degree of protonation (1 - α_eff), in good agreement with the theoretical model. Linearization of the sigmoidal response function improves the limit of detection (LOD) and the analytical range in all cases studied: the LOD improved from 5.4 × 10⁻⁶ to 2.7 × 10⁻⁷ M for potassium, from 1.4 × 10⁻⁴ to 2.0 × 10⁻⁶ M for magnesium and from 1.7 to 2.0 × 10⁻² mg L⁻¹ of CaCO₃ for hardness. The method's precision, in terms of relative standard deviation (RSD%), was from 2.4 to 7.6 for potassium, from 6.8 to 7.8 for magnesium and from 4.3 to 7.8 for hardness. The procedure was applied to the simultaneous determination of potassium, magnesium and hardness using multianalyte one-shot sensors in different types of waters and beverages in order to cover the entire application range, statistically validating the results against atomic absorption spectrometry as the reference procedure. Accordingly, this paper is an attempt to demonstrate the possibility of using a conventional digital camera as an analytical device to measure this type of one-shot sensor based on ionophore-chromoionophore chemistry instead of using conventional lab instrumentation.
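
    As a rough sketch of how a colour channel can be turned into the analytical parameter, one can normalize the mean channel value of the sensing-membrane region between fully protonated and fully deprotonated reference values. This linear normalization and all names below are illustrative assumptions, not the authors' exact expression.

        import numpy as np

        def effective_protonation(roi, v_prot, v_deprot, channel=0):
            """Estimate 1 - alpha_eff from a sensing-membrane ROI.

            roi: HxWx3 RGB image region of the membrane.
            v_prot / v_deprot: mean channel values of fully protonated /
            deprotonated reference membranes (hypothetical calibration).
            Returns a value in [0, 1] via simple linear normalization."""
            v = roi[..., channel].mean()
            return np.clip((v - v_deprot) / (v_prot - v_deprot), 0.0, 1.0)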

  9. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and designing reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
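
    The rotation invariance that the method exploits is easy to verify numerically: applying the same rotation matrix to two projection rays leaves the angle between them unchanged. A quick check with randomly drawn rays and rotation:

        import numpy as np
        from scipy.spatial.transform import Rotation

        def visual_angle(r1, r2):
            """Angle between two projection rays."""
            r1 = r1 / np.linalg.norm(r1)
            r2 = r2 / np.linalg.norm(r2)
            return np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0))

        rng = np.random.default_rng(0)
        a, b = rng.normal(size=3), rng.normal(size=3)
        R = Rotation.random().as_matrix()   # arbitrary camera rotation
        # Rotating the camera rotates both rays; the visual angle is unchanged.
        assert np.isclose(visual_angle(a, b), visual_angle(R @ a, R @ b))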

  10. Single crystal diamond detector measurements of deuterium-deuterium and deuterium-tritium neutrons in Joint European Torus fusion plasmas.

    PubMed

    Cazzaniga, C; Sundén, E Andersson; Binda, F; Croci, G; Ericsson, G; Giacomelli, L; Gorini, G; Griesmayer, E; Grosso, G; Kaveney, G; Nocente, M; Perelli Cippo, E; Rebai, M; Syme, B; Tardocchi, M

    2014-04-01

    First simultaneous measurements of deuterium-deuterium (DD) and deuterium-tritium neutrons from deuterium plasmas using a Single crystal Diamond Detector are presented in this paper. The measurements were performed at JET with a dedicated electronic chain that combined high count rate capabilities and high energy resolution. The deposited energy spectrum from DD neutrons was successfully reproduced by means of Monte Carlo calculations of the detector response function and simulations of neutron emission from the plasma, including background contributions. The reported results are of relevance for the development of compact neutron detectors with spectroscopy capabilities for installation in camera systems of present and future high power fusion experiments.

  11. Measuring the spatial resolution of an optical system in an undergraduate optics laboratory

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Donnelly, T. D.

    2017-06-01

    Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
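
    The Fourier route to the MTF can be sketched in a few lines: differentiate a measured edge-spread function to obtain the line-spread function, take the modulus of its Fourier transform, and normalize to unity at zero frequency. The synthetic tanh edge and the 5 µm pixel pitch below are stand-ins for real slanted-edge data.

        import numpy as np

        def mtf_from_edge(edge_profile, dx):
            """MTF from a measured edge-spread function (ESF).

            edge_profile: 1-D intensity profile across a sharp edge.
            dx: pixel pitch in mm. Returns (frequency in cycles/mm, MTF)."""
            lsf = np.gradient(edge_profile)      # ESF -> line-spread function
            lsf /= lsf.sum()                     # normalize area to 1
            mtf = np.abs(np.fft.rfft(lsf))       # modulus of the transform
            freqs = np.fft.rfftfreq(lsf.size, d=dx)
            return freqs, mtf / mtf[0]

        x = np.linspace(-1, 1, 256)
        esf = 0.5 * (1 + np.tanh(x / 0.05))      # synthetic blurred edge
        f, m = mtf_from_edge(esf, dx=0.005)      # 5 micron pixels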

  12. Astronaut Susan J. Helms Mounts a Video Camera in Zarya

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Russian Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). Launched by a Russian Proton rocket from the Baikonur Cosmodrome on November 20, 1998, the United States-funded and Russian-built Zarya was the first element of the ISS, followed by the U.S. Unity Node.

  13. Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications

    NASA Astrophysics Data System (ADS)

    Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.

    2013-05-01

    The Infrared Images and Other Data Acquisition Station enables a user located inside a laboratory to acquire visible and infrared images and distance measurements in an outdoor environment over an Internet connection. The station acquires data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.

  14. Community structure and diversity of tropical forest mammals: data from a global camera trap network.

    PubMed

    Ahumada, Jorge A; Silva, Carlos E F; Gajapersad, Krisna; Hallam, Chris; Hurtado, Johanna; Martin, Emanuel; McWilliam, Alex; Mugerwa, Badru; O'Brien, Tim; Rovero, Francesco; Sheil, Douglas; Spironello, Wilson R; Winarni, Nurul; Andelman, Sandy J

    2011-09-27

    Terrestrial mammals are a key component of tropical forest communities as indicators of ecosystem health and providers of important ecosystem services. However, there is little quantitative information about how they change with local, regional and global threats. In this paper, the first standardized pantropical forest terrestrial mammal community study, we examine several aspects of terrestrial mammal species and community diversity (species richness, species diversity, evenness, dominance, functional diversity and community structure) at seven sites around the globe using a single standardized camera trapping methodology. The sites, located in Uganda, Tanzania, Indonesia, Lao PDR, Suriname, Brazil and Costa Rica, are surrounded by different landscape configurations, from continuous forests to highly fragmented forests. We obtained more than 51,000 images and detected 105 species of mammals with a total sampling effort of 12,687 camera trap days. We find that mammal communities from highly fragmented sites have lower species richness, species diversity, functional diversity and higher dominance when compared with sites in partially fragmented and continuous forest. We emphasize the importance of standardized camera trapping approaches for obtaining baselines for monitoring forest mammal communities so as to adequately understand the effect of global, regional and local threats and appropriately inform conservation actions.
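
    The community metrics named above (species richness, Shannon diversity, evenness, and dominance) all derive from per-species detection counts; a compact illustration follows, with invented counts. Berger-Parker dominance and Pielou evenness are used here as common choices of index, which may differ from the paper's exact definitions.

        import numpy as np

        def community_metrics(counts):
            """Richness, Shannon diversity H', Pielou evenness J, and
            Berger-Parker dominance from a vector of per-species counts."""
            counts = np.asarray([c for c in counts if c > 0], dtype=float)
            p = counts / counts.sum()
            richness = len(counts)
            shannon = -np.sum(p * np.log(p))
            evenness = shannon / np.log(richness) if richness > 1 else 0.0
            dominance = p.max()
            return richness, shannon, evenness, dominance

        print(community_metrics([120, 40, 13, 5, 5, 1]))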

  15. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
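
    The first consistency check described, relating camera detection to nearby telemetry relocations, can be emulated with a simple Bernoulli GLM fitted by maximum likelihood. The simulation below is illustrative only; the study used generalized linear mixed-effects models with additional covariates such as sex, season, and home-range size.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n_reloc = rng.poisson(3, size=300)            # relocations near camera
        true_p = 1 / (1 + np.exp(-(-2.0 + 0.6 * n_reloc)))
        detected = rng.random(300) < true_p           # simulated detections

        def nll(beta):
            """Bernoulli negative log-likelihood with a logit link."""
            eta = np.clip(beta[0] + beta[1] * n_reloc, -30, 30)
            p = 1 / (1 + np.exp(-eta))
            return -np.sum(np.where(detected, np.log(p), np.log(1 - p)))

        fit = minimize(nll, x0=[0.0, 0.0])
        print(fit.x)    # recovers roughly (-2.0, 0.6)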

  16. Dark Energy Camera for Blanco

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  17. Lens and Camera Arrays for Sky Surveys and Space Surveillance

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Cox, D.; McGraw, J.; Zimmer, P.

    2016-09-01

    In recent years, a number of sky survey projects have chosen to use arrays of commercial cameras coupled with commercial photographic lenses to enable low-cost, wide-area observation. Projects such as SuperWASP, FAVOR, RAPTOR, Lotis, PANOPTES, and DragonFly rely on multiple cameras with commercial lenses to image wide areas of the sky each night. The sensors are usually commercial astronomical charge-coupled devices (CCDs) or digital single-lens reflex (DSLR) cameras, while the lenses are large-aperture, high-end consumer items intended for general photography. While much of this equipment is very capable and relatively inexpensive, this approach comes with a number of significant limitations that reduce sensitivity and the overall utility of the image data. The most frequently encountered limitations include lens vignetting, narrow spectral bandpass, and a relatively large point spread function. Understanding these limits helps to assess the utility of the data, and identify areas where advanced optical designs could significantly improve survey performance.
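
    Of the limitations listed, lens vignetting is the easiest to quantify to first order: even an ideal thin lens dims off-axis as cos⁴ of the field angle, and fast photographic lenses typically lose more to mechanical vignetting. A minimal sketch:

        import numpy as np

        def relative_illumination(field_deg):
            """Ideal cos^4 falloff; real photographic lenses vignette more,
            especially wide open (mechanical vignetting not modeled here)."""
            return np.cos(np.radians(field_deg)) ** 4

        for angle in (0, 10, 20, 30):
            print(f"{angle:2d} deg off-axis: {relative_illumination(angle):.2f}")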

  18. Registration of an on-axis see-through head-mounted display and camera system

    NASA Astrophysics Data System (ADS)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

    An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has a potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depth. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of prismatic effect of the display lens on registration is also discussed.
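
    The reported behaviour of registration error with viewing distance follows from simple geometry: a residual lateral offset x between camera and eye produces an angular misregistration of roughly arctan(x/D) at scene distance D, which falls off as the scene recedes. A small illustration, with a hypothetical 5 mm offset:

        import numpy as np

        def misregistration_deg(offset_mm, distance_m):
            """Angular registration error from a lateral camera-eye offset."""
            return np.degrees(np.arctan2(offset_mm / 1000.0, distance_m))

        for d in (0.5, 1, 2, 5):
            print(f"D = {d} m: {misregistration_deg(5.0, d):.2f} deg")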

  19. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and results from implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  20. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    NASA Astrophysics Data System (ADS)

    Ogawa, Haruo

    "OLYMPUS PEN E-P1", which is new generation system camera, is the first product of Olympus which is new standard "Micro Four-thirds System" for high-resolution mirror-less cameras. It continues good sales by the concept of "small and stylish design, easy operation and SLR image quality" since release on July 3, 2009. On the other hand, the half-size film camera "OLYMPUS PEN" was popular by the concept "small and stylish design and original mechanism" since the first product in 1959 and recorded sale number more than 17 million with 17 models. By the 50th anniversary topic and emotional value of the Olympus pen, Olympus pen E-P1 became big sales. I would like to explain the way of thinking of the product plan that included not only the simple functional value but also emotional value on planning the first product of "Micro Four-thirds System".

  1. Mars Exploration Rover Navigation Camera in-flight calibration

    USGS Publications Warehouse

    Soderblom, J.M.; Bell, J.F.; Johnson, J. R.; Joseph, J.; Wolff, M.J.

    2008-01-01

    The Navigation Camera (Navcam) instruments on the Mars Exploration Rover (MER) spacecraft provide support for both tactical operations and scientific observations where color information is not necessary: large-scale morphology, atmospheric monitoring including cloud observations and dust devil movies, and context imaging for both the thermal emission spectrometer and the in situ instruments on the Instrument Deployment Device. The Navcams are a panchromatic stereoscopic imaging system built using identical charge-coupled device (CCD) detectors and nearly identical electronics boards as the other cameras on the MER spacecraft. Previous calibration efforts were primarily focused on providing a detailed geometric calibration in line with the principal function of the Navcams, to provide data for the MER navigation team. This paper provides a detailed description of a new Navcam calibration pipeline developed to provide an absolute radiometric calibration that we estimate to have an absolute accuracy of 10% and a relative precision of 2.5%. Our calibration pipeline includes steps to model and remove the bias offset, the dark current charge that accumulates in both the active and readout regions of the CCD, and the shutter smear. It also corrects pixel-to-pixel responsivity variations using flat-field images, and converts from raw instrument-corrected digital number values per second to units of radiance (W m⁻² nm⁻¹ sr⁻¹), or to radiance factor (I/F). We also describe here the initial results of two applications where radiance-calibrated Navcam data provide unique information for surface photometric and atmospheric aerosol studies. Copyright 2008 by the American Geophysical Union.
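
    The described pipeline reduces, conceptually, to a standard CCD radiometric chain; the sketch below strings the named steps together (bias, dark current, flat field, responsivity, and the I/F conversion), with shutter-smear removal omitted and all coefficient values hypothetical rather than actual Navcam calibration constants.

        import numpy as np

        def calibrate(raw_dn, t_exp, bias, dark_rate, flat, resp):
            """Toy version of the described pipeline: DN -> radiance.

            raw_dn:    raw frame (DN); bias: bias-offset frame (DN)
            dark_rate: dark-current frame (DN/s); flat: normalized flat field
            resp:      responsivity (DN s^-1 per W m^-2 nm^-1 sr^-1)
            Shutter-smear removal is omitted for brevity."""
            corrected = (raw_dn - bias - dark_rate * t_exp) / flat
            return corrected / (t_exp * resp)    # W m^-2 nm^-1 sr^-1

        def radiance_factor(radiance, solar_spectral_irradiance, d_au):
            """I/F: observed radiance relative to a normally illuminated
            perfect Lambertian reflector at heliocentric distance d_au."""
            return np.pi * radiance * d_au**2 / solar_spectral_irradiance

        frame = np.full((1024, 1024), 800.0)     # synthetic raw frame (DN)
        L = calibrate(frame, t_exp=0.25, bias=120.0, dark_rate=2.0,
                      flat=1.0, resp=4.0e3)      # hypothetical coefficients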

  2. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  3. Visual Enhancement of Laparoscopic Partial Nephrectomy With 3-Charge Coupled Device Camera: Assessing Intraoperative Tissue Perfusion and Vascular Anatomy by Visible Hemoglobin Spectral Response

    DTIC Science & Technology

    2010-10-01

    …noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. Materials and Methods: We analyzed select…

  4. Social Media and the Arab Spring: How Facebook, Twitter, and Camera Phones Changed the Egyptian Army’s Response to Revolution

    DTIC Science & Technology

    2012-06-08

    Definitions. Importantly, as an operational definition of 'social media,' I include Facebook, Twitter, YouTube, and social networking sites not specifically… the aforementioned social networking sites. As an operational definition of 'security operations' for the purposes of this paper, I use the… the existence of camera phones, Facebook, Twitter, and other social networking sites; individuals' behavior changed with the advent of the Internet…

  5. Instrumental Response Model and Detrending for the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...

    2017-09-14

    We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈ 2 mmag and ≈ 3 mas, respectively, of fundamental atmospheric and statistical limits. The DES techniques should be broadly applicable to wide-field imagers.

  6. Design of a Wireless Sensor Network Platform for Tele-Homecare

    PubMed Central

    Chung, Yu-Fang; Liu, Chia-Hui

    2013-01-01

    The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly are not willing to leave home for healthcare centers, but caring for patients at home eats up caregiver resources and can overwhelm patients' families. Besides, many chronic disease symptoms cause the elderly to visit hospitals frequently. Repeated examinations not only exhaust medical resources, but also waste patients' time and effort. To make matters worse, this healthcare model does not appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, where ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Transferred through the ZigBee network, vital signs are analyzed by computers, which deliver distinct alerts to remind the user and the family of possible emergencies. The system could be further combined with electric appliances to remotely control the user's environmental conditions. When emergencies occur, the environmental monitoring function can be activated to transmit real-time dynamic images of the person being cared for to medical personnel through the video function. Meanwhile, in consideration of privacy, the video camera is turned on only when necessary. The caregiver can adjust the camera angle to a proper position and observe the current situation of the person being cared for when a sensor worn by that person or the environmental monitoring system detects exceptions. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel. PMID:24351630

  8. Design of a wireless sensor network platform for tele-homecare.

    PubMed

    Chung, Yu-Fang; Liu, Chia-Hui

    2013-12-12

    The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly people are unwilling to leave home for healthcare centers, yet caring for patients at home consumes caregiver resources and can overwhelm patients' families. In addition, many chronic-disease symptoms cause the elderly to visit hospitals frequently, and the repeated examinations not only exhaust medical resources but also waste patients' time and effort. Worse, this model of care does not appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, in which ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. The vital signs are transferred through the ZigBee network and analyzed by computers, which deliver distinct alerts to remind the users and their families of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. When an emergency occurs, the environmental monitoring function can transmit dynamic images of the cared-for person to medical personnel in real time through the video function. In consideration of privacy, the video camera is turned on only when necessary: the caregiver can adjust the camera to a proper angle and observe the current situation of the cared-for person when a sensor on that person or the environmental monitoring system detects an exception. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel.

  9. ACS Data Handbook v.6.0

    NASA Astrophysics Data System (ADS)

    Gonzaga, S.; et al.

    2011-03-01

    ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.

  10. Measuring Beam Sizes and Ultra-Small Electron Emittances Using an X-ray Pinhole Camera.

    PubMed

    Elleaume, P; Fortgang, C; Penel, C; Tarazona, E

    1995-09-01

    A very simple pinhole camera set-up has been built to diagnose the electron beam emittance of the ESRF. The pinhole is placed in the air next to an Al window. An image is obtained with a CCD camera imaging a fluorescent screen. The emittance is deduced from the size of the image. The relationship between the measured beam size and the electron beam emittance depends upon the lattice functions α, β and η, the screen resolution, the pinhole size and the photon beam divergence. The set-up is capable of measuring emittances as low as 5 pm rad and is presently routinely used as both an electron beam imaging device and an emittance diagnostic.

  11. Uncooled radiometric camera performance

    NASA Astrophysics Data System (ADS)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 °C or 4% of the measured value (whichever is greater) over scene temperature ranges of −20 °C to 300 °C (−20 °C to 900 °C for extended-range models) and camera environmental temperatures of −10 °C to 40 °C. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  12. The role of an open-space CCTV system in limiting alcohol-related assault injuries in a late-night entertainment precinct in a tropical Queensland city, Australia.

    PubMed

    Pointing, Shane; Hayes-Jonkers, Charmaine; Bohanna, India; Clough, Alan

    2012-02-01

    Closed circuit television (CCTV) systems which incorporate real-time communication links between camera room operators and on-the-ground security may limit injuries resulting from alcohol-related assault. This pilot study examined CCTV footage and operator records of security responses for two periods totalling 22 days in 2010-2011 when 30 alcohol-related assaults were recorded. Semistructured discussions were conducted with camera room operators during 18 h of observation. Camera operators were proactive, efficiently directing street security to assault incidents. The system intervened in 40% (n=12) of alcohol-related assaults, limiting possible injury. This included three incidents judged as potentially preventable. A further five (17%) assault incidents were also judged as potentially preventable, while 43% (n=13) happened too quickly for intervention. Case studies describe security intervention in each category. Further research is recommended, particularly to evaluate the effects on preventing injuries through targeted awareness training to improve responsiveness and enhance the preventative capacity of similar CCTV systems.

  13. Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network

    USGS Publications Warehouse

    Villarreal, Miguel L.; Gass, Leila; Norman, Laura; Sankey, Joel B.; Wallace, Cynthia S.A.; McMacken, Dennis; Childs, Jack L.; Petrakis, Roy E.

    2012-01-01

    Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus virginianus) and javelina (Pecari tajacu) to landscape phenology (as measured by monthly Normalized Difference Vegetation Index data) and the timing of wildfire (Alambre Fire of 2007). Mixed model analyses suggest that temporal dynamics of these two species were related to vegetation phenology and natural disturbance in the Sky Island region, information important for wildlife managers faced with uncertainty regarding changing climate and disturbance regimes.

  14. Experimental investigations of the time and flow-direction responses of shear-stress-sensitive liquid crystal coatings

    NASA Technical Reports Server (NTRS)

    Reda, Daniel C.; Muratore, Joseph J., Jr.; Heineck, James T.

    1993-01-01

    Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing Schlieren system and recorded with a 1000 frame/sec color video camera. Liquid crystal responses to these changing-shear environments were then recorded with the same video system, documenting color-play response times equal to, or faster than, the time interval between sequential frames (i.e., 1 millisecond). For the flow-direction experiments, a planar test surface was exposed to equal-magnitude and known-direction surface shear stresses generated by both normal and tangential subsonic jet-impingement flows. Under shear, the sense of the angular displacement of the liquid crystal dispersed (reflected) spectrum was found to be a function of the instantaneous direction of the applied shear. This technique thus renders dynamic flow reversals or flow divergences visible over entire test surfaces at image recording rates up to 1 kHz. Extensions of the technique to visualize relatively small changes in surface shear stress direction appear feasible.

  15. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
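
    For context, blind deconvolution of this kind is commonly posed as a regularized energy minimization over the latent image x and the blur kernel k (a generic form, not the authors' exact objective):

      \min_{x,\,k} \; \lVert y - k \ast x \rVert_2^2 + \lambda_x R_x(x) + \lambda_k R_k(k)

    where y is the blurry observation and R_x, R_k are regularizers on the sharp texture and the motion kernel.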

  16. Functional response of ungulate browsers in disturbed eastern hemlock forests

    USGS Publications Warehouse

    DeStefano, Stephen

    2015-01-01

    Ungulate browsing in predator depleted North American landscapes is believed to be causing widespread tree recruitment failures. However, canopy disturbances and variations in ungulate densities are sources of heterogeneity that can buffer ecosystems against herbivory. Relatively little is known about the functional response (the rate of consumption in relation to food availability) of ungulates in eastern temperate forests, and therefore how “top down” control of vegetation may vary with disturbance type, intensity, and timing. This knowledge gap is relevant in the Northeastern United States today with the recent arrival of hemlock woolly adelgid (HWA; Adelges tsugae) that is killing eastern hemlocks (Tsuga canadensis) and initiating salvage logging as a management response. We used an existing experiment in central New England begun in 2005, which simulated severe adelgid infestation and intensive logging of intact hemlock forest, to examine the functional response of combined moose (Alces americanus) and white-tailed deer (Odocoileus virginianus) foraging in two different time periods after disturbance (3 and 7 years). We predicted that browsing impacts would be linear or accelerating (Type I or Type III response) in year 3 when regenerating stem densities were relatively low and decelerating (Type II response) in year 7 when stem densities increased. We sampled and compared woody regeneration and browsing among logged and simulated insect attack treatments and two intact controls (hemlock and hardwood forest) in 2008 and again in 2012. We then used AIC model selection to compare the three major functional response models (Types I, II, and III) of ungulate browsing in relation to forage density. We also examined relative use of the different stand types by comparing pellet group density and remote camera images. In 2008, total and proportional browse consumption increased with stem density, and peaked in logged plots, revealing a Type I response. In 2012, stem densities were greatest in girdled plots, but proportional browse consumption was highest at intermediate stem densities in logged plots, exhibiting a Type III (rather than a Type II) functional response. Our results revealed shifting top–down control by herbivores at different stages of stand recovery after disturbance and in different understory conditions resulting from logging vs. simulated adelgid attack. If forest managers wish to promote tree regeneration in hemlock stands that is more resistant to ungulate browsers, leaving HWA-infested stands unmanaged may be a better option than preemptively logging them.
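
    For reference, the three Holling functional-response forms compared above can be written with attack rate a, handling time h and forage density R (standard textbook forms, not fitted expressions from this study):

      \text{Type I: } c(R) = aR, \qquad
      \text{Type II: } c(R) = \frac{aR}{1 + ahR}, \qquad
      \text{Type III: } c(R) = \frac{aR^2}{1 + ahR^2}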

  17. Indirectly Funded Research and Exploratory Development at the Applied Physics Laboratory, Fiscal Year 1978.

    DTIC Science & Technology

    1979-12-01

    used to reduce costs). The orbital data from the prototype ion composition telescope will not only be of great scientific interest, providing for... active device whose transfer function may be almost arbitrarily defined, and cost and production trends permit contemplation of networks containing... developing solid-state television camera systems based on CCD imagers. RCA hopes to produce a $500 color camera for consumer use. Fairchild and Texas

  18. Matrix Determination of Reflectance of Hidden Object via Indirect Photography

    DTIC Science & Technology

    2012-03-01

    the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and... Distribution Function (BRDF). Figure 1. Indirect photography with camera field of view dictated by point of illumination. 1.3 Research Focus In an... would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind

  19. A DirtI Application for LBT Commissioning Campaigns

    NASA Astrophysics Data System (ADS)

    Borelli, J. L.

    2009-09-01

    In order to characterize the Gregorian focal stations and test the performance achieved by the Large Binocular Telescope (LBT) adaptive optics system, two infrared test cameras were constructed within a joint project between INAF (Osservatorio Astronomico di Bologna, Italy) and the Max Planck Institute for Astronomy (Germany). We describe here the functionality of, and the successful results obtained with, the Daemon for the Infrared Test Camera Interface (DirtI) during commissioning campaigns.

  20. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m² instead of lux (lm/m²). Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.

  1. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources largely predetermines the skyglow features, and most mathematical models used to predict the night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two professional digital single-lens reflex cameras operating with different lenses that limit the system's field of view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by a single large ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.
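
    A minimal sketch of the ratio measurement described above, assuming a radiometrically calibrated all-sky radiance map L(θ, φ) sampled on a regular angular grid (names and grid are illustrative, not the authors' code):

      # Sketch: zenith radiance relative to horizontal irradiance.
      import numpy as np

      def zenith_to_horizontal_ratio(L, theta, phi):
          """L: radiance map [n_theta, n_phi]; theta, phi: 1-D angle grids (rad)."""
          w = np.cos(theta) * np.sin(theta)            # projected solid-angle weight
          dth, dph = theta[1] - theta[0], phi[1] - phi[0]
          E_h = (L * w[:, None]).sum() * dth * dph     # horizontal irradiance
          L_z = L[0].mean()                            # radiance nearest the zenith
          return L_z / E_h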

  2. Connecting Digital Repeat Photography to Ecosystem Fluxes in Inland Pacific Northwest, US Cropping Systems

    NASA Astrophysics Data System (ADS)

    Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.

    2017-12-01

    Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques (e.g. eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (one fallow, four cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.
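
    A minimal sketch of the image-deconstruction step, assuming the conventional phenocam green chromatic coordinate (GCC) as the greenness value; file names and region of interest are placeholders:

      # Sketch: GCC = G / (R + G + B) over a fixed region of interest.
      import numpy as np
      from imageio.v3 import imread  # assumes the imageio package

      def gcc(image, roi):
          r0, r1, c0, c1 = roi
          img = image[r0:r1, c0:c1].astype(float)
          r, g, b = (img[..., i].mean() for i in range(3))
          return g / (r + g + b)     # redness (RCC) is analogous with r on top

      series = [gcc(imread(f), roi=(100, 400, 200, 600))
                for f in ("day001.jpg", "day002.jpg")]  # placeholder files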

  3. Dissipation function and adaptive gradient reconstruction based smoke detection in video

    NASA Astrophysics Data System (ADS)

    Li, Bin; Zhang, Qiang; Shi, Chunlei

    2017-11-01

    A method for smoke detection in video is proposed. The camera monitoring the scene is assumed to be stationary. In the atmospheric scattering model, the dissipation function describes the transmissivity between the background objects in the scene and the camera. The dark channel prior and a fast bilateral filter are used for estimating the dissipation function, which is a function only of the depth of field. Based on the dissipation function, the visual background extractor (ViBe) can be used to detect smoke through its motion characteristics, as well as to detect other moving targets. Since smoke has semi-transparent parts, the background covered by these parts can be recovered adaptively by solving a Poisson equation. The similarity between the recovered parts and the original background parts at the same position is calculated by normalized cross-correlation (NCC), with the original background value selected from the frame nearest to the current frame. Parts with high similarity are considered smoke.
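
    A hedged sketch of the dark-channel-prior ingredient named above (patch size and the weight ω are conventional choices, not values reported in the paper):

      # Sketch: dark channel and the transmissivity estimate derived from it.
      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(img, patch=15):
          """img: float RGB array in [0, 1]; returns the per-pixel dark channel."""
          min_rgb = img.min(axis=2)                    # minimum over color channels
          return minimum_filter(min_rgb, size=patch)   # minimum over a local patch

      def transmissivity(img, atmosphere, patch=15, omega=0.95):
          # t(x) = 1 - omega * dark_channel(I / A), per the dark channel prior
          return 1.0 - omega * dark_channel(img / atmosphere, patch)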

  4. Walking a Fine Line

    NASA Technical Reports Server (NTRS)

    Bothwell, Mary

    2004-01-01

    My division was charged with building a suite of cameras for the Mars Exploration Rover (MER) project. We were building the science cameras on the mast assembly, the microscope camera, and the hazard and navigation cameras for the rovers. Not surprisingly, a lot of folks were paying attention to our work - because there's really no point in landing on Mars if you can't take pictures. In Spring 2002 things were not looking good. The electronics weren't coming in, and we had to go back to the vendors. The vendors would change the design, send the boards back, and they wouldn't work. On our side, we had an instrument manager in charge who I believe has the potential to become a great manager, but when things got behind schedule he didn't have the experience to know what was needed to catch up. As division manager, I was ultimately responsible for seeing that all my project and instrument managers delivered their work. I had to make the decision whether or not to replace him.

  5. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    NASA Astrophysics Data System (ADS)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
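
    A minimal sketch of the linearity assessment described above, with assumed exposure times and mean raw digital numbers (DN) standing in for measured data:

      # Sketch: fit mean raw DN against exposure time and report R^2.
      import numpy as np

      exposure_s = np.array([0.001, 0.002, 0.004, 0.008, 0.016])  # assumed
      mean_dn = np.array([12.1, 24.3, 48.2, 96.8, 193.5])         # assumed

      slope, offset = np.polyfit(exposure_s, mean_dn, 1)
      pred = slope * exposure_s + offset
      r2 = 1 - ((mean_dn - pred) ** 2).sum() / ((mean_dn - mean_dn.mean()) ** 2).sum()
      print(f"gain = {slope:.1f} DN/s, offset = {offset:.2f} DN, R^2 = {r2:.4f}")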

  6. Limited spatial response to direct predation risk by African herbivores following predator reintroduction.

    PubMed

    Davies, Andrew B; Tambling, Craig J; Kerley, Graham I H; Asner, Gregory P

    2016-08-01

    Predators affect ecosystems not only through direct mortality of prey, but also through risk effects on prey behavior, which can exert strong influences on ecosystem function and prey fitness. However, how functionally different prey species respond to predation risk and how prey strategies vary across ecosystems and in response to predator reintroduction are poorly understood. We investigated the spatial distributions of six African herbivores varying in foraging strategy and body size in response to environmental factors and direct predation risk by recently reintroduced lions in the thicket biome of the Addo Elephant National Park, South Africa, using camera trap surveys, GPS telemetry, kill site locations and Light Detection and Ranging. Spatial distributions of all species, apart from buffalo, were driven primarily by environmental factors, with limited responses to direct predation risk. Responses to predation risk were instead indirect, with species distributions driven by environmental factors, and diel patterns being particularly pronounced. Grazers were more responsive to the measured variables than browsers, with more observations in open areas. Terrain ruggedness was a stronger predictor of browser distributions than was vegetation density. Buffalo was the only species to respond to predator encounter risk, avoiding areas with higher lion utilization. Buffalo therefore behaved in similar ways to when lions were absent from the study area. Our results suggest that direct predation risk effects are relatively weak when predator densities are low and the time since reintroduction is short and emphasize the need for robust, long-term monitoring of predator reintroductions to place such events in the broader context of predation risk effects.

  7. Transient thermography testing of unpainted thermal barrier coating surfaces

    NASA Astrophysics Data System (ADS)

    Ptaszek, Grzegorz; Cawley, Peter; Almond, Darryl; Pickering, Simon

    2013-01-01

    This paper has investigated the effects of uneven surface discolouration of a thermal barrier coating (TBC) and of its IR translucency on the thermal responses observed by using mid and long wavelength IR cameras. It has been shown that unpainted blades can be tested satisfactorily by using a more powerful flash heating system and a long wavelength IR camera. The problem of uneven surface emissivity can be overcome by applying 2nd derivative processing of the log-log surface cooling curves.
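
    A hedged sketch of second-derivative processing of log-log cooling curves (the polynomial-smoothing route and its degree are assumptions, in the spirit of thermographic signal reconstruction):

      # Sketch: d^2 ln(T) / d(ln t)^2 from a polynomial fit in log-log space.
      import numpy as np

      def loglog_second_derivative(t, T, degree=7):
          """t: time samples (s); T: surface temperature rise (K)."""
          lt, lT = np.log(t), np.log(T)
          fit = np.poly1d(np.polyfit(lt, lT, degree))  # smooth log-log fit
          return np.polyder(fit, 2)(lt)                # second derivative vs ln t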

  8. The ideal subject distance for passport pictures.

    PubMed

    Verhoff, Marcel A; Witzel, Carsten; Kreutz, Kerstin; Ramsthaler, Frank

    2008-07-04

    In an age of global combat against terrorism, the recognition and identification of people on document images is of increasing significance. Experiments and calculations have shown that the camera-to-subject distance - not the focal length of the lens - can have a significant effect on facial proportions. Modern passport pictures should be able to function as reference images for automatic and manual picture comparisons, which requires a defined subject distance. It is completely unclear which subject distance is ideal for the recognition of the actual person when taking passport photographs. We show here that the camera-to-subject distance perceived as ideal depends on the face being photographed, even if the distance of 2 m was most frequently preferred. So far the problem of the ideal camera-to-subject distance for faces has only been approached through technical calculations. We have, for the first time, answered this question experimentally with a double-blind experiment. Even if there is apparently no ideal camera-to-subject distance valid for every face, 2 m can be proposed as ideal for the taking of passport pictures. The first step would be the adoption of a defined camera-to-subject distance for the taking of passport pictures within the standards. From an anthropological point of view it would be interesting to find out which facial features lead to the preference of a shorter camera-to-subject distance and which lead to the preference of a longer one.

  9. USE OF SEDIMENT PROFILE IMAGERY TO ESTIMATE NEAR-BOTTOM DISSOLVED OXYGEN REGIMES

    EPA Science Inventory

    The U.S. EPA, Atlantic Ecology Division is developing empirical stressor-response models for nitrogen pollution in partially enclosed coastal systems using dissolved oxygen (DO) as one of the system responses. We are testing a sediment profile image camera as a surrogate indicat...

  10. Material efficiency studies for a Compton camera designed to measure characteristic prompt gamma rays emitted during proton beam radiotherapy

    PubMed Central

    Robertson, Daniel; Polf, Jerimy C; Peterson, Steve W; Gillin, Michael T; Beddar, Sam

    2011-01-01

    Prompt gamma rays emitted from biological tissues during proton irradiation carry dosimetric and spectroscopic information that can assist with treatment verification and provide an indication of the biological response of the irradiated tissues. Compton cameras are capable of determining the origin and energy of gamma rays. However, prompt gamma monitoring during proton therapy requires new Compton camera designs that perform well at the high gamma energies produced when tissues are bombarded with therapeutic protons. In this study we optimize the materials and geometry of a three-stage Compton camera for prompt gamma detection and calculate the theoretical efficiency of such a detector. The materials evaluated in this study include germanium, bismuth germanate (BGO), NaI, xenon, silicon and lanthanum bromide (LaBr₃). For each material, the dimensions of each detector stage were optimized to produce the maximum number of relevant interactions. These results were used to predict the efficiency of various multi-material cameras. The theoretical detection efficiencies of the most promising multi-material cameras were then calculated for the photons emitted from a tissue-equivalent phantom irradiated by therapeutic proton beams ranging from 50 to 250 MeV. The optimized detector stages had a lateral extent of 10 × 10 cm², with the thickness of the initial two stages dependent on the detector material. The thickness of the third stage was fixed at 10 cm regardless of material. The most efficient single-material cameras were composed of germanium (3 cm) and BGO (2.5 cm). These cameras exhibited efficiencies of 1.15 × 10⁻⁴ and 9.58 × 10⁻⁵ per incident proton, respectively. The most efficient multi-material camera design consisted of two initial stages of germanium (3 cm) and a final stage of BGO, resulting in a theoretical efficiency of 1.26 × 10⁻⁴ per incident proton. PMID:21508442
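
    For context, standard Compton kinematics (not specific to this study) recover the scattering angle at each stage from the measured energy deposits:

      \cos\theta = 1 - m_e c^2 \left( \frac{1}{E'_\gamma} - \frac{1}{E_\gamma} \right)

    where E_γ and E'_γ are the photon energies before and after the scatter; intersecting the resulting cones yields the gamma-ray origin.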

  11. Detection of low-amplitude in vivo intrinsic signals from an optical imager of retinal function

    NASA Astrophysics Data System (ADS)

    Barriga, Eduardo S.; T'so, Dan; Pattichis, Marios; Kwon, Young; Kardon, Randy; Abramoff, Michael; Soliz, Peter

    2006-02-01

    In the early stages of some retinal diseases, such as glaucoma, loss of retinal activity may be difficult to detect with today's clinical instruments. Many of today's instruments focus on detecting changes in anatomical structures, such as the nerve fiber layer. Our device, which is based on a modified fundus camera, seeks to detect changes in optical signals that reflect functional changes in the retina. The functional imager uses a patterned stimulus at a wavelength of 535 nm. An intrinsic functional signal is collected at a near-infrared wavelength. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods because it is masked by other physiological signals and by imaging system noise. In this paper, we analyze the video sequences from a set of 60 experiments with different patterned stimuli from cats. Using a set of statistical techniques known as Independent Component Analysis (ICA), we estimate the signals present in the videos. Through controlled simulation experiments, we quantify the limits of signal strength required to detect the physiological signal of interest. The results of the analysis show that, in principle, signal levels of 0.1% (−30 dB) can be detected. The study found that in 86% of the animal experiments the effects of the patterned stimuli on the retina can be detected and extracted. The analysis of the different responses extracted from the videos can give insight into the functional processes present during stimulation of the retina.
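
    A minimal sketch of such an ICA decomposition, using scikit-learn's FastICA on placeholder data (frame count, image size and component number are assumptions):

      # Sketch: unmix a fundus-video sequence into independent spatial sources.
      import numpy as np
      from sklearn.decomposition import FastICA

      video = np.random.rand(200, 64, 64)        # placeholder: frames x H x W
      X = video.reshape(video.shape[0], -1)      # one row per frame

      ica = FastICA(n_components=8, random_state=0)
      time_courses = ica.fit_transform(X)        # temporal activations
      source_maps = ica.components_.reshape(8, 64, 64)  # spatial sources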

  12. Changes in left ventricular function as determined by the multi-wire gamma camera at near presyncopal levels of lower body negative pressure

    NASA Technical Reports Server (NTRS)

    Pintner, R.; Fortney, S.; Mulvagh, S.; Lacy, J.

    1992-01-01

    At presyncopal levels of lower body negative pressure (LBNP), we have frequently observed electrocardiographic responses that may be due to changes in cardiac position and/or shape, but could be indicative of altered myocardial function. To further investigate this, we evaluated cardiac function using a nuclear imaging technique in 21 healthy subjects (17 men and 4 women) after 30 minutes of supine rest and near the end of a presyncopal-limited LBNP exposure (LBNP averaged 65 ± 3 mmHg at injection). Cardiac first-pass images were obtained with a Multi-Wire Gamma Camera following an intravenous bolus injection of 30-50 millicuries of Tantalum-178. Manual blood pressures and electrocardiograms were obtained throughout the 3-minute graded LBNP protocol. Between rest and injection during LBNP, heart rate increased (P < 0.01) from 67 ± 3 beats per minute to 99 ± beats per minute, systolic blood pressure decreased (P < 0.01) from 110 ± 3 mmHg to 107 ± 3 mmHg, and left ventricular ejection fraction (EF) decreased (P < 0.01) from 0.57 ± 0.02 to 0.48 ± 0.02. During LBNP, ST-segment depression of at least 0.5 mm occurred in 7 subjects. Subjects with ST depression had greater reductions (P = 0.05) in EF than subjects without ST depression (0.15 ± 0.07 versus 0.005 ± 0.03), but also tolerated greater levels (P < 0.05) of negative pressure (88 ± mmHg versus 69 ± 5 mmHg). There was a significant relationship between presyncopal LBNP level and EF (R² = 0.50, P < 0.05). Our findings suggest there may be a decrease in systolic myocardial function at high levels of LBNP.

  13. Mechanically assisted liquid lens zoom system for mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with a small form factor are an integral part of today's mobile phones, which recently feature autofocus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides virtually no deterioration, easy control electronics and simple, and therefore cost-effective, fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design for a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses autofocus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the autofocus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens, with f/# 3.5, provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).

  14. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC in addition to two traditional functionalities of illumination and communication. To verify the proposed TNMD, the experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of centroid through the OCC link via a camera. Unlike conventional trained neurons approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and far less complex detection algorithm. The experiment results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performances at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC with the proposed TNMD combined can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.
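
    A hedged sketch of training on centroid trajectories rather than raw motion, in the spirit of the TNMD idea (feature layout, labels and network size are placeholders):

      # Sketch: classify finger motions from flattened centroid sequences.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      X_train = np.random.rand(200, 32)          # 16 (x, y) centroids per sample
      y_train = np.random.randint(0, 4, 200)     # placeholder motion labels

      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
      clf.fit(X_train, y_train)
      print(clf.predict(np.random.rand(1, 32)))  # predicted motion class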

  15. Cloud photogrammetry with dense stereo for fisheye cameras

    NASA Astrophysics Data System (ADS)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows recovery of a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image; however, the computation of dense 3-D information is more complicated, and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

  16. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
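
    A hedged toy sketch of dictionary-based recovery from a single coded measurement, using orthogonal matching pursuit (sizes, dictionary and code are placeholders, not the authors' learned dictionary or solver):

      # Sketch: recover a sparse patch code alpha from y = Phi D alpha.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n_vox, n_atoms, n_meas = 512, 2048, 64     # space-time voxels, atoms, pixels
      D = rng.standard_normal((n_vox, n_atoms))  # placeholder overcomplete dictionary
      Phi = rng.integers(0, 2, (n_meas, n_vox)).astype(float)  # exposure code

      alpha = np.zeros(n_atoms)
      alpha[rng.choice(n_atoms, 5, replace=False)] = 1.0
      y = Phi @ D @ alpha                        # single coded measurement

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(Phi @ D, y)
      patch = D @ omp.coef_                      # reconstructed space-time patch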

  17. Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per arc-sq. second extended source. A 23rd visual magnitude per arc-sq. second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours integration.
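
    A minimal sketch of the standard CCD point-source signal-to-noise calculation behind such estimates (all rates here are illustrative assumptions, not the study's values):

      # Sketch: SNR = S / sqrt(S + n_pix * (sky + dark + read_noise^2)).
      import numpy as np

      def ccd_snr(t, source_rate, sky_rate, dark_rate, read_noise, n_pix):
          """t in s; rates in e-/s (sky and dark are per pixel)."""
          S = source_rate * t
          var = S + n_pix * (sky_rate * t + dark_rate * t + read_noise ** 2)
          return S / np.sqrt(var)

      print(ccd_snr(t=4 * 3600, source_rate=0.01, sky_rate=0.005,
                    dark_rate=0.001, read_noise=13.0, n_pix=4))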

  18. Geometric database maintenance using CCTV cameras and overlay graphics

    NASA Astrophysics Data System (ADS)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
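
    A minimal sketch of the basic operation behind such overlays: projecting a wireframe vertex into the CCTV image with a pinhole model (the intrinsics K and pose R, t are assumed calibration values):

      # Sketch: world point -> camera frame -> pixel coordinates.
      import numpy as np

      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])              # assumed intrinsics
      R, t = np.eye(3), np.array([0.0, 0.0, 2.0])  # assumed camera pose

      def project(X):
          x = K @ (R @ X + t)                      # homogeneous pixel coordinates
          return x[:2] / x[2]

      print(project(np.array([0.1, -0.05, 0.5])))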

  19. Improving the off-axis spatial resolution and dynamic range of the NIF X-ray streak cameras (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.

    2016-11-15

    We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.

  20. Improving the off-axis spatial resolution and dynamic range of the NIF X-ray streak cameras (invited).

    PubMed

    MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D

    2016-11-01

    We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.

  1. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo-matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
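
    A hedged sketch of the stereo-matching-to-3-D step, using linear (DLT) triangulation from two calibrated cameras (the projection matrices are assumed inputs from the calibration described above):

      # Sketch: triangulate one particle from matched pixels in two views.
      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """P1, P2: 3x4 projection matrices; uv1, uv2: matched pixel pairs."""
          A = np.vstack([uv1[0] * P1[2] - P1[0],
                         uv1[1] * P1[2] - P1[1],
                         uv2[0] * P2[2] - P2[0],
                         uv2[1] * P2[2] - P2[1]])
          _, _, Vt = np.linalg.svd(A)              # least-squares null vector
          X = Vt[-1]
          return X[:3] / X[3]                      # world coordinates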

  2. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor-space 3D visual reconstruction has many applications and, once done accurately, enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster by using 3D visual information of a destroyed building. Therefore, an accurate indoor-space 3D visual reconstruction system that can be operated in any given environment without GPS has been developed, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  3. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x, y, t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x, y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level; a sketch of the forward model appears below. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x, y, t, λ) and focal (x, y, t, z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical-optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x, y, z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position, and we demonstrate this technique's capability to generate arbitrary point spread functions.
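
    A minimal sketch of that coded-exposure forward model (mask, sizes and the vertical-translation coding are illustrative assumptions):

      # Sketch: a translating binary mask codes each temporal slice before
      # the sensor integrates them into one frame: y = sum_t C_t * x_t.
      import numpy as np

      rng = np.random.default_rng(0)
      T, H, W = 8, 64, 64
      video = rng.random((T, H, W))              # latent space-time volume x_t
      mask0 = (rng.random((H, W)) > 0.5).astype(float)

      masks = np.stack([np.roll(mask0, t, axis=0) for t in range(T)])
      coded_frame = (masks * video).sum(axis=0)  # single coded snapshot y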

  4. Interference of mobile phones and digitally enhanced cordless telecommunications mobile phones in renal scintigraphy.

    PubMed

    Stegmayr, Armin; Fessl, Benjamin; Hörtnagl, Richard; Marcadella, Michael; Perkhofer, Susanne

    2013-08-01

    The aim of the study was to assess the potential negative impact of cellular phones and digitally enhanced cordless telecommunication (DECT) devices on the quality of static and dynamic scintigraphy to avoid repeated testing in infant and teenage patients to protect them from unnecessary radiation exposure. The assessment was conducted by performing phantom measurements under real conditions. A functional renal-phantom acting as a pair of kidneys in dynamic scans was created. Data were collected using the setup of cellular phones and DECT phones placed in different positions in relation to a camera head to test the potential interference of cellular phones and DECT phones with the cameras. Cellular phones reproducibly interfered with the oldest type of gamma camera, which, because of its single-head specification, is the device most often used for renal examinations. Curves indicating the renal function were considerably disrupted; cellular phones as well as DECT phones showed a disturbance concerning static acquisition. Variable electromagnetic tolerance in different types of γ-cameras could be identified. Moreover, a straightforward, low-cost method of testing the susceptibility of equipment to interference caused by cellular phones and DECT phones was generated. Even though some departments use newer models of γ-cameras, which are less susceptible to electromagnetic interference, we recommend testing examination rooms to avoid any interference caused by cellular phones. The potential electromagnetic interference should be taken into account when the purchase of new sensitive medical equipment is being considered, not least because the technology of mobile communication is developing fast, which also means that different standards of wave bands will be issued in the future.

  5. Development of on-line laser power monitoring system

    NASA Astrophysics Data System (ADS)

    Ding, Chien-Fang; Lee, Meng-Shiou; Li, Kuan-Ming

    2016-03-01

    Since the laser was invented, lasers have been applied in many fields such as material processing, communication, measurement, biomedical engineering, and defense industries. Laser power is an important parameter in laser material processing, e.g. laser cutting and laser drilling. However, laser power is easily affected by the environment temperature, so its status must be monitored to ensure effective material processing. Moreover, the response time of current laser power meters is too long to measure laser power accurately over short intervals; knowing the status of the laser power precisely, in real time, helps achieve effective material processing. To monitor the laser power, this study utilizes a CMOS (complementary metal-oxide-semiconductor) camera to develop an on-line laser power monitoring system. The CMOS camera captures images of the incident laser beam after it is split and attenuated by a beam splitter and a neutral density filter. By comparing the average brightness of the beam spots with measurement results from a laser power meter, the laser power can be estimated. Under continuous measuring mode, the average measuring error is about 3%, and the response time is at least 3.6 s shorter than that of thermopile power meters; under trigger measuring mode, which enables the CMOS camera to synchronize with intermittent laser output, the average measuring error is less than 3%, and the shortest response time is 20 ms.
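
    A minimal sketch of the brightness-to-power calibration described above, with assumed calibration pairs standing in for measured data:

      # Sketch: linear fit of power-meter readings against mean spot brightness.
      import numpy as np

      brightness = np.array([31.0, 58.5, 90.2, 121.7])  # assumed mean brightness
      power_w = np.array([5.0, 10.0, 15.0, 20.0])       # assumed meter readings

      gain, offset = np.polyfit(brightness, power_w, 1)

      def estimate_power(spot_image):
          return gain * spot_image.mean() + offset      # estimated power in W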

  6. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    NASA Astrophysics Data System (ADS)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.

  7. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization.

    PubMed

    Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel

    2018-05-03

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object compared to traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels that were due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).

  8. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization

    PubMed Central

    Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel

    2018-01-01

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object compared to traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels that were due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560

  9. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
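
    The geolocation computation itself is compact. Below is a minimal flat-earth sketch (adequate over typical rangefinder distances) combining the GPS fix, compass bearing, and laser range; the function name and the metres-per-degree constant are our assumptions, not details of the Firefly system:

    ```python
    import math

    def geolocate(lat_deg: float, lon_deg: float, bearing_deg: float,
                  slant_range_m: float, elevation_deg: float = 0.0):
        """Approximate target latitude/longitude from the observer's GPS fix,
        digital-compass bearing, and laser-rangefinder range."""
        ground_range = slant_range_m * math.cos(math.radians(elevation_deg))
        north = ground_range * math.cos(math.radians(bearing_deg))
        east = ground_range * math.sin(math.radians(bearing_deg))
        metres_per_deg = 111320.0        # approx. length of one degree of latitude
        dlat = north / metres_per_deg
        dlon = east / (metres_per_deg * math.cos(math.radians(lat_deg)))
        return lat_deg + dlat, lon_deg + dlon
    ```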

  10. Final Report for the Advanced Camera for Surveys (ACS) from Ball Aerospace and Technologies Corporation

    NASA Technical Reports Server (NTRS)

    Volmer, Paul; Sullivan, Pam (Technical Monitor)

    2003-01-01

    The Advanced Camera for Surveys ACS was launched aboard the Space Shuttle Columbia just before dawn on March 1, 2002. After the shuttle successfully docked with the Hubble Space Telescope (HST), several components were replaced. One of the components was the Advanced Camera for Surveys built by Ball Aerospace & Technologies Corp. (BATC) in Boulder, Colorado. Over the life of the HST contract at BATC, hundreds of employees had the pleasure of working on the concept, design, fabrication, assembly and test of ACS. Those employees thank NASA - Goddard Space Flight Center and the science team at Johns Hopkins University (JHU) for the opportunity to participate in building a great science instrument for HST. After installation in HST, a mini-functional test was performed and later a complete functional test. ACS performed well and has continued performing well since then. One of the greatest rewards for the BATC employees is a satisfied science team. Following is an excerpt from the JHU final report: "The foremost promise of ACS was to increase Hubble's capability for surveys in the near infrared by a factor of 10. That promise was kept."

  11. Low Statistics Reconstruction of the Compton Camera Point Spread Function in 3D Prompt-γ Imaging of Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy

    2013-10-01

    The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
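
    For the iterative reconstruction mentioned above, the list-mode MLEM update has a compact generic form, sketched below; it assumes the event-by-voxel system probabilities have already been computed (e.g. from the Compton-cone geometry), which is where the real modelling effort of a Compton camera lies:

    ```python
    import numpy as np

    def lm_mlem(P: np.ndarray, sensitivity: np.ndarray, n_iter: int = 20):
        """List-mode MLEM sketch. P[i, j] is the probability that detected
        event i originated from voxel j; sensitivity[j] is the detection
        sensitivity of voxel j."""
        lam = np.ones(P.shape[1])              # uniform initial image
        for _ in range(n_iter):
            forward = P @ lam + 1e-12          # expected contribution per event
            back = P.T @ (1.0 / forward)       # backproject the event ratios
            lam *= back / sensitivity          # multiplicative update
        return lam
    ```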

  12. Payload topography camera of Chang'e-3

    NASA Astrophysics Data System (ADS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-11-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and completed imaging of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application.

  13. Enhanced technologies for unattended ground sensor systems

    NASA Astrophysics Data System (ADS)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  14. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ), which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching for demodulation by the analysis pipeline in order to suppress the sky fluctuation. The system has to be able to merge and save all data acquired both by the camera and by the telescope, including the cryostat temperature and pressure and the telescope pointing. We developed a collection of software that implements these functions and runs as a TCP/IP server on a workstation. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and obtained successful scan signals of the atmosphere and of the Moon.

  15. Microbolometer characterization with the electronics prototype of the IRCAM for the JEM-EUSO mission

    NASA Astrophysics Data System (ADS)

    Martín, Yolanda; Joven, Enrique; Reyes, Marcos; Licandro, Javier; Maroto, Oscar; Díez-Merino, Laura; Tomas, Albert; Carbonell, Jordi; Morales de los Ríos, J. A.; del Peral, Luis; Rodríguez-Frías, M. D.

    2014-08-01

    JEM-EUSO is a space observatory that will be attached to the Japanese module of the International Space Station (ISS) to observe the UV photon tracks produced by Ultra High Energy Cosmic Rays (UHECR) interacting with atmospheric nuclei. The observatory comprises an Atmospheric Monitoring System (AMS) to gather data about the status of the atmosphere, including an infrared camera (IRCAM) for cloud coverage and cloud-top height detection. This paper describes the design and characterization tests of IRCAM, which is the responsibility of the Spanish JEM-EUSO Consortium. The core of IRCAM is a 640x480 microbolometer array, the ULIS 04171, sensitive to radiation in the range 7 to 14 microns. The microbolometer array has been tested using the Front End Electronics Prototype (FEEP). This custom-designed electronics corresponds to the Breadboard Model, a design built to verify the camera requirements in the laboratory. The FEEP controls the configuration of the microbolometer, digitizes the detector output, sends data to the Instrument Control Unit (ICU), and controls the microbolometer temperature to a stability of 10 mK. Furthermore, the FEEP allows IRCAM to preprocess images thanks to the addition of a powerful FPGA. This prototype has been characterized in the laboratories of the Instituto de Astrofisica de Canarias (IAC). The main results, including the detector response as a function of scene temperature, the NETD, and the Non-Uniformity Correction (NUC), are shown. The thermal resolution meets the system requirements, with an NETD lower than 1 K even with the narrow-band filters, which allow us to retrieve the cloud temperature using stereovision algorithms.

  16. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a practical real-time system and its accuracy is higher than the manufacturer's calibration.

  17. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a practical real-time system and its accuracy is higher than the manufacturer’s calibration. PMID:28672823

  18. Contactless sub-millimeter displacement measurements

    NASA Astrophysics Data System (ADS)

    Sliepen, Guus; Jägers, Aswin P. L.; Bettonvil, Felix C. M.; Hammerschlag, Robert H.

    2008-07-01

    Weather effects on foldable domes, as used at the DOT and GREGOR, are investigated, in particular the correlation between the wind field and the stresses induced in both the metal framework and the tent cloth. Camera systems measure, without contact, the displacements of several dome points. The stresses follow from the measured deformation pattern. The cameras, placed near the dome floor, do not disturb telescope operations. In the set-ups of DOT and GREGOR, these cameras are up to 8 meters away from the measured points and must be able to detect displacements of less than 0.1 mm. The cameras have a FireWire (IEEE 1394) interface to eliminate the need for frame grabbers. Each camera captures 15 images of 640 × 480 pixels per second. All data are processed on-site in real time. In order to get the best estimate of the displacement within the constraints of available processing power, all image processing is done in Fourier space, with all convolution operations pre-computed once. A sub-pixel estimate of the peak of the correlation function is made. This makes it possible to process the images of four cameras using only one commodity PC with a dual-core processor and to achieve an effective sensitivity of up to 0.01 mm. The deformation measurements correlate well with the simultaneous wind measurements. The results are of high interest for upscaling the dome design (ELTs and solar telescopes).
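
    A minimal sketch of the Fourier-space displacement estimation described above: correlate a reference frame against a new frame via the FFT and refine the correlation peak with a three-point parabolic fit to reach sub-pixel resolution. The phase-correlation normalization and the parabolic refinement are common choices assumed here, not necessarily the exact ones used at the DOT and GREGOR:

    ```python
    import numpy as np

    def phase_correlation_shift(ref: np.ndarray, img: np.ndarray):
        """Estimate the (dy, dx) displacement between two frames."""
        F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
        ny, nx = corr.shape
        py, px = np.unravel_index(np.argmax(corr), corr.shape)

        def refine(cm, c0, cp):
            # Vertex of the parabola through three samples around the peak.
            denom = cm - 2.0 * c0 + cp
            return 0.0 if denom == 0.0 else 0.5 * (cm - cp) / denom

        dy = py + refine(corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
        dx = px + refine(corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])
        # FFT correlation is circular: wrap shifts into a signed range.
        if dy > ny / 2:
            dy -= ny
        if dx > nx / 2:
            dx -= nx
        return dy, dx
    ```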

  19. IDEAS and App Development Internship in Hardware and Software Design

    NASA Technical Reports Server (NTRS)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  20. Foliar Temperature Gradients as Drivers of Budburst in Douglas-fir: New Applications of Thermal Infrared Imagery

    NASA Astrophysics Data System (ADS)

    Miller, R.; Lintz, H. E.; Thomas, C. K.; Salino-Hugg, M. J.; Niemeier, J. J.; Kruger, A.

    2014-12-01

    Budburst, the initiation of annual growth in plants, is sensitive to climate and is used to monitor physiological responses to climate change. Accurately forecasting budburst response to these changes demands an understanding of the drivers of budburst. Current research and predictive models focus on population or landscape-level drivers, yet fundamental questions regarding drivers of budburst diversity within an individual tree remain unanswered. We hypothesize that foliar temperature, an important physiological property, may be a dominant driver of differences in the timing of budburst within a single tree. Studying these differences facilitates development of high-throughput phenotyping technology used to improve predictive budburst models. We present spatial and temporal variation in foliar temperature as a function of physical drivers, culminating in a single-tree budburst model based on foliar temperature. We use a novel remote sensing approach, combined with on-site meteorological measurements, to demonstrate important intra-canopy differences between air and foliar temperature. We mounted a thermal infrared camera within an old-growth canopy at the H.J. Andrews LTER forest and imaged an 8 m by 10.6 m section of a Douglas-fir crown. Sampling one image per minute, we collected approximately 30,000 thermal infrared images over a one-month period to approximate foliar temperature before, during and after budburst. Using time-lapse photography in the visible spectrum, we documented budburst at fifteen-minute intervals with eight cameras stratified across the thermal infrared camera's field of view. Within the imaged tree's crown, we installed a pyranometer, a 2D sonic anemometer and a fan-aspirated thermohygrometer and collected 3,000 measurements of net shortwave radiation, wind speed, air temperature and relative humidity. We documented a difference of several days in the timing of budburst across both vertical and horizontal gradients. We also observed clear spatial and temporal foliar temperature gradients. In addition to exploring physical drivers of budburst, this remote sensing approach provides insight into intra-canopy structural complexity and opportunities to advance our understanding of vegetation-atmosphere interactions.

  1. Digital cartography of Io

    NASA Technical Reports Server (NTRS)

    Mcewen, Alfred S.; Duck, B.; Edwards, Kathleen

    1991-01-01

    A high resolution controlled mosaic of the hemisphere of Io centered on longitude 310 degrees is produced. Digital cartographic techniques were employed. Approximately 80 Voyager 1 clear and blue filter frames were utilized. This mosaic was merged with low-resolution color images. This dataset is compared to the geologic map of this region. Passage of the Voyager spacecraft through the Io plasma torus during acquisition of the highest resolution images exposed the vidicon detectors to ionizing radiation, resulting in dark-current buildup on the vidicon. Because the vidicon is scanned from top to bottom, more charge accumulated toward the bottom of the frames, and the additive error increases from top to bottom as a ramp function. This ramp function was removed by using a model. Photometric normalizations were applied using the Minnaert function. An attempt to use Hapke's photometric function revealed that this function does not adequately describe Io's limb darkening at emission angles greater than 80 degrees. In contrast, the Minnaert function accurately describes the limb darkening up to emission angles of about 89 degrees. The improved set of discrete camera angles derived from this effort will be used in conjunction with the space telemetry pointing history file (the IPPS file), corrected on 4 or 12 second intervals, to derive a revised time history for the pointing of the Infrared Interferometric Spectrometer (IRIS). For IRIS observations acquired between camera shutterings, the IPPS file can be corrected by linear interpolation, provided that the spacecraft motions were continuous. Image areas corresponding to the fields of view of IRIS spectra acquired between camera shutterings will be extracted from the mosaic to place the IRIS observations and hotspot models into geologic context.
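
    The Minnaert normalization applied above has a simple closed form, I = I0 * mu0^k * mu^(k-1), with mu0 and mu the cosines of the incidence and emission angles; the sketch below divides the limb-darkening term out of the observed brightness. The exponent k = 0.8 is an illustrative value, not the one fitted for Io:

    ```python
    import numpy as np

    def minnaert_normalize(brightness, incidence_deg, emission_deg, k=0.8):
        """Remove Minnaert limb darkening, I = I0 * mu0**k * mu**(k - 1),
        returning the normalized brightness I0."""
        mu0 = np.cos(np.radians(incidence_deg))   # cosine of incidence angle
        mu = np.cos(np.radians(emission_deg))     # cosine of emission angle
        return brightness / (mu0 ** k * mu ** (k - 1.0))
    ```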

  2. Innovative R.E.A. tools for integrated bathymetric survey

    NASA Astrophysics Data System (ADS)

    Demarte, Maurizio; Ivaldi, Roberta; Sinapi, Luigi; Bruzzone, Gabriele; Caccia, Massimo; Odetti, Angelo; Fontanelli, Giacomo; Masini, Andrea; Simeone, Emilio

    2017-04-01

    REA (Rapid Environmental Assessment) is a methodology designed to acquire environmental information, process it, and return it in standard paper-chart or digital formats. The acquired data thus become available for ingestion and exploitation by civil protection emergency organizations or rapid response forces. The use of Remotely Piloted Aircraft Systems (RPAS) carrying miniaturized multispectral or hyperspectral cameras gives the operator the ability to react quickly while collecting a large amount of diverse data and delivering a very large number of products. The proposed methodology incorporates data collected by remote and autonomous sensors that cover survey areas in a cost-effective manner. The hyperspectral sensors are able to map seafloor morphology, seabed structure, and depth of the bottom surface, and to provide an estimate of sediment development. The relevant spectral bands are selected using an appropriate configuration of the hyperspectral cameras to maximize spectral resolution. Data acquired by the hyperspectral camera are geo-referenced synchronously with an Attitude and Heading Reference System (AHRS) sensor. The data can undergo a first on-board processing step on the unmanned vehicle before being transferred through the Ground Control Station (GCS) to a Processing Exploitation Dissemination (PED) system. The recent introduction of Data Distribution System (DDS) capabilities in PED allows a cooperative, distributed approach to modern decision making. Two platforms are used in our project: a Remotely Piloted Aircraft System (RPAS) and an Unmanned Surface Vehicle (USV). The two platforms interact to cover a survey area wider than either vehicle could cover alone. The USV, especially designed to work in very shallow water, has a modular structure and an open hardware and software architecture allowing easy installation and integration of various sensors useful for seabed analysis. The very stable platform located on top of the USV allows take-off and landing of the RPAS. By exploiting its greater power autonomy and load capability, the USV is used as a mothership for the RPAS. In particular, during missions the USV can recharge the RPAS and act as a bridge for communication between the RPAS and its control station. The main advantage of the system is the remote acquisition of high-resolution bathymetric data from the RPAS in areas where systematic traditional surveys are scarcely feasible or impossible. These tools (a USV carrying an RPAS with a hyperspectral camera) constitute an innovative and powerful system that gives emergency response units the right instruments to react quickly. The development of this system could resolve the classic conflict between resolution, needed to capture fine-scale variability, and coverage, needed for large environmental phenomena, in settings such as the coastal environment, which varies strongly over a wide range of spatial and temporal scales.

  3. Cameras on the NEPTUNE Canada seafloor observatory: Towards monitoring hydrothermal vent ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.

    2010-12-01

    Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. These installations aim at monitoring variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will also permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.

  4. Road safety enhancement: an investigation on the visibility of on-road image projections using DMD-based pixel light systems

    NASA Astrophysics Data System (ADS)

    Rizvi, Sadiq; Ley, Peer-Phillip; Knöchelmann, Marvin; Lachmayer, Roland

    2018-02-01

    Research reveals that visual information forms the major portion of the data received while driving. At night, owing to the scarcity and inhomogeneity of light, human physiology and psychology undergo a dramatic alteration. Although the likelihood of an accident is higher during the day due to heavier traffic, the most fatal accidents still occur at night. How can road safety be improved in limited lighting conditions using DMD-based high-resolution headlamps? DMD-based pixel light systems, utilizing HID and LED light sources, are able to address hundreds of thousands of pixels individually. Using camera information, this capability allows 'glare-free' light distributions that adapt precisely to the needs of all road users. What really makes these systems stand out, however, is their on-road image projection capability. This projection functionality may be used in cooperation with other driver assistance systems as an assist feature for projecting navigation data, warning signs, car status information, etc. Since contrast sensitivity constitutes a decisive measure of human visual function, a core question arises: what distributions of luminance in the projection space produce highly visible on-road image projections? This work seeks to address that question. Responses to sets of differently illuminated projections were collected from a group of participants and later interpreted using statistical data obtained with a luminance camera. Some aspects of the correlation between contrast ratio, symbol form and attention capture are also discussed.
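
    Contrast between a projected symbol and the road surface is the central quantity here; one common way to express it from luminance-camera measurements is the Weber contrast, sketched below (the function and variable names are ours, not the paper's):

    ```python
    def weber_contrast(l_symbol_cd_m2: float, l_road_cd_m2: float) -> float:
        """Weber contrast of a projected symbol against the road background,
        with both luminances measured in cd/m^2 by a luminance camera."""
        return (l_symbol_cd_m2 - l_road_cd_m2) / l_road_cd_m2
    ```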

  5. Methods and new approaches to the calculation of physiological parameters by videodensitometry

    NASA Technical Reports Server (NTRS)

    Kedem, D.; Londstrom, D. P.; Rhea, T. C., Jr.; Nelson, J. H.; Price, R. R.; Smith, C. W.; Graham, T. P., Jr.; Brill, A. B.; Kedem, D.

    1976-01-01

    A complex system featuring a video camera connected to a video disk, a cine (medical motion picture) camera, and a PDP-9 computer with various input/output facilities has been developed. This system enables quantitative analysis of various functions recorded in clinical studies. Several studies are described, such as heart chamber volume calculations, left ventricle ejection fraction, and blood flow through the lungs, along with the possibility of obtaining information about blood flow and constrictions in small cross-section vessels.

  6. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  7. Visual enhancement of laparoscopic partial nephrectomy with 3-charge coupled device camera: assessing intraoperative tissue perfusion and vascular anatomy by visible hemoglobin spectral response.

    PubMed

    Crane, Nicole J; Gillern, Suzanne M; Tajkarimi, Kambiz; Levin, Ira W; Pinto, Peter A; Elster, Eric A

    2010-10-01

    We report the novel use of 3-charge coupled device camera technology to infer tissue oxygenation. The technique can aid surgeons to reliably differentiate vascular structures and noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. We analyzed select digital video images from 10 laparoscopic partial nephrectomies for their individual 3-charge coupled device response. We enhanced surgical images by subtracting the red charge coupled device response from the blue response and overlaying the calculated image on the original image. Mean intensity values for regions of interest were compared and used to differentiate arterial and venous vasculature, and ischemic and nonischemic renal parenchyma. The 3-charge coupled device enhanced images clearly delineated the vessels in all cases. Arteries were indicated by an intense red color while veins were shown in blue. Differences in mean region of interest intensity values for arteries and veins were statistically significant (p <0.0001). Three-charge coupled device analysis of pre-clamp and post-clamp renal images revealed visible, dramatic color enhancement for ischemic vs nonischemic kidneys. Differences in the mean region of interest intensity values were also significant (p <0.05). We present a simple use of conventional 3-charge coupled device camera technology in a way that may provide urological surgeons with the ability to reliably distinguish vascular structures during hilar dissection, and detect and monitor changes in renal tissue perfusion during and after warm ischemia. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
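
    One plausible reading of the enhancement step described above, sketched for a floating-point RGB frame in [0, 1]; the blend weight and the clipping are our assumptions, not details from the paper:

    ```python
    import numpy as np

    def enhance(frame: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        """Subtract the red-channel response from the blue-channel response and
        overlay the difference on the original frame (RGB, float, in [0, 1])."""
        diff = np.clip(frame[..., 2] - frame[..., 0], 0.0, 1.0)  # blue minus red
        overlay = frame.copy()
        overlay[..., 2] = np.clip(frame[..., 2] + alpha * diff, 0.0, 1.0)
        return overlay
    ```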

  8. Target-Tracking Camera for a Metrology System

    NASA Technical Reports Server (NTRS)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
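
    The appeal of the PSD is that the centroid reduces to an analog current ratio; the standard one-dimensional relation is sketched below, with the end-contact photocurrents and active length as assumed variable names:

    ```python
    def psd_position(i1: float, i2: float, length_mm: float) -> float:
        """Spot position on a one-dimensional PSD from its two end-contact
        photocurrents; x = 0 at the centre, +/- length_mm / 2 at the contacts."""
        return 0.5 * length_mm * (i2 - i1) / (i1 + i2)
    ```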

  9. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    NASA Astrophysics Data System (ADS)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for the canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoids concentration and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as the ground validation for satellites. Using the high-temporal resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties to use canopy color as the proxy of ecosystem functioning.
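
    The camera metric discussed above, the green chromatic coordinate, is easy to state precisely; this sketch computes it over a canopy region of interest (the boolean-mask scheme is our assumption):

    ```python
    import numpy as np

    def green_chromatic_coordinate(image: np.ndarray, roi: np.ndarray) -> float:
        """gcc = G / (R + G + B), averaged over a canopy region of interest.
        image is an RGB array; roi is a boolean mask of the canopy pixels."""
        r, g, b = (image[..., c][roi].astype(float) for c in range(3))
        return float(np.mean(g / (r + g + b + 1e-12)))
    ```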

  10. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.

  11. A CMOS One-chip Wireless Camera with Digital Image Transmission Function for Capsule Endoscopes

    NASA Astrophysics Data System (ADS)

    Itoh, Shinya; Kawahito, Shoji; Terakawa, Susumu

    This paper presents the design and implementation of a one-chip camera device for capsule endoscopes. This experimental chip integrates the functional circuits required for capsule endoscopes with a digital image transmission function. The integrated functional blocks include an image array, a timing generator, a clock generator, a voltage regulator, a 10-bit cyclic A/D converter, and a BPSK modulator. It can operate autonomously with 3 pins (VDD, GND, and DATAOUT). A prototype image sensor chip with 320x240 effective pixels was fabricated using a 0.25 μm CMOS image sensor process, and autonomous imaging was demonstrated. The chip size is 4.84 mm x 4.34 mm. With a 2.0 V power supply, the analog part consumes 950 μW and the total power consumption at 2 frames per second (fps) is 2.6 mW. Error-free image transmission over a distance of 48 cm at 2.5 Mbps (corresponding to 2 fps) was achieved using inductive coupling.

  12. Triton Mosaic

    NASA Image and Video Library

    1999-08-25

    Mosaic of Triton constructed from 16 individual images. After globally minimizing the camera pointing errors, the frames were reprocessed by map projection, photometric function removal, and placement in the mosaic.

  13. Analyzing RCD30 Oblique Performance in a Production Environment

    NASA Astrophysics Data System (ADS)

    Soler, M. E.; Kornus, W.; Magariños, A.; Pla, M.

    2016-06-01

    In 2014 the Institut Cartogràfic i Geològic de Catalunya (ICGC) decided to incorporate digital oblique imagery into its portfolio in response to the growing demand for this product. The reason can be attributed to its useful applications in a wide variety of fields and, most recently, to an increasing interest in 3D modeling. The selection phase for a digital oblique camera led to the purchase of the Leica RCD30 Oblique system, an 80 MPixel multispectral medium-format camera which consists of one nadir camera and four oblique-viewing cameras acquiring images at an off-nadir angle of 35º. The system also has an on-board multi-directional motion compensation system to deliver the highest image quality. The emergence of airborne oblique cameras has run in parallel to the inclusion of computer vision algorithms in traditional photogrammetric workflows. Such algorithms rely on having multiple views of the same area of interest and take advantage of image redundancy for automatic feature extraction. The multiview capability is strongly fostered by oblique systems, which simultaneously capture different points of view at each camera shot. Different companies and NMAs have started pilot projects to assess the capabilities of the 3D mesh that can be obtained using correlation techniques. Beyond a software prototyping phase, and taking into account the currently immature state of several components of the oblique imagery workflow, the ICGC has focused on deploying a real production environment, with special interest in matching the performance and quality of the existing production lines based on classical nadir images. This paper introduces different test scenarios and layouts to analyze the impact of different variables on the geometric and radiometric performance. Variables such as flight altitude, side and forward overlap, and ground control point measurements and locations have been considered in the evaluation of aerial triangulation and stereo plotting. Furthermore, two different flight configurations have been designed to measure the quality of the absolute radiometric calibration and the resolving power of the system. To quantify the effective resolving power of RCD30 Oblique images, a tool based on computation of the Line Spread Function has been developed. The tool processes a region of interest that contains a single contour in order to extract a numerical measure of edge smoothness for a given flight session. The ICGC is strongly committed to deriving information from satellite and airborne multispectral remote sensing imagery. A seamless Normalized Difference Vegetation Index (NDVI) retrieved from Digital Metric Camera (DMC) reflectance imagery is one of the products in the ICGC's portfolio. As an evolution of this well-defined product, this paper presents an evaluation of the absolute radiometric calibration of the RCD30 Oblique sensor. To assess the quality of the measurement, the ICGC has developed a procedure based on simultaneous acquisition of RCD30 Oblique imagery and radiometrically calibrated AISA (Airborne Hyperspectral Imaging System) imagery.
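
    The resolving-power tool described above can be approximated in a few lines: average a region of interest containing a single straight edge into an edge spread function (ESF), differentiate it into the line spread function (LSF), and report a width measure. The vertical-edge assumption and the FWHM-in-pixels output are ours, not necessarily what the ICGC tool reports:

    ```python
    import numpy as np

    def edge_smoothness(roi: np.ndarray) -> float:
        """Edge-smoothness measure from a region containing one vertical edge:
        full width at half maximum of the LSF, in pixels."""
        esf = roi.astype(float).mean(axis=0)   # average rows across the edge
        lsf = np.abs(np.diff(esf))             # LSF = derivative of the ESF
        lsf /= lsf.max()
        above = np.nonzero(lsf >= 0.5)[0]
        return float(above[-1] - above[0] + 1)
    ```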

  14. Humidity compensation of bad-smell sensing system using a detector tube and a built-in camera

    NASA Astrophysics Data System (ADS)

    Hirano, Hiroyuki; Nakamoto, Takamichi

    2011-09-01

    We developed a low-cost sensing system robust against humidity change for detecting and estimating the concentration of bad smells, such as hydrogen sulfide and ammonia. In a previous study, we developed an automated measurement system for a gas detector tube using a built-in camera instead of the conventional manual inspection of the tube. The concentration detectable by the developed system ranges from a few tens of ppb to a few tens of ppm. However, we previously found that the estimated concentration depends not only on the actual concentration but also on humidity. Here, we established a method to correct the influence of humidity by creating a regression function with discoloration rate and humidity as its inputs. We studied two regression methods (backpropagation and radial basis function networks) and evaluated them. Consequently, the system successfully estimated the concentration at a practical level even when the humidity changed.
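
    Of the two regression methods mentioned, the radial basis function network is compact enough to sketch: place Gaussian bases at the training points and solve the output weights by least squares. The training data, basis width, and two-input layout (discoloration rate, humidity) are illustrative assumptions:

    ```python
    import numpy as np

    def rbf_design(X: np.ndarray, centres: np.ndarray, width: float = 0.3):
        """Gaussian RBF design matrix for inputs X against the given centres."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Hypothetical training data: (discoloration rate, relative humidity) -> ppm.
    X_train = np.array([[0.2, 0.3], [0.5, 0.4], [0.8, 0.6], [0.4, 0.8]])
    y_train = np.array([0.05, 0.40, 1.10, 0.30])

    # Solve the linear output weights by least squares.
    W = np.linalg.lstsq(rbf_design(X_train, X_train), y_train, rcond=None)[0]

    def predict(x) -> float:
        """Estimate concentration for one (discoloration rate, humidity) pair."""
        return float(rbf_design(np.atleast_2d(x), X_train) @ W)
    ```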

  15. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at either end of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that the detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying triangulation, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, enabling the 3-D positional measurement of radioactive isotopes for the first time. Simulation data for a single point source showed that the source position and distance could typically be determined to within 2 meters, and simulation data for two point sources confirmed that multiple sources are clearly separated by event selection.
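
    The stereo-measurement concept reduces, per source, to intersecting two sightlines in 3-D; a minimal sketch under a standard closest-approach formulation (not necessarily the exact method of the paper) takes the midpoint of the shortest segment between the two, generally skew, lines:

    ```python
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Source position from two camera sightlines p1 + t*d1 and p2 + s*d2
        (p: camera positions, d: unit direction vectors toward the source)."""
        p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
        p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b              # ~0 only if the sightlines are parallel
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
    ```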

  16. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve accuracy using the corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame, high-speed acquisition. © 2015 Wiley Periodicals, Inc.
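
    The phasor transform used in this kind of analysis has a simple per-pixel form: the first Fourier component of the K phase-stepped images, normalized by the DC component. A minimal sketch under that assumption (scaling and sign conventions vary between implementations):

    ```python
    import numpy as np

    def phasor(stack: np.ndarray):
        """Per-pixel phasor coordinates (g, s) from a stack of K images taken
        at equally spaced modulation phases; stack has shape (K, H, W)."""
        fft = np.fft.fft(stack, axis=0)
        dc = fft[0].real + 1e-12          # average intensity per pixel
        g = fft[1].real / dc
        s = -fft[1].imag / dc             # minus sign from the FFT convention
        return g, s
    ```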

  17. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    PLZT 9/65/35 (Pb0.91La0.09(Zr0.65Ti0.35)0.9775O3), commonly used as an electro-optical shutter material, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied as a diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. The MTF reduction observed when placing the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we acquired visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was completely blocked in the 'OFF' state. From these tests, the PLZT shutter has been found to be useful as a diaphragm for visible video cameras. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  18. Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings

    NASA Technical Reports Server (NTRS)

    Reda, Daniel C.; Muraqtore, J. J.; Heinick, James T.

    1994-01-01

    Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing schlieren system and recorded with a 100 frame/s color video camera.

  19. Photogrammetry Toolbox Reference Manual

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Burner, Alpheus W.

    2014-01-01

    Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.

  20. More About Hazard-Response Robot For Combustible Atmospheres

    NASA Technical Reports Server (NTRS)

    Stone, Henry W.; Ohm, Timothy R.

    1995-01-01

    Report presents additional information about design and capabilities of mobile hazard-response robot called "Hazbot III." Designed to operate safely in combustible and/or toxic atmospheres. Includes cameras and chemical sensors that help human technicians determine the location and nature of a hazard so the human emergency team can decide how to eliminate it without approaching it themselves.

  1. Response of captive, breeding mallards to oiled water

    USGS Publications Warehouse

    Custer, T.W.; Albers, P.H.

    1980-01-01

    Behavioral responses of mallard ducks to Prudhoe Bay crude oil slicks on water basins were examined. Water basins were oiled with either 5 or 100 µl of oil and monitored with time-lapse cameras for 24 hr before and after water treatment. The time of first entry and the amount of time spent on the water were measured.

  2. A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies

    NASA Astrophysics Data System (ADS)

    Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.

    2015-12-01

    Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and have been field-tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing them in a variety of formats including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems. Imagery can be viewed and analyzed in RGB, HSV, and L*a*b* color space. Users can select a spectral index derived from the published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.

  3. A math model for high velocity sensoring with a focal plane shuttered camera.

    NASA Technical Reports Server (NTRS)

    Morgan, P.

    1971-01-01

    A new mathematical model is presented which describes the image produced by a focal-plane-shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to the exposure interval, delta t, yield the general matrix expressions for image velocities, which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable and to provide excellent results, provided that some external information is available on the velocity parameters.
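
    For reference, the collinearity condition equations on which such models are built take the standard photogrammetric form below (generic notation assumed here: principal distance f, principal point (x_0, y_0), rotation matrix entries r_ij, camera center (X_c, Y_c, Z_c); the paper's own symbols may differ). The image-velocity expressions follow by letting the camera center and rotation vary with time over the exposure interval delta t and differentiating.

      \[
      x - x_0 = -f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                         {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)},
      \qquad
      y - y_0 = -f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                         {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
      \]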

  4. Geometry-driven distributed compression of the plenoptic function: performance bounds and constructive algorithms.

    PubMed

    Gehrig, Nicolas; Dragotti, Pier Luigi

    2009-03-01

    In this paper, we study the sampling and the distributed compression of the data acquired by a camera sensor network. The effective design of these sampling and compression schemes requires, however, the understanding of the structure of the acquired data. To this end, we show that the a priori knowledge of the configuration of the camera sensor network can lead to an effective estimation of such structure and to the design of effective distributed compression algorithms. For idealized scenarios, we derive the fundamental performance bounds of a camera sensor network and clarify the connection between sampling and distributed compression. We then present a distributed compression algorithm that takes advantage of the structure of the data and that outperforms independent compression algorithms on real multiview images.

  5. Image acquisition in the Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Jegier, M.; Nawrocki, K.; Poźniak, K.; Sokołowski, M.

    2006-10-01

    Modern astronomical image acquisition systems dedicated to sky surveys provide large amounts of data in a single measurement session. During one session lasting a few hours, it is possible to collect as much as 100 GB of data, which must be transferred from the camera and processed. This paper presents some aspects of image acquisition in a sky survey image acquisition system. It describes a dedicated USB Linux driver for the first version of the "Pi of the Sky" CCD camera (later versions also have an Ethernet interface) and the test program for the camera, together with a driver-wrapper providing core device functionality. Finally, the paper contains a description of an algorithm for matching several images based on image features, i.e., star positions and their brightness.
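
    The matching step described at the end pairs detected stars across frames by position and brightness. A minimal sketch of such a matcher is given below, assuming star centroids and instrumental magnitudes have already been extracted; the tolerances and the greedy nearest-neighbour strategy are illustrative assumptions, not the project's actual algorithm.

      import numpy as np

      def match_stars(cat_xy, cat_mag, img_xy, img_mag, max_dist=2.0, max_dmag=1.0):
          """Greedy nearest-neighbour matching of detected stars to a reference list.

          cat_xy, img_xy   : (N, 2) and (M, 2) arrays of star centroids (pixels)
          cat_mag, img_mag : corresponding brightness estimates (magnitudes)
          Returns a list of (reference_index, image_index) pairs.
          """
          matches = []
          used = set()
          for i, (pos, mag) in enumerate(zip(cat_xy, cat_mag)):
              d = np.hypot(*(img_xy - pos).T)       # distances to every detection
              j = int(np.argmin(d))
              if d[j] <= max_dist and abs(img_mag[j] - mag) <= max_dmag and j not in used:
                  matches.append((i, j))
                  used.add(j)
          return matches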

  6. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets, with their 20-year-old Minuteman batteries, performed flawlessly. All 24 lamps worked. All ten on-board high-speed (400 fps) 16 mm cameras were recovered with good-resolution film data.

  7. Pulse Based Time-of-Flight Range Sensing.

    PubMed

    Sarbolandi, Hamed; Plack, Markus; Kolb, Andreas

    2018-05-23

    Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation.
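
    The gate response mentioned above underlies the usual pulse-based range estimate. Under a common idealization (assumed here for illustration; the Hamamatsu sensor's actual gating is detailed in the paper) of a rectangular light pulse of width T_p and two adjacent integration gates collecting charges Q_1 and Q_2, the range d follows from the charge ratio:

      \[
      d \;=\; \frac{c\,T_p}{2}\,\frac{Q_2}{Q_1 + Q_2}
      \]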

  8. High frequency modal identification on noisy high-speed camera data

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-01-01

    Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is difficult to measure, as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, a hybrid accelerometer/high-speed camera mode shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.

  9. Data indicating temperature response of Ti-6Al-4V thin-walled structure during its additive manufacture via Laser Engineered Net Shaping.

    PubMed

    Marshall, Garrett J; Thompson, Scott M; Shamsaei, Nima

    2016-06-01

    An OPTOMEC Laser Engineered Net Shaping (LENS™) 750 system was retrofitted with a melt pool pyrometer and in-chamber infrared (IR) camera for nondestructive thermal inspection of the blown-powder, direct laser deposition (DLD) process. Data indicative of temperature and heat transfer within the melt pool and heat affected zone atop a thin-walled structure of Ti-6Al-4V during its additive manufacture are provided. Melt pool temperature data were collected via the dual-wavelength pyrometer while the dynamic, bulk part temperature distribution was collected using the IR camera. Such data are provided in Comma Separated Values (CSV) file format, containing a 752×480 matrix and a 320×240 matrix of temperatures corresponding to individual pixels of the pyrometer and IR camera, respectively. The IR camera and pyrometer temperature data are provided in blackbody-calibrated, raw forms. Provided thermal data can aid in generating and refining process-property-performance relationships between laser manufacturing and its fabricated materials.

  10. Data indicating temperature response of Ti–6Al–4V thin-walled structure during its additive manufacture via Laser Engineered Net Shaping

    PubMed Central

    Marshall, Garrett J.; Thompson, Scott M.; Shamsaei, Nima

    2016-01-01

    An OPTOMEC Laser Engineered Net Shaping (LENS™) 750 system was retrofitted with a melt pool pyrometer and in-chamber infrared (IR) camera for nondestructive thermal inspection of the blown-powder, direct laser deposition (DLD) process. Data indicative of temperature and heat transfer within the melt pool and heat affected zone atop a thin-walled structure of Ti–6Al–4V during its additive manufacture are provided. Melt pool temperature data were collected via the dual-wavelength pyrometer while the dynamic, bulk part temperature distribution was collected using the IR camera. Such data are provided in Comma Separated Values (CSV) file format, containing a 752×480 matrix and a 320×240 matrix of temperatures corresponding to individual pixels of the pyrometer and IR camera, respectively. The IR camera and pyrometer temperature data are provided in blackbody-calibrated, raw forms. Provided thermal data can aid in generating and refining process-property-performance relationships between laser manufacturing and its fabricated materials. PMID:27054180

  11. SPECT detectors: the Anger Camera and beyond

    PubMed Central

    Peterson, Todd E.; Furenlid, Lars R.

    2011-01-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous NaI(Tl) scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  12. Alaskan Auroral All-Sky Images on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Stenbaek-Nielsen, H. C.

    1997-01-01

    In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range, about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low light-level all-sky TV camera from damage due to excessive light, we operate only during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.

  13. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed in a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from −0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
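
    As a rough sketch of how such a correction can be assembled, the code below combines a camera-derived relative intensity distribution with the sphere's relative spatial responsivity on a zenith/azimuth grid, referenced to an isotropic source; the discretization and the isotropic reference are simplifying assumptions for illustration, not the paper's exact procedure (which references the standard lamp's distribution).

      import numpy as np

      def spatial_correction_factor(intensity, responsivity, theta):
          """Discretized spatial non-uniformity correction for an integrating sphere.

          intensity    : (T, P) relative intensity I(theta, phi) of the test lamp
          responsivity : (T, P) relative spatial responsivity K(theta, phi) of the sphere
          theta        : (T,) zenith angles in radians (solid-angle weighting)
          Reference here is an isotropic source; a real calibration would use
          the angular distribution of the reference standard lamp instead.
          """
          w = np.sin(theta)[:, None]                      # solid-angle weights
          resp_test = (intensity * responsivity * w).sum() / (intensity * w).sum()
          resp_ref = (responsivity * w).sum() / w.sum()   # isotropic reference
          return resp_ref / resp_test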

  14. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far-infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec; the system consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  15. Chandra's Ultimate Angular Resolution: Studies of the HRC-I Point Spread Function

    NASA Astrophysics Data System (ADS)

    Juda, Michael; Karovska, M.

    2010-03-01

    The Chandra High Resolution Camera (HRC) should provide an ideal imaging match to the High-Resolution Mirror Assembly (HRMA). The laboratory-measured intrinsic resolution of the HRC is 20 microns FWHM. HRC event positions are determined via a centroiding method rather than by using discrete pixels. This event position reconstruction method and any non-ideal performance of the detector electronics can introduce distortions in event locations that, when combined with spacecraft dither, produce artifacts in source images. We compare ray-traces of the HRMA response to "on-axis" observations of AR Lac and Capella as they move through their dither patterns to images produced from filtered event lists to characterize the effective intrinsic PSF of the HRC-I. A two-dimensional Gaussian, which is often used to represent the detector response, is NOT a good representation of the intrinsic PSF of the HRC-I; the actual PSF has a sharper peak and additional structure which will be discussed. This work was supported under NASA contract NAS8-03060.

  16. Absolute calibration of the OMEGA streaked optical pyrometer for temperature measurements of compressed materials

    DOE PAGES

    Gregor, M. C.; Boni, R.; Sorce, A.; ...

    2016-11-29

    Experiments in high-energy-density physics often use optical pyrometry to determine temperatures of dynamically compressed materials. In combination with simultaneous shock-velocity and optical-reflectivity measurements using velocity interferometry, these experiments provide accurate equation-of-state data at extreme pressures (P > 1 Mbar) and temperatures (T > 0.5 eV). This paper reports on the absolute calibration of the streaked optical pyrometer (SOP) at the Omega Laser Facility. The wavelength-dependent system response was determined by measuring the optical emission from a National Institute of Standards and Technology–traceable tungsten-filament lamp through various narrowband (40 nm-wide) filters. The integrated signal over the SOP's ~250-nm operating range is then related to that of a blackbody radiator using the calibrated response. We present a simple closed-form equation for the brightness temperature as a function of streak-camera signal derived from this calibration. As a result, error estimates indicate that brightness temperature can be inferred to a precision of <5%.
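
    The paper's exact closed form is derived from its specific calibration; a generic narrowband Planck inversion of the same type relates the background-subtracted streak-camera signal S to a brightness temperature via a system constant A absorbed from the lamp calibration, with λ_0 the effective wavelength of the band (notation assumed here for illustration):

      \[
      T_B \;=\; \frac{hc}{\lambda_0 k_B} \Big/ \ln\!\Big(1 + \frac{A}{S}\Big)
      \]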

  17. Absolute calibration of the OMEGA streaked optical pyrometer for temperature measurements of compressed materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregor, M. C.; Boni, R.; Sorce, A.

    Experiments in high-energy-density physics often use optical pyrometry to determine temperatures of dynamically compressed materials. In combination with simultaneous shock-velocity and optical-reflectivity measurements using velocity interferometry, these experiments provide accurate equation-of-state data at extreme pressures (P > 1 Mbar) and temperatures (T > 0.5 eV). This paper reports on the absolute calibration of the streaked optical pyrometer (SOP) at the Omega Laser Facility. The wavelength-dependent system response was determined by measuring the optical emission from a National Institute of Standards and Technology–traceable tungsten-filament lamp through various narrowband (40 nm-wide) filters. The integrated signal over the SOP's ~250-nm operating range is then related to that of a blackbody radiator using the calibrated response. We present a simple closed-form equation for the brightness temperature as a function of streak-camera signal derived from this calibration. As a result, error estimates indicate that brightness temperature can be inferred to a precision of <5%.

  18. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokurei, Shogo; Morishita, Junji

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to differences in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors’ method based on the use of a commercially available color camera is useful to evaluate and understand the display performances of both monochrome and color LCDs in radiology departments.
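
    The weighting-factor (WF) conversion described above amounts to a linear fit of raw RGB camera signals to photometer-measured luminance. A minimal sketch under that reading is below; the function names and the least-squares fit over a set of test patches are assumptions for illustration, not the authors' exact procedure.

      import numpy as np

      def fit_weighting_factors(rgb_patches, luminance):
          """Least-squares weights w so that w_R*R + w_G*G + w_B*B ~ luminance.

          rgb_patches : (N, 3) mean raw camera RGB signals for N test patterns
          luminance   : (N,) luminance of the same patterns from a photometer (cd/m^2)
          """
          w, *_ = np.linalg.lstsq(rgb_patches, luminance, rcond=None)
          return w

      def to_gray(rgb_image, w):
          """Convert an (H, W, 3) raw RGB image to a luminance-scaled gray image."""
          return rgb_image.astype(np.float64) @ w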

  19. Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site

    PubMed Central

    Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří

    2017-01-01

    We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75°, and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466

  20. Characterization of a smartphone camera's response to ultraviolet A radiation.

    PubMed

    Igoe, Damien; Parisi, Alfio; Carter, Brad

    2013-01-01

    As part of a wider study into the use of smartphones as solar ultraviolet radiation monitors, this article characterizes the ultraviolet A (UVA; 320-400 nm) response of a consumer complementary metal oxide semiconductor (CMOS)-based smartphone image sensor in a controlled laboratory environment. The CMOS image sensor in the camera possesses inherent sensitivity to UVA, and despite the attenuation due to the lens and the neutral density and wavelength-specific bandpass filters, the measured UVA irradiances relative to the incident irradiances range from 0.0065% at 380 nm to 0.0051% at 340 nm. In addition, the sensor demonstrates a predictable response to low-intensity discrete UVA stimuli that can be modelled using the ratio of recorded digital values to the incident UVA irradiance for a given automatic exposure time, resulting in measurement errors that are typically less than 5%. Our results support the idea that smartphones can be used for scientific monitoring of UVA radiation. © 2012 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2012 The American Society of Photobiology.
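
    The response model described (a ratio of recorded digital values to incident UVA irradiance at a fixed auto-exposure time) is linear, so calibration and inversion reduce to a one-parameter fit. A minimal sketch, with illustrative function names not taken from the article:

      import numpy as np

      def fit_uva_response(dn, irradiance):
          """Fit k in DN = k * E for one band and a fixed auto-exposure time.

          dn         : (N,) recorded digital values for N calibration exposures
          irradiance : (N,) incident UVA irradiances (e.g., W/m^2) for the same exposures
          """
          return np.sum(dn * irradiance) / np.sum(irradiance ** 2)  # LSQ through origin

      def estimate_irradiance(dn, k):
          """Invert the linear model to estimate UVA irradiance from digital values."""
          return np.asarray(dn, dtype=np.float64) / k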

  1. International Space Station Data Collection for Disaster Response

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Evans, Cynthia A.

    2015-01-01

    Remotely sensed data acquired by orbital sensor systems has emerged as a vital tool to identify the extent of damage resulting from a natural disaster, as well as providing near-real time mapping support to response efforts on the ground and humanitarian aid efforts. The International Space Station (ISS) is a unique terrestrial remote sensing platform for acquiring disaster response imagery. Unlike automated remote-sensing platforms it has a human crew; is equipped with both internal and externally-mounted remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth. As such, it provides a useful complement to autonomous sensor systems in higher altitude polar orbits. NASA remote sensing assets on the station began collecting International Disaster Charter (IDC) response data in May 2012. The initial NASA ISS sensor systems responding to IDC activations included the ISS Agricultural Camera (ISSAC), mounted in the Window Observational Research Facility (WORF); the Crew Earth Observations (CEO) Facility, where the crew collects imagery using off-the-shelf handheld digital cameras; and the Hyperspectral Imager for the Coastal Ocean (HICO), a visible to near-infrared system mounted externally on the Japan Experiment Module Exposed Facility. The ISSAC completed its primary mission in January 2013. It was replaced by the very high resolution ISS SERVIR Environmental Research and Visualization System (ISERV) Pathfinder, a visible-wavelength digital camera, telescope, and pointing system. Since the start of IDC response in 2012 there have been 108 IDC activations; NASA sensor systems have collected data for thirty-two of these events. Of the successful data collections, eight involved two or more ISS sensor systems responding to the same event. Data has also been collected by International Partners in response to natural disasters, most notably JAXA and Roscosmos/Energia through the Urugan program.

  2. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane, and a multi-element ultrasound transducer array captures the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware systems, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, each several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  3. Capturing the Initiation and Spatial Variability of Runoff on Soils Affected by Wildfire

    NASA Astrophysics Data System (ADS)

    Martin, D. A.; Wickert, A. D.; Moody, J. A.

    2011-12-01

    Rainfall after wildfire often leads to intense runoff and erosion, since fire removes ground cover that impedes overland flow and water is unable to efficiently infiltrate into the fire-affected soils. In order to understand the relation between rainfall, infiltration, and runoff, we modified a camera to be triggered by a rain gage to take time-lapse photographs of the ground surface every 10 seconds until the rain stops. This camera allows us to observe directly the patterns of ground surface ponding, the initiation of overland flow, and erosion/deposition during single rainfall events. The camera was deployed on a hillslope (average slope = 23 degrees) that was severely burned by the 2010 Fourmile Canyon Fire near Boulder, Colorado. The camera's field of view is approximately 3 m2. We integrate the photographs with rainfall and overland flow measurements to determine thresholds for the initiation of overland flow and erosion. We have recorded the spatial variability of wetted patches of ground and the connection of these patches together to initiate overland flow. To date we have recorded images for rain storms with 30-minute maximum intensities ranging from 5 mm/h (our threshold to trigger continuous photographs) to 32 mm/h. In the near future we will update the camera's control system to 1) include a clock to enable time-lapse photographs at a lower frequency in addition to the event-triggered images, and 2) add a radio to allow the camera to be triggered remotely. Radio communication will provide a means of starting the camera in response to non-local events, allowing us to capture images or video of flash flood surge fronts and debris flows, and to synchronize the operations of multiple cameras in the field. Schematics and instructions to build this camera station, which can be used to take either photos or video, are open-source licensed and are available online at http://instaar.colorado.edu/~wickert/atvis. It is our hope that this tool can be used by other researchers to better understand processes in burned watersheds and other sensitive areas that are likely to respond rapidly to rainfall.

  4. Optical analysis of electro-optical systems by MTF calculus

    NASA Astrophysics Data System (ADS)

    Barbarini, Elisa Signoreto; Dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fátima Maria Mitsue; Castro Neto, Jarbas C.; Rodrigues, Evandro Luís Linhari

    2011-08-01

    One of the most widely used methods for performance analysis of an optical system is the determination of the Modulation Transfer Function (MTF). The MTF represents a quantitative and direct measure of image quality and, besides being an objective test, it can be used on concatenated optical systems. This paper presents the application of software called SMTF (software modulation transfer function), built on C++ and OpenCV, for MTF calculation on electro-optical systems. Through this technique, it is possible to develop a specific method to measure the real-time performance of a digital fundus camera, an infrared sensor and an ophthalmological surgery microscope. Each optical instrument mentioned has a particular device to measure the MTF response, which is under development. The MTF information then assists the analysis of optical system alignment and also defines the resolution limit from the MTF graph. The results obtained from the implemented software are compared with the theoretical MTF curves of the analyzed systems.
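
    As a sketch of the core computation such software performs, the snippet below derives an MTF from a measured edge spread function, slanted-edge style: differentiate to obtain the line spread function, window it, and take the normalized Fourier magnitude. The windowing choice and normalization are illustrative assumptions, not SMTF's documented pipeline.

      import numpy as np

      def mtf_from_esf(esf, oversample=1):
          """MTF from a 1-D edge spread function (slanted-edge style).

          esf        : edge profile sampled across the edge (already binned/oversampled)
          oversample : oversampling factor used when binning the edge profile
          Returns (frequency in cycles/pixel, MTF normalized to 1 at DC).
          """
          lsf = np.gradient(esf)                 # line spread function
          lsf = lsf * np.hanning(lsf.size)       # taper to reduce noise leakage
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]                          # normalize to DC
          freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
          return freq, mtf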

  5. Astronaut Owen Garriott - Test Subject - Human Vestibular Function Experiment

    NASA Image and Video Library

    1973-08-09

    S73-34171 (9 Aug. 1973) --- Scientist-astronaut Owen K. Garriott, Skylab 3 science pilot, serves as test subject for the Skylab "Human Vestibular Function" M131 Experiment, as seen in this photographic reproduction taken from a television transmission made by a color TV camera aboard the Skylab space station in Earth orbit. The objectives of the Skylab M131 experiment are to obtain data pertinent to establishing the validity of measurements of specific behavioral/physiological responses influenced by vestibular activity under one-g and zero-g conditions; to determine man's adaptability to unusual vestibular conditions and predict the habitability of future spacecraft conditions involving reduced gravity and Coriolis forces; and to measure the accuracy and variability of man's judgment of spatial coordinates based on atypical gravity receptor cues and inadequate visual cues. Dr. Garriott is seated in the experiment's litter chair, which can rotate the test subject at a predetermined rotational velocity or a programmed acceleration/deceleration profile. Photo credit: NASA

  6. Mechanical and Functional Properties of Nickel Titanium Adhesively Bonded Joints

    NASA Astrophysics Data System (ADS)

    Niccoli, F.; Alfano, M.; Bruno, L.; Furgiuele, F.; Maletta, C.

    2014-07-01

    In this study, adhesive joints made up of commercial NiTi sheets with shape memory capabilities are analyzed. Suitable surface pre-treatments, i.e., degreasing, sandblasting, and chemical etching, are first compared in terms of surface roughness, surface energy, and substrate thinning. Results indicate that chemical etching induces marked substrate thinning without substantial gains in surface roughness and free energy. Therefore, adhesive joints with degreased and sandblasted substrates were prepared and tested under both static and cyclic conditions, and damage development within the adhesive layer was monitored in situ using a CCD camera. Sandblasted specimens have a significantly higher static mechanical strength than degreased ones, although they fail in an essentially similar fashion, i.e., formation of microcracks followed by decohesion along the adhesive/substrate interface. In addition, the joints show a good functional response, with almost complete shape memory recovery after thermo-mechanical cycling, i.e., only a small accumulation of residual deformation occurs. The present results show that adhesive bonding is a viable joining technique for NiTi alloys.

  7. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for the acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with the acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.

  8. Invisible marker based augmented reality system

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Park, Jong-Il

    2005-07-01

    Augmented reality (AR) has recently gained significant attention. Previous AR techniques usually need a fiducial marker with known geometry, or objects whose structure can be easily estimated, such as a cube. Placing a marker in the workspace of the user can be intrusive. To overcome this limitation, we present an AR system using invisible markers which are created/drawn with an infrared (IR) fluorescent pen. Two cameras are used: an IR camera and a visible camera, positioned on either side of a cold mirror so that their optical centers coincide with each other. We track the invisible markers using the IR camera and visualize AR in the view of the visible camera. Additional algorithms are employed so that the system performs reliably against cluttered backgrounds. Experimental results are given to demonstrate the viability of the proposed system. As an application of the proposed system, the invisible marker can act as a Vision-Based Identity and Geometry (VBIG) tag, which can significantly extend the functionality of RFID. The invisible tag is the same as RFID in that it is not perceivable, while more powerful in that the tag information can be presented to the user by direct projection using a mobile projector or by visualizing AR on the screen of a mobile PDA.

  9. A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Lesh, J.

    1998-01-01

    ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS (Active Pixel Sensor)-based focal-plane array is used to perform both the acquisition and tracking (for laser communication) and science imaging functions.

  10. Speed enforcement camera systems operational guidelines

    DOT National Transportation Integrated Search

    2008-03-01

    The ASE guidelines are intended to serve program managers, administrators, law enforcement, traffic engineers, program evaluators, and other individuals responsible for the strategic vision and daily op-erations of the program. The guidelines are wri...

  11. Improving models to predict phenological responses to global change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, Andrew D.

    2015-11-25

    The term phenology describes both the seasonal rhythms of plants and animals, and the study of these rhythms. Plant phenological processes, including, for example, when leaves emerge in the spring and change color in the autumn, are highly responsive to variation in weather (e.g. a warm vs. cold spring) as well as longer-term changes in climate (e.g. warming trends and changes in the timing and amount of rainfall). We conducted a study to investigate the phenological response of northern peatland communities to global change. Field work was conducted at the SPRUCE experiment in northern Minnesota, where we installed 10 digital cameras. Imagery from the cameras is being used to track shifts in plant phenology driven by elevated carbon dioxide and elevated temperature in the different SPRUCE experimental treatments. Camera imagery and derived products (“greenness”) is being posted in near-real time on a publicly available web page (http://phenocam.sr.unh.edu/webcam/gallery/). The images will provide a permanent visual record of the progression of the experiment over the next 10 years. Integrated with other measurements collected as part of the SPRUCE program, this study is providing insight into the degree to which phenology may mediate future shifts in carbon uptake and storage by peatland ecosystems. In the future, these data will be used to develop improved models of vegetation phenology, which will be tested against ground observations collected by a local collaborator.

  12. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  13. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
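
    The MMOG formula is the paper's own; the Vora Measure of Goodness it generalizes is commonly written with orthogonal projectors onto the spaces spanned by the target sensitivities V and the filter set F, ν = trace(P_V P_F)/rank(V). A sketch under that form (variable names illustrative):

      import numpy as np

      def projector(A):
          """Orthogonal projector onto the column space of A (assumed full column rank)."""
          q, _ = np.linalg.qr(A)
          return q @ q.T

      def vora_mog(V, F):
          """Vora's Measure of Goodness between target sensitivities V and filters F.

          V : (n_wavelengths, k) target sensitivity set (e.g., CIE matching functions)
          F : (n_wavelengths, m) spectral sensitivities of the candidate filter set
          Returns a value in [0, 1]; 1 means F spans the space of V.
          """
          k = np.linalg.matrix_rank(V)
          return float(np.trace(projector(V) @ projector(F))) / k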

  14. Bubbles are responsive materials interesting for nonequilibrium physics

    NASA Astrophysics Data System (ADS)

    Andreeva, Daria; Granick, Steve

    Understanding the nature and conditions of non-equilibrium transformations of bubbles, droplets, polymersomes and vesicles in a gradient field is a fascinating question raised by dissipative systems. We ask: how can one establish dynamic control of useful characteristics, for example dynamic control of morphology and composition modulation in soft matter? A possible answer is to develop a new generation of dynamic impactors that can trigger spatiotemporal oscillations of structures and functions. We aim to apply an acoustic field to develop temperature and pressure oscillations over a microscale area. We demonstrate the striking dynamic behavior of gas-filled bubbles in a pressure gradient field using a unique technique combining optical imaging, high-intensity ultrasound and a high-speed camera. We find that pressure oscillations trigger continuous phase transformations that are considered to be impossible in physical systems.

  15. SKYLAB (SL)-3 - ASTRONAUT GARRIOTT, OWEN

    NASA Image and Video Library

    1973-08-09

    S73-32113 (9 Aug. 1973) --- Scientist-astronaut Owen K. Garriott, Skylab 3 science pilot, serves as test subject for the Skylab "Human Vestibular Function" M131 Experiment, as seen in this photographic reproduction taken from a television transmission made by a color TV camera aboard the Skylab space station in Earth orbit. The objectives of the Skylab M131 experiment are to obtain data pertinent to establishing the validity of measurements of specific behavioral/physiological responses influenced by vestibular activity under one-g and zero-g conditions; to determine man's adaptability to unusual vestibular conditions and predict the habitability of future spacecraft conditions involving reduced gravity and Coriolis forces; and to measure the accuracy and variability of man's judgment of spatial coordinates based on atypical gravity receptor cues and inadequate visual cues. Photo credit: NASA

  16. Miniaturization of dielectric liquid microlens in package

    PubMed Central

    Yang, Chih-Cheng; Tsai, C. Gary; Yeh, J. Andrew

    2010-01-01

    This study presents packaged microscale liquid lenses actuated with liquid droplets of 300–700 μm in diameter using dielectric force manipulation. The liquid microlens demonstrated focal-length tunability in a plastic package. The focal length of the liquid lens with a lens droplet of 500 μm in diameter is shortened from 4.4 to 2.2 mm as the applied voltage changes from 0 to 79 Vrms. Dynamic responses analyzed using a 2000 frames/s high-speed motion camera show that the advancing and receding times are 90 and 60 ms, respectively. The size effect of the dielectric liquid microlens is characterized for lens droplets of 300–700 μm in diameter in terms of focal length. PMID:21267438

  17. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes to improved calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
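
    As a rough sketch of the kind of optimization described (weighted point-to-line distances with a penalizing function for outliers), the code below uses a Huber loss inside a generic nonlinear least-squares solver; the Euler-angle parameterization, loss choice, and scale are illustrative assumptions, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def point_line_residuals(params, lidar_pts, lines, weights, K):
          """Weighted distances between projected LiDAR features and image lines.

          params    : 6-vector (rx, ry, rz, tx, ty, tz) of the extrinsic transform
          lidar_pts : (N, 3) LiDAR feature points (target edges/centerlines)
          lines     : (N, 3) image lines ax + by + c = 0, normalized so a^2 + b^2 = 1
          weights   : (N,) per-correspondence confidence weights
          K         : (3, 3) camera intrinsic matrix
          """
          R = Rotation.from_euler("xyz", params[:3]).as_matrix()
          t = params[3:]
          cam = lidar_pts @ R.T + t                  # LiDAR frame -> camera frame
          uv = cam @ K.T
          uv = uv[:, :2] / uv[:, 2:3]                # perspective projection
          uv1 = np.hstack([uv, np.ones((len(uv), 1))])
          d = np.sum(uv1 * lines, axis=1)            # signed point-to-line distance
          return weights * d

      # Robust estimate: the Huber loss down-weights outlier correspondences.
      # sol = least_squares(point_line_residuals, np.zeros(6), loss="huber",
      #                     f_scale=2.0, args=(lidar_pts, lines, weights, K))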

  18. Time-resolved optical measurements of the post-detonation combustion of aluminized explosives

    NASA Astrophysics Data System (ADS)

    Carney, Joel R.; Miller, J. Scott; Gump, Jared C.; Pangilinan, G. I.

    2006-06-01

    The dynamic observation and characterization of light emission following the detonation and subsequent combustion of an aluminized explosive is described. The temporal, spatial, and spectral specificity of the light emission are achieved using a combination of optical diagnostics. Aluminum and aluminum monoxide emission peaks are monitored as a function of time and space using streak camera based spectroscopy in a number of light collection configurations. Peak areas of selected aluminum containing species are tracked as a function of time to ascertain the relative kinetics (growth and decay of emitting species) during the energetic event. At the chosen streak camera sensitivity, aluminum emission is observed for 10 μs following the detonation of a confined 20 g charge of PBXN-113, while aluminum monoxide emission persists longer than 20 μs. A broadband optical emission gauge, shock velocity gauge, and fast digital framing camera are used as supplemental optical diagnostics. In-line, collimated detection is determined to be the optimum light collection geometry because it is independent of distance between the optics and the explosive charge. The chosen optical configuration also promotes a constant cylindrical collection volume that should facilitate future modeling efforts.

  19. Addressing challenges of modulation transfer function measurement with fisheye lens cameras

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura

    2015-03-01

    Modulation transfer function (MTF) is a well defined and accepted method of measuring image sharpness. The slanted edge test, as defined in ISO12233, is a standard method of calculating MTF, and is widely used for lens alignment and auto-focus algorithm verification. However, there are a number of challenges which should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, planarity of the optical plane is difficult to achieve in fisheye lenses. It is therefore critical to have the ability to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles, depending on the location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees; outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.

  20. OSMOSIS: a new joint laboratory between SOFRADIR and ONERA for the development of advanced DDCA with integrated optics

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Matallah, Noura; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Jenouvrier, Pierre; Mallet, Eric; Reibel, Yann

    2014-06-01

    Today, both military and civilian applications require miniaturized optical systems in order to give an imagery function to vehicles with small payload capacity. After the development of megapixel focal plane arrays (FPA) with micro-sized pixels, this miniaturization will become feasible with the integration of optical functions in the detector area. In the field of cooled infrared imaging systems, the detector area is the Detector-Dewar-Cooler Assembly (DDCA). SOFRADIR and ONERA have launched a new research and innovation partnership, called OSMOSIS, to develop disruptive technologies for DDCA to improve the performance and compactness of optronic systems. With this collaboration, we will break down the technological barriers of DDCA, a sealed and cooled environment dedicated to the infrared detectors, to explore Dewar-level integration of optics. This technological breakthrough will bring more compact multipurpose thermal imaging products, as well as new thermal capabilities such as 3D imagery or multispectral imagery. Previous developments will be recalled (SOIE and FISBI cameras) and new developments will be presented. In particular, we will focus on a dual-band MWIR-LWIR camera and a multichannel camera.

  1. Driving behaviour responses to a moose encounter, automatic speed camera, wildlife warning sign and radio message determined in a factorial simulator study.

    PubMed

    Jägerbrand, Annika K; Antonson, Hans

    2016-01-01

    In a driving simulator study, driving behaviour responses (speed and deceleration) to encountering a moose, an automatic speed camera, a wildlife warning sign and a radio message, with or without a wildlife fence and in dense forest or open landscape, were analysed. The study consisted of a factorial experiment that examined responses to factors singly and in combination over 9-km road stretches driven eight times by 25 participants (10 men, 15 women). The aims were to: determine the most effective animal-vehicle collision (AVC) countermeasures in reducing vehicle speed and test whether these are more effective in combination; identify the most effective countermeasures on encountering moose; and determine whether the driving responses to AVC countermeasures are affected by the presence of wildlife fences and landscape characteristics. The AVC countermeasures that proved most effective in reducing vehicle speed were the wildlife warning sign and the radio message, while automatic speed cameras had a speed-increasing effect. There were no statistically significant interactions between different countermeasures and moose encounters. However, there was a tendency for a stronger speed-reducing effect from the radio message warning, and from a combination of a radio message and wildlife warning sign, in velocity profiles covering longer driving distances than those used in the statistical tests. Encountering a moose during the drive had the overall strongest speed-reducing effect and gave the strongest deceleration, indicating that moose decoys or moose artwork might be useful as speed-reducing countermeasures. Furthermore, drivers reduced speed earlier on encountering a moose in open landscape and had lower velocity when driving past it. The presence of a wildlife fence on encountering the moose resulted in smaller deceleration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

    Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities, such as sensitivity and latitude, of color films are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  3. Printed products for digital cameras and mobile devices

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  4. Rapid and economical data acquisition in ultrafast frequency-resolved spectroscopy using choppers and a microcontroller.

    PubMed

    Guo, Liang; Monahan, Daniele M; Fleming, Graham

    2016-08-08

    Spectrometers and cameras are used in ultrafast spectroscopy to achieve high resolution in both time and frequency domains. Frequency-resolved signals from the camera pixels cannot be processed by common lock-in amplifiers, which have only a limited number of input channels. Here we demonstrate a rapid and economical method that achieves the function of a lock-in amplifier using mechanical choppers and a programmable microcontroller. We demonstrate the method's effectiveness by performing a frequency-resolved pump-probe measurement on the dye Nile Blue in solution.
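
    For a two-state chopper, the per-pixel lock-in function reduces to sorting frames by the chopper phase recorded by the microcontroller and differencing pump-on and pump-off averages. A minimal off-line sketch of that demodulation (the array layout and the differential-absorbance convention are assumptions for illustration, not the authors' firmware):

      import numpy as np

      def chopper_demodulate(frames, chopper_state):
          """Per-pixel pump-on/pump-off demodulation of camera frames.

          frames        : (N, n_pixels) spectra or images, one row per laser shot
          chopper_state : (N,) booleans from the microcontroller, True = pump on
          Returns the differential absorbance dA per pixel.
          """
          frames = np.asarray(frames, dtype=np.float64)
          on = frames[chopper_state].mean(axis=0)
          off = frames[~chopper_state].mean(axis=0)
          return -np.log10(on / off)        # standard pump-probe dA convention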

  5. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  6. Faraday Accelerator With Radio-Frequency Assisted Discharge (FARAD): A New Electrodeless Concept for Plasma Propulsion

    DTIC Science & Technology

    2008-10-01

    which acts as a transformer with mutual inductance M. The value of M is a function of the current sheet position c. ... at an angle where the "film plane" of the camera is parallel to the plane ... Figure 3.3: Idealized ... surface from time-integrated photographs obtained with a camera whose film plane is not parallel to the cone's axis of symmetry. Due to these...

  7. A novel fully integrated handheld gamma camera

    NASA Astrophysics Data System (ADS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to gather in the same device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be combined with surgical navigation systems.

  8. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases holds: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise ratio as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and application of high-speed X-ray camera technology.
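
    As a rough illustration of why sparse sampling can suffice, the sketch below recovers an image from a random subset of pixel reads by assuming sparsity in the DCT basis and applying iterative soft thresholding (ISTA). It is a generic compressed-sensing toy, not the ROPS circuit or the authors' reconstruction pipeline; the sampling fraction and penalty weight are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ista_recover(y, mask, lam=0.02, n_iter=200):
    """Recover an image from randomly sampled pixels (generic CS toy).

    y    : observed image, with unread pixels set to zero
    mask : boolean array, True where a pixel was actually read out
    Solves min_x 0.5*||mask*(x - y)||^2 + lam*||DCT(x)||_1 by iterative
    soft thresholding, i.e. assumes the scene is sparse in the DCT basis.
    """
    x = y.copy()
    for _ in range(n_iter):
        x = x - mask * (x - y)                  # gradient step (step size 1)
        c = dctn(x, norm='ortho')               # move to the sparsifying basis
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
        x = idctn(c, norm='ortho')
    return x

# Illustrative use: read out 30% of the pixels of a smooth (DCT-sparse) scene.
rng = np.random.default_rng(1)
i, j = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
scene = np.cos(i / 10.0) * np.cos(j / 7.0)
mask = rng.random(scene.shape) < 0.3
rec = ista_recover(scene * mask, mask)
print(f"relative error: {np.linalg.norm(rec - scene) / np.linalg.norm(scene):.3f}")
```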

  9. Lunar orbital photographic planning charts for candidate Apollo J-missions

    NASA Technical Reports Server (NTRS)

    Hickson, P. J.; Piotrowski, W. L.

    1971-01-01

    A technique is presented for minimizing Mapping Camera film usage by reducing redundant coverage while meeting the desired sidelap of greater than or equal to 55%. The technique uses the normal groundtrack separation, determined as a function of the number of revolutions between the respective tracks, of the initial and final nodal azimuths (or orbital inclination), and of the lunar latitude. The technique is also applicable to planning Panoramic Camera photography such that photographic contiguity is attained but redundant coverage is minimized. Graphs are included for planning mapping camera (MC) and panoramic camera (PC) photographic passes for a specific mission (i.e., specific groundtracks) to Descartes (Apollo 16), for specific missions to potential Apollo 17 sites such as Alphonsus, Proclus, Gassendi, Davy, and Tycho, and for a potential Apollo orbit-only mission with a nodal azimuth of 85 deg. Graphs are also included for determining the maximum number of revolutions which can elapse between successive MC and PC passes, for greater than or equal to 55% sidelap and rectified contiguity respectively, for nodal azimuths between 5 deg and 85 deg.

  10. Thermal imaging as a smartphone application: exploring and implementing a new concept

    NASA Astrophysics Data System (ADS)

    Yanai, Omer

    2014-06-01

    Today's world is going mobile. Smartphone devices have become an important part of everyday life for billions of people around the globe. Thermal imaging cameras have been around for half a century and are now making their way into our daily lives. Originally built for military applications, thermal cameras are starting to be considered for personal use, enabling enhanced vision and temperature mapping for different groups of professional users. Through a revolutionary concept that turns smartphones into fully functional thermal cameras, we have explored how these two worlds can converge by utilizing the best of each technology. We present the thought process, design considerations and outcome of our development effort, which resulted in a low-power, high-resolution, lightweight USB thermal imaging device that turns Android smartphones into thermal cameras. We discuss the technological challenges we faced during development of the product and the system design decisions taken during implementation, and we share some insights gained along the way. Finally, we discuss the opportunities that this innovative technology brings to the market.

  11. Refocusing distance of a standard plenoptic camera.

    PubMed

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography have enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfactorily predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it is shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% from an optical design software. The proposed refocusing estimator assists in predicting object distances, for example in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs.
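
    The paper's central idea, that the refocusing distance follows from solving a pair of linear ray equations, reduces in its simplest form to intersecting two lines. The toy function below (hypothetical names; a real plenoptic model would first propagate the rays through the main lens and micro-lens array) illustrates only that final step.

```python
def refocus_distance(m1, b1, m2, b2):
    """Intersect two light rays modeled as linear functions x_i(z) = m_i*z + b_i.

    Returns (z, x): the depth z at which the rays cross and the lateral
    position x of the crossing, i.e. the refocused object plane distance
    in this simplified 2-D picture.
    """
    if m1 == m2:
        raise ValueError("parallel rays do not intersect")
    z = (b2 - b1) / (m1 - m2)
    return z, m1 * z + b1

# Two rays converging roughly 2.4 m in front of the camera:
print(refocus_distance(m1=0.010, b1=-0.012, m2=-0.004, b2=0.022))
```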

  12. An overview of the CILBO spectral observation program

    NASA Astrophysics Data System (ADS)

    Rudawska, R.; Zender, J.; Koschny, D.

    2016-01-01

    Video equipment can easily be fitted with a spectral grating to obtain spectral information from meteors; in recent years, spectroscopic observations of meteors have therefore become quite popular. The Meteor Research Group (MRG) of the European Space Agency has also been working on upgrading the analysis of meteor spectra, operating an image-intensified camera with an objective grating (ICC8). ICC8 is located at the Tenerife station of the double-station camera setup CILBO (Canary Island Long-Baseline Observatory). The pipeline software processes the data with the standard calibration procedure (dark current, flat field, and lens distortion corrections). Using the position of a meteor recorded by the ICC7 camera (zero order), the position of the first-order spectrum is computed as a function of wavelength. Moreover, thanks to the double-station meteor observations carried out by ICC7 (Tenerife) and ICC9 (La Palma), the trajectory and orbit of a meteor are determined. Merged with the simultaneous measurement of the meteor spectrum from ICC8, this allows us to identify the source of the meteoroid. Here, we report preliminary results from a sample of meteor spectra collected by the CILBO ICC8 camera since 2012.

  13. Pancam Mast Assembly on Mars Rover

    NASA Technical Reports Server (NTRS)

    Warden, Robert M.; Cross, Mike; Harvison, Doug

    2004-01-01

    The Pancam Mast Assembly (PMA) for the 2003 Mars Rover is a deployable structure that provides an elevated platform for several cameras. The PMA consists of several mechanisms that enable it to raise the cameras as well as point them in all directions. This paper describes the function and design of the various mechanisms and some test parameters. Designing these mechanisms to operate on the surface of Mars presented several challenges. Typical spacecraft mechanisms must operate in zero gravity and high vacuum, whereas these mechanisms needed to be designed to operate in Martian gravity and atmosphere. Testing conditions were a little easier because the mechanisms are not required to operate in a vacuum. All of the materials are vacuum compatible, but the mechanisms were tested in a dry nitrogen atmosphere at various cold temperatures.

  14. Note: Optics design of a periscope for the KSTAR visible inspection system with mitigated neutron damages on the camera

    NASA Astrophysics Data System (ADS)

    Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung

    2018-06-01

    The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize the damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed and comparable to that of its predecessor, but with a far lower probability of neutron damage to the camera.

  15. A filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.

    1974-01-01

    A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.

  16. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launching process and the space environment, the response capability of a space camera is inevitably attenuated, so an on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. Since stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration methods is about 10%, indicating that the methods can satisfy the requirements of on-orbit calibration.
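
    A heavily simplified view of star-based response calibration is a linear fit of detector counts against the known band irradiances of standard stars. The sketch below shows only that fit, none of the Taylor-expansion or catalog-extrapolation machinery of the paper; all numbers and names are made up for illustration.

```python
import numpy as np

def fit_linear_response(counts, band_irradiance):
    """Least-squares gain/offset linking detector counts to known star
    irradiances, i.e. counts ~ gain * E + offset."""
    A = np.vstack([band_irradiance, np.ones_like(band_irradiance)]).T
    gain, offset = np.linalg.lstsq(A, counts, rcond=None)[0]
    return gain, offset

# Hypothetical standard stars: band irradiance (W/m^2) vs. measured counts
E = np.array([1.0e-13, 2.5e-13, 4.0e-13, 8.0e-13])
counts = np.array([215.0, 529.0, 840.0, 1672.0])
gain, offset = fit_linear_response(counts, E)
print(f"gain = {gain:.3e} counts per W/m^2, offset = {offset:.1f} counts")
```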

  17. Persons with multiple disabilities select environmental stimuli through a smile response monitored via camera-based technology.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'reilly, Mark F; Lang, Russell; Didden, Robert; Bosco, Andrea

    2011-01-01

    To assess whether two persons with multiple disabilities could use smile expressions and new camera-based microswitch technology to select environmental stimuli. Within each session, a computer system provided samples/reminders of preferred and non-preferred stimuli. The camera-based microswitch determined whether the participants produced smile expressions in relation to those samples. If they did, stimuli matching the specific samples to which they responded were presented for 20 seconds. The smile expression could be profitably used by the participants, who managed to select a mean of about 70% and 75%, respectively, of the preferred stimulus opportunities made available by the environment while avoiding almost all the non-preferred stimulus opportunities. Smile expressions (a) might be an effective and rapid means of selecting preferred stimulation and (b) might develop into cognitively more elaborate forms of responding through the learning experience (i.e., their consistent association with positive/reinforcing consequences).

  18. Ensuring long-term stability of infrared camera absolute calibration.

    PubMed

    Kattnig, Alain; Thetas, Sophie; Primot, Jérôme

    2015-07-13

    Absolute calibration of cryogenic 3-5 µm and 8-10 µm infrared cameras is notoriously unstable and thus has to be repeated before actual measurements. Moreover, the signal-to-noise ratio of the imagery is lowered, decreasing its quality. These performance degradations strongly lessen the suitability of infrared imaging. The defects are often blamed on detectors reaching a different "response state" after each return to cryogenic conditions, beyond the detrimental effects of imperfect stray light management. We show here that the detectors are not to blame and that the culprit can also dwell in the proximity electronics. We identify an unexpected source of instability in the initial voltage of the integrating capacitance of the detectors. We then show that this parameter can be easily measured and taken into account. In this way we demonstrate that a one-month-old calibration of a 3-5 µm camera has retained its validity.

  19. High speed movies of turbulence in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, J.L.; Zweben, S.J.; Bose, B.

    2004-10-01

    A high speed (250 kHz), 300 frame charge coupled device camera has been used to image turbulence in the Alcator C-Mod Tokamak. The camera system is described and some of its important characteristics are measured, including time response and uniformity over the field-of-view. The diagnostic has been used in two applications. One uses gas-puff imaging to illuminate the turbulence in the edge/scrape-off-layer region, where D2 gas puffs localize the emission in a plane perpendicular to the magnetic field when viewed by the camera system. The dynamics of the underlying turbulence around and outside the separatrix are detected in this manner. In a second diagnostic application, the light from an injected, ablating, high speed Li pellet is observed radially from the outer midplane, and fast poloidal motion of toroidal striations is seen in the Li+ light well inside the separatrix.

  20. High energy X-ray pinhole imaging at the Z facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, L. Armon; Ampleford, David J.; Coverdale, Christine A.

    A new high photon energy (hv > 15 keV) time-integrated pinhole camera (TIPC) has become available at the Z facility for diagnostic applications. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the target load at the Z facility. The fielding distance is 66 cm and the nominal image magnification is 0.374. Initial experimental results are shown to illustrate the performance of the camera.

  1. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  2. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
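
    One way to picture coded pixel access is frequency-division multiplexing: each selected "agile pixel" is modulated at its own carrier and a single point photodetector records the sum, which an FFT then unmixes. The sketch below is a toy of that principle under invented numbers, not the actual CAOS/DMD implementation.

```python
import numpy as np

fs, n = 1000.0, 4000                       # sample rate (Hz) and sample count
t = np.arange(n) / fs
pix_freqs = [50.0, 120.0, 210.0]           # one carrier per selected pixel
pix_irrad = [1.0, 0.2, 3.5]                # unknown pixel irradiances

# A single point detector sees the sum of all frequency-tagged pixels
# (each modulated between zero and its full irradiance).
detector = sum(I * 0.5 * (1 + np.sin(2 * np.pi * f * t))
               for I, f in zip(pix_irrad, pix_freqs))

# Decoding: the FFT magnitude at each carrier recovers the irradiance
# (carriers were chosen to fall exactly on FFT bins, fs/n = 0.25 Hz).
spec = np.abs(np.fft.rfft(detector))
freqs = np.fft.rfftfreq(n, 1 / fs)
for f, I in zip(pix_freqs, pix_irrad):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz: true {I:.2f}, decoded {4 * spec[k] / n:.2f}")
```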

  3. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  4. High energy X-ray pinhole imaging at the Z facility

    DOE PAGES

    McPherson, L. Armon; Ampleford, David J.; Coverdale, Christine A.; ...

    2016-06-06

    A new high photon energy (hv > 15 keV) time-integrated pinhole camera (TIPC) has become available at the Z facility for diagnostic applications. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the target load at the Z facility. The fielding distance is 66 cm and the nominal image magnification is 0.374. Initial experimental results are shown to illustrate the performance of the camera.

  5. Design Study of the Absorber Detector of a Compton Camera for On-Line Control in Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Richard, M.-H.; Dahoumane, M.; Dauvergne, D.; De Rydt, M.; Dedes, G.; Freud, N.; Krimmer, J.; Letang, J. M.; Lojacono, X.; Maxim, V.; Montarou, G.; Ray, C.; Roellinghoff, F.; Testa, E.; Walenta, A. H.

    2012-10-01

    The goal of this study is to tune the design of the absorber detector of a Compton camera for prompt γ-ray imaging during ion beam therapy. The response of the Compton camera to a photon point source with a realistic energy spectrum (corresponding to the prompt γ-ray spectrum emitted during the carbon irradiation of a water phantom) is studied by means of Geant4 simulations. Our Compton camera consists of a stack of 2 mm thick silicon strip detectors as a scatter detector and of a scintillator plate as an absorber detector. Four scintillators are considered: LYSO, NaI, LaBr3 and BGO. LYSO and BGO appear as the most suitable materials, due to their high photo-electric cross-sections, which leads to a high percentage of fully absorbed photons. Depth-of-interaction measurements are shown to have limited influence on the spatial resolution of the camera. In our case, the thickness which gives the best compromise between a high percentage of photons that are fully absorbed and a low parallax error is about 4 cm for the LYSO detector and 4.5 cm for the BGO detector. The influence of the width of the absorber detector on the spatial resolution is not very pronounced as long as it is lower than 30 cm.

  6. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

    The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing the camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  7. Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes

    NASA Astrophysics Data System (ADS)

    Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.

    2017-12-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic VHE gamma ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice, given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) have emerged as an alternative. This rapidly evolving technology has strong potential to become superior to PMTs in terms of PDE, which would further improve the sensitivity of IACTs, while offering a price reduction per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes, to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We describe the solutions we are exploring in order to balance competitive performance with minimal impact on the overall MAGIC camera design, using ray tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations, considering the properties of the major hardware elements of an IACT.

  8. Face recognition system for set-top box-based intelligent TV.

    PubMed

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-11-18

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions of a viewer's face are detected in an image captured by a camera connected to the STB via low-processing background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server with high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
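
    For readers unfamiliar with the fourth step, a local binary pattern (LBP) encodes each pixel by thresholding its neighbours against it, and faces are then compared via histograms of those codes. The sketch below shows a basic single-level 8-neighbour LBP, not the multi-level variant or the template scheme used in the paper.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern of a grayscale image."""
    g = np.asarray(gray, dtype=float)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the same interior window, one bit per neighbour
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= center).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes, usable for matching
    (e.g. by chi-square or histogram-intersection distance)."""
    hist, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256),
                           density=True)
    return hist
```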

  9. Thermodynamic free-energy minimization for unsupervised fusion of dual-color infrared breast images

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Miao, Lidan; Qi, Hairong

    2006-04-01

    This paper presents the algorithmic details of an unsupervised neural network and an unbiased diagnostic methodology; that is, no lookup table is needed to label the input training data with desired outputs. We deploy the smart algorithm on two satellite-grade infrared (IR) cameras. Although an early malignant tumor must be small in size and cannot be resolved by a single pixel, which images hundreds of cells, these cells reveal themselves physiologically by spontaneously emitting thermal radiation due to the angiogenesis effect of rapid cell growth (from the Greek: vessel generation to increase the tumor blood supply), which, according to physics, shifts the emission toward a shorter-wavelength IR band. Using such exceedingly sensitive IR spectral-band cameras, one can in principle detect whether or not a breast tumor is malignant through a thin blouse in a darkened room. If this protocol turns out to be reliable in a large-scale follow-on Vatican experiment in 2006, which might generate business interest in the nano-engineered manufacture of nano-cameras made of 1-D carbon nanotubes without the traditional liquid-nitrogen coolant of mid-IR cameras, then one could accumulate the probability of any type of malignant tumor at every pixel over time in the comfort of privacy, without religious or other concerns. Such a non-intrusive protocol alone may not provide enough information to make the decision, but the changes tracked over time will surely become significant. This ill-posed inverse heat-source transfer problem can be solved because of the universal constraint of equilibrium physics governing the blackbody Planck radiation distribution, sampled spatio-temporally. Thus, we gather two snapshots with two IR cameras to form a vector X(t) per pixel and invert the matrix-vector equation X = [A]S pixel-by-pixel independently, known as single-pixel blind source separation (BSS). Because the unknown heat transfer matrix, or impulse response function, [A] may vary from the point tumor to its neighborhood, we could not rely on neighborhood statistics, as is done in the popular unsupervised independent component analysis (ICA) approach; instead, we impose the physics equilibrium condition of minimum Helmholtz free energy, H = E - T°S. In the case of a point breast cancer, we can assume the constant ground-state energy E° to be normalized by the benign neighborhood tissue, and the excited state can then be computed by means of a Taylor series expansion in terms of the pixel I/O data. Passive IR imaging can also augment the X-ray mammogram technique to reduce unwanted X-ray exposure during chemotherapy recovery. When the image sequence is animated into a movie and the recovery dynamics are played backward in time, the movie demonstrates the cameras' potential for early detection without suffering the PD = 0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm in the mid-wavelength IR (3-5 μm) and long-wavelength IR (8-12 μm) bands, which proved capable of screening malignant tumors, as demonstrated by the time-reversed animated-movie experiments. By contrast, traditional thermal breast scanning/imaging, known for decades as thermography, was IR spectrum-blind and limited to a single night-vision camera, and its required cool-down waiting period before taking a second look for change detection suffers from too many environmental and personnel variabilities.
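
    The per-pixel inversion X = [A]S described above is, once a mixing matrix is in hand, a 2x2 linear solve at every pixel. The sketch below shows only that final step under an assumed known A; estimating [A] via Helmholtz free-energy minimization is the paper's actual contribution and is not reproduced here.

```python
import numpy as np

def unmix_dual_band(x_mwir, x_lwir, A):
    """Solve the single-pixel BSS equation X = A @ S for every pixel.

    x_mwir, x_lwir : co-registered images from the two IR cameras
    A              : 2x2 heat-transfer/mixing matrix (assumed known here;
                     the paper estimates it by free-energy minimization)
    Returns the two source maps S1, S2.
    """
    X = np.stack([np.ravel(x_mwir), np.ravel(x_lwir)])   # shape (2, Npix)
    S = np.linalg.solve(A, X)                            # all pixels at once
    return S[0].reshape(np.shape(x_mwir)), S[1].reshape(np.shape(x_mwir))

# Illustrative round trip with an invented mixing matrix:
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])
s1, s2 = np.ones((4, 4)), np.zeros((4, 4))
s2[1, 1] = 5.0                                           # point-like source
x1 = A[0, 0] * s1 + A[0, 1] * s2                         # MWIR observation
x2 = A[1, 0] * s1 + A[1, 1] * s2                         # LWIR observation
r1, r2 = unmix_dual_band(x1, x2, A)
print(np.allclose(r1, s1), np.allclose(r2, s2))          # True True
```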

  10. Resolving the shape of a sonoluminescence pulse in sulfuric acid by the use of streak camera.

    PubMed

    Huang, Wei; Chen, Weizhong; Cui, Weicheng

    2009-06-01

    A streak camera is used to measure the shape of sonoluminescence pulses from a cavitation bubble levitated stably in a sulfuric acid solution. The shape and response to an acoustic pressure field of the sonoluminescence pulse in 85% by weight sulfuric acid are qualitatively similar to those in water. However, the pulse width in sulfuric acid is wider than that in water by over one order of magnitude. The width of the sonoluminescence pulse is strongly dependent on the concentration of the sulfuric acid solution, while the skewed distribution of the shape remains unchanged.

  11. Stereo optical guidance system for control of industrial robots

    NASA Technical Reports Server (NTRS)

    Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)

    1992-01-01

    A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.

  12. Soft X-ray and XUV imaging with a charge-coupled device /CCD/-based detector

    NASA Technical Reports Server (NTRS)

    Loter, N. G.; Burstein, P.; Krieger, A.; Ross, D.; Harrison, D.; Michels, D. J.

    1981-01-01

    A soft X-ray/XUV imaging camera which uses a thinned, back-illuminated, all-buried channel RCA CCD for radiation sensing has been built and tested. The camera is a slow-scan device which makes possible frame integration if necessary. The detection characteristics of the device have been tested over the 15-1500 eV range. The response was linear with exposure up to 0.2-0.4 erg/sq cm; saturation occurred at greater exposures. Attention is given to attempts to resolve single photons with energies of 1.5 keV.

  13. Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.

    PubMed

    Huck, F O; Wall, S D

    1976-07-01

    Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.

  14. Micrometeoroid Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Larger Particles

    NASA Technical Reports Server (NTRS)

    Kearsley, A. T.; Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V.; Colaux, J. L.; Ross, D. K.; Anz-Meador, P.; Liou, J. C.; Opiela, J.

    2014-01-01

    The Wide Field and Planetary Camera 2 (WFPC2) was returned from the Hubble Space Telescope (HST) by shuttle mission STS-125 in 2009. In space for 16 years, the surface accumulated hundreds of impact features on the zinc orthotitanate paint, some penetrating through into underlying metal. Larger impacts were seen in photographs taken from within the shuttle orbiter during service missions, with spallation of paint in areas reaching 1.6 cm across, exposing alloy beneath. Here we describe larger impact shapes, the analysis of impactor composition, and the micrometeoroid (MM) types responsible.

  15. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a stray light feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood, it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.
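
    As a sanity check on figures like these, a textbook shot-plus-read-noise model relates signal electrons to SNR. The few lines below use that generic model (not necessarily the LROC team's exact noise budget) together with the WAC's reported 72 e- read noise.

```python
import math

def ccd_snr(signal_e, read_noise_e):
    """Generic detector SNR with Poisson shot noise and Gaussian read noise
    (dark current and other terms neglected): S / sqrt(S + r^2)."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# With 72 e- read noise, an SNR of 140 corresponds to roughly 2.4e4 signal
# electrons, comfortably below the reported 47,200 e- full well:
print(round(ccd_snr(23_900, 72), 1))   # ~140
```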

  16. Design of tangential multi-energy SXR cameras for tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Yamazaki, H.; Delgado-Aparicio, L. F.; Pablant, N.; Hill, K.; Bitter, M.; Takase, Y.; Ono, M.; Stratton, B.

    2017-10-01

    A new synthetic diagnostic capability has been built to study the response of tangential multi-energy soft x-ray pinhole cameras for arbitrary plasma densities (ne, D), temperatures (Te) and ion concentrations (nZ). For tokamaks and future facilities to operate safely in high-pressure long-pulse discharges, it is imperative to address key issues associated with impurity sources, core transport and high-Z impurity accumulation. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (e.g. Te, nZ and ΔZeff). These systems are designed to sample the continuum and line emission from low- to high-Z impurities (e.g. C, O, Al, Si, Ar, Ca, Fe, Ni and Mo) in multiple energy ranges. These x-ray cameras will be installed in the MST RFP, as well as the NSTX-U and DIII-D tokamaks, measuring the radial structure of the photon emissivity with a radial resolution below 1 cm at a 500 Hz frame rate and a photon-energy resolution of 500 eV. The layout and expected response of the new systems will be shown for different plasma conditions and impurity concentrations. The effect of toroidal rotation driving poloidal asymmetries in the core radiation is also addressed for the case of NSTX-U.

  17. Design principles and applications of a cooled CCD camera for electron microscopy.

    PubMed

    Faruqi, A R

    1998-01-01

    Cooled CCD cameras offer a number of advantages in recording electron microscope images with CCDs rather than film, including: immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to that of film, and a great deal of our effort has been spent in designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre optics system in this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator), optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two dimensional crystals of bacteriorhodopsin, from the wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on an on-line basis. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high resolution images.

  18. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application to diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  19. A new radiometric unit of measure to characterize SWIR illumination

    NASA Astrophysics Data System (ADS)

    Richards, A.; Hübner, M.

    2017-05-01

    We propose a new radiometric unit of measure we call the 'swux' to unambiguously characterize scene illumination in the SWIR spectral band between 0.8 μm and 1.8 μm, where most of the ever-increasing number of deployed SWIR cameras (based on standard InGaAs focal plane arrays) are sensitive. Both military and surveillance applications in the SWIR currently suffer from the lack of a standardized SWIR radiometric unit of measure that can be used to definitively compare or predict SWIR camera performance with respect to SNR and range metrics. We propose a unit comparable to the photometric illuminance lux unit; see Ref. [1]. The lack of a SWIR radiometric unit becomes even more critical if one uses lux levels to describe SWIR sensor performance at twilight or even in low-light conditions, since in clear, no-moon conditions in rural areas, the naturally occurring SWIR radiation from nightglow produces a much higher irradiance than visible starlight. Thus, even well-intentioned efforts to characterize a test site's ambient illumination levels in the SWIR band may fail when based on photometric instruments that only measure visible light. A study by one of the authors in Ref. [2] showed that the correspondence between lux values and total SWIR irradiance in typical illumination conditions can vary by more than two orders of magnitude, depending on the spectrum of the ambient background. In analogy to the photometric lux definition, we propose the SWIR irradiance equivalent 'swux' level, derived by integration over the scene SWIR spectral irradiance weighted by a spectral sensitivity function S(λ), a SWIR analog of the V(λ) photopic response function.
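
    Numerically, the proposed quantity is simply the scene spectral irradiance weighted by a band sensitivity function and integrated. The sketch below shows that integral with a made-up flat S(λ); the actual S(λ) and the normalization constant of the swux definition are specified in the paper, not here.

```python
import numpy as np

def band_weighted_irradiance(wl_um, spectral_irradiance, sensitivity):
    """Integrate E(lambda) * S(lambda) over the SWIR band (swux-style,
    up to the definition's normalization constant).

    wl_um               : wavelength grid in micrometers (0.8 to 1.8)
    spectral_irradiance : E(lambda), e.g. in W m^-2 um^-1
    sensitivity         : dimensionless S(lambda) on the same grid
    """
    integrand = spectral_irradiance * sensitivity
    # Trapezoidal rule, written out to avoid depending on a library version
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl_um)))

# Toy example: flat 1 mW m^-2 um^-1 irradiance, flat unit sensitivity
wl = np.linspace(0.8, 1.8, 101)
print(band_weighted_irradiance(wl, np.full_like(wl, 1e-3), np.ones_like(wl)))
# -> ~1e-3 W m^-2 over the 1-um-wide band
```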

  20. Mapping and correcting the influence of gaze position on pupil size measurements

    PubMed Central

    Petrov, Alexander A.

    2015-01-01

    Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE)—the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5 % when the model parameters were pre-set to the physical layout dimensions, and by 97.5 % when they were optimized to fit the empirical error surface. PMID:25953668
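
    The geometric correction described above can be sketched directly: divide the measured pupil area by the cosine of the angle between the eye-to-camera and eye-to-stimulus axes. The function below is a minimal version of that idea (the positions are hypothetical 3-D coordinates; the paper's full model adds fitted parameters that are omitted here).

```python
import numpy as np

def pfe_correct(measured_area, eye_pos, camera_pos, stimulus_pos):
    """Undo pupil foreshortening error (PFE) with a cosine model.

    The apparent pupil area shrinks roughly as cos(theta), where theta is
    the angle between the eye-to-camera axis and the eye-to-stimulus (gaze)
    axis, so dividing by cos(theta) recovers the on-axis area.
    """
    eye, cam, stim = (np.asarray(p, dtype=float)
                      for p in (eye_pos, camera_pos, stimulus_pos))
    to_cam = (cam - eye) / np.linalg.norm(cam - eye)
    to_stim = (stim - eye) / np.linalg.norm(stim - eye)
    cos_theta = float(np.dot(to_cam, to_stim))
    return measured_area / cos_theta

# Example: camera mounted below the display, gaze at an off-axis stimulus
area = pfe_correct(1000.0, eye_pos=(0, 0, 0),
                   camera_pos=(0, -20, 60), stimulus_pos=(25, 15, 60))
print(round(area, 1))   # corrected pupil area in arbitrary units
```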

  1. Sublimation of icy aggregates in the coma of comet 67P/Churyumov-Gerasimenko detected with the OSIRIS cameras on board Rosetta

    NASA Astrophysics Data System (ADS)

    Gicquel, A.; Vincent, J.-B.; Agarwal, J.; A'Hearn, M. F.; Bertini, I.; Bodewits, D.; Sierks, H.; Lin, Z.-Y.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; Deller, J.; De Cecco, M.; Frattin, E.; El-Maarry, M. R.; Fornasier, S.; Fulle, M.; Groussin, O.; Gutiérrez, P. J.; Gutiérrez-Marquez, P.; Güttler, C.; Höfner, S.; Hofmann, M.; Hu, X.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kovacs, G.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Moreno, J. J. Lopez; Lowry, S.; Marzari, F.; Masoumzadeh, N.; Massironi, M.; Moreno, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Pommerol, A.; Preusker, F.; Scholten, F.; Shi, X.; Thomas, N.; Toth, I.; Tubiana, C.

    2016-11-01

    Beginning in 2014 March, the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras began capturing images of the nucleus and coma (gas and dust) of comet 67P/Churyumov-Gerasimenko using both the wide angle camera (WAC) and the narrow angle camera (NAC). The many observations taken since July of 2014 have been used to study the morphology, location, and temporal variation of the comet's dust jets. We analysed the dust monitoring observations shortly after the southern vernal equinox on 2015 May 30 and 31 with the WAC at the heliocentric distance Rh = 1.53 AU, where it is possible to observe that the jet rotates with the nucleus. We found that the decline of brightness as a function of the distance of the jet is much steeper than the background coma, which is a first indication of sublimation. We adapted a model of sublimation of icy aggregates and studied the effect as a function of the physical properties of the aggregates (composition and size). The major finding of this paper was that through the sublimation of the aggregates of dirty grains (radius a between 5 and 50 μm) we were able to completely reproduce the radial brightness profile of a jet beyond 4 km from the nucleus. To reproduce the data, we needed to inject a number of aggregates between 8.5 × 10¹³ and 8.5 × 10¹⁰ for a = 5 and 50 μm, respectively, or an initial mass of H₂O ice around 22 kg.

  2. On the synchrotron emission in kinetic simulations of runaway electrons in magnetic confinement fusion plasmas

    NASA Astrophysics Data System (ADS)

    Carbajal, L.; del-Castillo-Negrete, D.

    2017-12-01

    Developing avoidance or mitigation strategies for runaway electrons (REs) in magnetic confinement fusion (MCF) plasmas is of crucial importance for the safe operation of ITER. In order to develop these strategies, an accurate diagnostic capability that allows good estimates of the RE distribution function in these plasmas is needed. Synchrotron radiation (SR) of REs in MCF plasmas, besides being one of the main damping mechanisms for REs in the high-energy relativistic regime, is routinely used in current MCF experiments to infer the parameters of RE energy and pitch-angle distribution functions. In the present paper we address the long-standing question of the relationships between different RE distribution functions and their corresponding synchrotron emission, simultaneously including full-orbit effects, information on the spectral and angular distribution of the SR of each electron, and the basic geometric optics of a camera. We study the spatial distribution of the SR on the poloidal plane and the statistical properties of the expected value of the synchrotron spectra of REs. We observe a strong dependence of the synchrotron emission measured by the camera on the pitch-angle distribution of runaways: crescent shapes of the spatial distribution of the SR as measured by the camera relate to RE distributions with small pitch angles, while ellipse shapes relate to distributions of runaways with larger pitch angles. A weak dependence of the synchrotron emission measured by the camera on the RE energy, the value of the q-profile at the edge, and the chosen range of wavelengths is observed. Furthermore, we find that oversimplifying the angular dependence of the SR changes the shape of the synchrotron spectra and overestimates its amplitude by approximately 20 times for avalanching runaways and by approximately 60 times for mono-energetic distributions of runaways.

  3. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  4. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  5. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a guide camera that monitors the field images focused on a fiber mirror in order to control flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator with a triplet lens for producing a pupil, neutral density filters that allow a much brighter star to be used as a target or guide, a tent prism serving as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical designs of the FCC as modified after the PDR in April 2015.

  6. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASCs) have gained broad acceptance for recreational use owing to their decreasing cost, increasing image resolution and frame rate, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles comprise underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. Reconstructing whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, and characterizing and optimizing such a configuration makes assessing the instrumental errors of both volumes mandatory. To calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, were exploited together with a two-step nonlinear optimization. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on comparing the different camera configurations; each camera configuration was then compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (deviation from the true distance between the two testing markers) was less than 3 mm, and the error relative to the working-volume diagonal was in the range of 1:2000 (3×1.3×1.5 m3) to 1:7000 (4.5×2.2×1.5 m3), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10^-5) than that in the underwater environment, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% in-air and 2.9% underwater). These results encourage the use of ASC technology for quantitative reconstruction in both in-air and underwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
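
    The paper's accuracy metric, the deviation of the reconstructed inter-marker distance from the known value, can be sketched as follows. The linear DLT triangulation here is a generic stand-in for the authors' calibrated stereo reconstruction (their two-step nonlinear calibration is not reproduced), and all names are hypothetical.

        import numpy as np

        def triangulate_dlt(P1, P2, x1, x2):
            # Linear DLT triangulation of one marker seen by a calibrated
            # stereo pair. P1, P2: 3x4 projection matrices; x1, x2: pixel coords.
            A = np.array([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            X = np.linalg.svd(A)[2][-1]
            return X[:3] / X[3]

        def distance_error(Xa, Xb, rod_length_mm):
            # Accuracy metric: reconstructed inter-marker distance vs. known length.
            return abs(np.linalg.norm(Xa - Xb) - rod_length_mm)

        def relative_error(err_mm, volume_m):
            # Error expressed against the working-volume diagonal (e.g. ~1:2000).
            diag_mm = 1000.0 * np.linalg.norm(volume_m)
            return err_mm / diag_mm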

  7. Sharing resources, coordinating response : deploying and operating incident management systems

    DOT National Transportation Integrated Search

    1999-01-05

    This brochure describes how cost-effective incident management technologies can be useful in handling traffic congestion. Embedded sensors, closed circuit television cameras, and variable message signs are examples of existing technologies that can b...

  8. Microgravity

    NASA Image and Video Library

    2003-01-22

    One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.

  9. STS-109 Crew Interviews - Currie

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 Mission Specialist 2 Nancy Jane Currie is seen during a prelaunch interview. She answers questions about her inspiration to become an astronaut and her career path. She gives details on the Columbia Orbiter mission which has as its main tasks the maintenance and augmentation of the Hubble Space Telescope (HST). While she will do many things during the mission, the most important will be her role as the primary operator of the robotic arm, which is responsible for grappling the HST, bringing it to the Orbiter bay, and providing support for the astronauts during their EVAs (Extravehicular Activities). Additionally, the robotic arm will be responsible for transferring new and replacement equipment from the Orbiter to the HST. This equipment includes: two solar arrays, a Power Control Unit (PCU), the Advanced Camera for Surveys, and a replacement cooling system for NICMOS (Near Infrared Camera Multi-Object Spectrometer).

  10. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, PRNU estimation methodologies have centred on a wavelet-based de-noising approach. Resultant filtering artefacts, in combination with image and JPEG contamination, act to reduce the quality of the PRNU estimate. In this paper, it is argued that the application calls for a simplified filtering strategy, which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two-stage enhancement strategy in which only pixels having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
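
    A minimal sketch of the overall PRNU pipeline follows, using plain median filtering in the spatial domain as a stand-in for the paper's combined adaptive-and-median scheme; the two-stage enhancement step is omitted and all names are ours.

        import numpy as np
        from scipy.ndimage import median_filter

        def noise_residual(img):
            # Spatial-domain residual: image minus a median-filtered estimate of
            # its noise-free content (stand-in for the adaptive+median scheme).
            img = img.astype(np.float64)
            return img - median_filter(img, size=3)

        def fingerprint(images):
            # Maximum-likelihood-style PRNU estimate from several images
            # taken with the candidate camera.
            num = np.zeros_like(images[0], dtype=np.float64)
            den = np.zeros_like(num)
            for img in images:
                w = noise_residual(img)
                num += w * img
                den += img.astype(np.float64) ** 2
            return num / np.maximum(den, 1e-12)

        def ncc(a, b):
            # Normalized cross-correlation used to match a query residual
            # against a camera fingerprint.
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))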

  11. Understanding Visible Perception

    NASA Technical Reports Server (NTRS)

    2003-01-01

    One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.

  12. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to mitigate the resolution loss associated with lightfield acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, as well as two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive-index changes associated with turbulence; these changes require high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct this defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, as many authors have advocated.
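
    As a minimal illustration of the kind of lightfield processing these implementations involve, the sketch below performs textbook shift-and-sum synthetic refocusing over sub-aperture views; it is not the CAFADIS algorithm, and the array layout and names are assumed.

        import numpy as np

        def refocus(subviews, alpha):
            # subviews: lightfield as a (U, V, H, W) array of sub-aperture images;
            # alpha sets the synthetic focal plane (0 -> no shift). Integer-pixel
            # shifts keep the sketch short; real implementations interpolate.
            U, V, H, W = subviews.shape
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    du = int(round((u - U // 2) * alpha))
                    dv = int(round((v - V // 2) * alpha))
                    out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
            return out / (U * V)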

  13. An evaluation of red light camera (photo-red) enforcement programs in Virginia : a report in response to a request by Virginia's Secretary of Transportation.

    DOT National Transportation Integrated Search

    2005-01-01

    Red light running, which is defined as the act of a motorist entering an intersection after the traffic signal has turned red, caused almost 5,000 crashes in Virginia in 2003, resulting in at least 18 deaths and more than 3,800 injuries. In response ...

  14. Social Network Analysis of Crowds

    DTIC Science & Technology

    2009-08-06

    Crowd responses to non-lethal weapons and systems depend on prior, existing social relationships; real-time social interactions; and formal/informal hierarchies. Crowd Behavior Testbed layout: video cameras on trusses. Importance of social factors: response to non-lethal weapons fire depends on social relationships among crowd members (pre-existing personal relationships, ongoing real-time social interactions, formal/informal hierarchies).

  15. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
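
    A least-mean-squares motion-estimation step of the kind described can be sketched with the standard interaction (image-Jacobian) matrix that relates per-pixel optical flow and stereo depth to the six translational and rotational velocity components. This is a generic formulation under assumed normalized image coordinates, not the authors' exact estimator; all names are ours.

        import numpy as np

        def egomotion_lstsq(points_norm, depths, flows):
            # Least-squares 6-DOF egomotion (t, omega) of the camera pair from
            # per-pixel optical flow (u, v) and stereo depth Z, assuming the
            # standard interaction matrix at normalized image coords (x, y).
            A, b = [], []
            for (x, y), Z, (u, v) in zip(points_norm, depths, flows):
                A.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
                A.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
                b.extend([u, v])
            sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return sol[:3], sol[3:]  # translational, rotational velocity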

  16. Ecological Relationships of Meso-Scale Distribution in 25 Neotropical Vertebrate Species

    PubMed Central

    Michalski, Lincoln José; Norris, Darren; de Oliveira, Tadeu Gomes; Michalski, Fernanda

    2015-01-01

    Vertebrates are a vital ecological component of Amazon forest biodiversity. Although vertebrates are a functionally important part of various ecosystem services, they continue to be threatened by anthropogenic impacts throughout the Amazon. Here we use a standardized, regularly spaced arrangement of camera traps within 25 km2 to provide a baseline assessment of vertebrate species diversity in a sustainable-use protected area in the eastern Brazilian Amazon. We examined seasonal differences in the per-species encounter rates (number of photos per camera trap and number of cameras with photos). Generalized linear models (GLMs) were then used to examine the influence of five variables (altitude, canopy cover, basal area, distance to nearest river and distance to nearest large river) on the number of photos per species and on functional groups. GLMs were also used to examine the relationships between large predators [Jaguar (Panthera onca) and Puma (Puma concolor)] and their prey. A total of 649 independent photos of 25 species were obtained from 1,800 camera trap days (900 each during the wet and dry seasons). Only ungulates and rodents showed significant seasonal differences in the number of photos per camera. The number of photos differed between seasons for only three species (Mazama americana, Dasyprocta leporina and Myoprocta acouchy), all of which were photographed more (a 3- to 10-fold increase) during the wet season. Mazama americana was the only species for which a significant difference was found in occupancy, with more photos in more cameras during the wet season. For most groups and species, variation in the number of photos per camera was only weakly explained by the GLMs (deviance explained ranging from 10.3 to 54.4%). Terrestrial birds (Crax alector, Psophia crepitans and Tinamus major) and rodents (Cuniculus paca, Dasyprocta leporina and M. acouchy) were the notable exceptions, with our GLMs significantly explaining variation in the distribution of all these species (deviance explained ranging from 21.0 to 54.5%). The group and species GLMs revealed some novel ecological information for this relatively pristine area. We found no association between large cats and their potential prey. We also found that rodent and bird species were more often recorded closer to streams; as hunters gain access via rivers, this finding suggests that there is currently little anthropogenic impact on these species. Our findings provide a standardized baseline for comparison with other sites and against which planned management and extractive activities can be evaluated. PMID:25938582

  17. Ecological relationships of meso-scale distribution in 25 neotropical vertebrate species.

    PubMed

    Michalski, Lincoln José; Norris, Darren; de Oliveira, Tadeu Gomes; Michalski, Fernanda

    2015-01-01

    Vertebrates are a vital ecological component of Amazon forest biodiversity. Although vertebrates are a functionally important part of various ecosystem services, they continue to be threatened by anthropogenic impacts throughout the Amazon. Here we use a standardized, regularly spaced arrangement of camera traps within 25 km2 to provide a baseline assessment of vertebrate species diversity in a sustainable-use protected area in the eastern Brazilian Amazon. We examined seasonal differences in the per-species encounter rates (number of photos per camera trap and number of cameras with photos). Generalized linear models (GLMs) were then used to examine the influence of five variables (altitude, canopy cover, basal area, distance to nearest river and distance to nearest large river) on the number of photos per species and on functional groups. GLMs were also used to examine the relationships between large predators [Jaguar (Panthera onca) and Puma (Puma concolor)] and their prey. A total of 649 independent photos of 25 species were obtained from 1,800 camera trap days (900 each during the wet and dry seasons). Only ungulates and rodents showed significant seasonal differences in the number of photos per camera. The number of photos differed between seasons for only three species (Mazama americana, Dasyprocta leporina and Myoprocta acouchy), all of which were photographed more (a 3- to 10-fold increase) during the wet season. Mazama americana was the only species for which a significant difference was found in occupancy, with more photos in more cameras during the wet season. For most groups and species, variation in the number of photos per camera was only weakly explained by the GLMs (deviance explained ranging from 10.3 to 54.4%). Terrestrial birds (Crax alector, Psophia crepitans and Tinamus major) and rodents (Cuniculus paca, Dasyprocta leporina and M. acouchy) were the notable exceptions, with our GLMs significantly explaining variation in the distribution of all these species (deviance explained ranging from 21.0 to 54.5%). The group and species GLMs revealed some novel ecological information for this relatively pristine area. We found no association between large cats and their potential prey. We also found that rodent and bird species were more often recorded closer to streams; as hunters gain access via rivers, this finding suggests that there is currently little anthropogenic impact on these species. Our findings provide a standardized baseline for comparison with other sites and against which planned management and extractive activities can be evaluated.
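
    The GLM analysis described in these two records can be sketched as follows, assuming a Poisson family for the per-camera photo counts (the abstract does not name the family) and hypothetical column names for the five covariates.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical per-camera table: photo count for one species plus the
        # five covariates named in the abstract (column names are stand-ins).
        df = pd.read_csv("camera_trap_records.csv")

        model = smf.glm(
            "n_photos ~ altitude + canopy_cover + basal_area"
            " + dist_nearest_river + dist_nearest_large_river",
            data=df,
            family=sm.families.Poisson(),  # assumed count model
        ).fit()

        # 'Deviance explained' as reported in the abstract (e.g. 10.3-54.4%).
        deviance_explained = 1.0 - model.deviance / model.null_deviance
        print(model.summary(), deviance_explained)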

  18. Simulation Study of the Localization of a Near-Surface Crack Using an Air-Coupled Ultrasonic Sensor Array

    PubMed Central

    Delrue, Steven; Aleshin, Vladislav; Sørensen, Mikael; De Lathauwer, Lieven

    2017-01-01

    The importance of Non-Destructive Testing (NDT) to check the integrity of materials in different fields of industry has increased significantly in recent years. Actually, industry demands NDT methods that allow fast (preferably non-contact) detection and localization of early-stage defects with easy-to-interpret results, so that even a non-expert field worker can carry out the testing. The main challenge is to combine as many of these requirements into one single technique. The concept of acoustic cameras, developed for low frequency NDT, meets most of the above-mentioned requirements. These cameras make use of an array of microphones to visualize noise sources by estimating the Direction Of Arrival (DOA) of the impinging sound waves. Until now, however, because of limitations in the frequency range and the lack of integrated nonlinear post-processing, acoustic camera systems have never been used for the localization of incipient damage. The goal of the current paper is to numerically investigate the capabilities of locating incipient damage by measuring the nonlinear airborne emission of the defect using a non-contact ultrasonic sensor array. We will consider a simple case of a sample with a single near-surface crack and prove that after efficient excitation of the defect sample, the nonlinear defect responses can be detected by a uniform linear sensor array. These responses are then used to determine the location of the defect by means of three different DOA algorithms. The results obtained in this study can be considered as a first step towards the development of a nonlinear ultrasonic camera system, comprising the ultrasonic sensor array as the hardware and nonlinear post-processing and source localization software. PMID:28441738
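
    As one example of DOA estimation for a uniform linear array (the abstract compares three algorithms without naming them here), a conventional beamscan estimator can be sketched as follows; names and signatures are ours.

        import numpy as np

        def beamscan_doa(snapshots, f0, d, c=343.0, n_angles=181):
            # Conventional beamscan DOA estimate for a uniform linear array.
            # snapshots: complex (n_sensors, n_snapshots) narrowband data at
            # frequency f0; d: inter-sensor spacing in metres; c: sound speed.
            m = np.arange(snapshots.shape[0])
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
            angles = np.deg2rad(np.linspace(-90.0, 90.0, n_angles))
            power = []
            for th in angles:
                a = np.exp(-2j * np.pi * f0 * d * np.sin(th) * m / c)  # steering
                power.append(np.real(a.conj() @ R @ a))
            return np.rad2deg(angles[int(np.argmax(power))])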

  19. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Ongoing work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial-reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  20. Spatial and temporal skin blood volume and saturation estimation using a multispectral snapshot imaging camera

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Larsson, Marcus; Salerud, E. Göran

    2017-02-01

    Hyperspectral imaging (HSI) can estimate the spatial distribution of skin blood oxygenation using visible to near-infrared light. HSI oximeters often use a liquid-crystal tunable filter, an acousto-optic tunable filter or mechanically adjustable filter wheels, whose response and switching times are too long to monitor tissue hemodynamics. This work aims to evaluate a multispectral snapshot imaging system for estimating skin blood volume and oxygen saturation with high temporal and spatial resolution. We use a snapshot imager, the xiSpec camera (MQ022HG-IM-SM4X4-VIS, XIMEA), which has 16 wavelength-specific Fabry-Perot filters overlaid on a custom CMOS chip. The spectral bands, however, overlap substantially, which must be taken into account for an accurate analysis. An inverse Monte Carlo analysis is performed using a two-layered skin tissue model defined by epidermal thickness, hemoglobin concentration, oxygen saturation, melanin concentration and a spectrally dependent reduced-scattering coefficient, all parameters relevant for human skin. The analysis takes into account the spectral detector response of the xiSpec camera. At each spatial location in the field of view, we compare the simulated output to the detected diffusely backscattered spectra to find the best fit. The imager is evaluated for spatial and temporal variations during arterial and venous occlusion protocols applied to the forearm. Estimated blood-volume-change and oxygenation maps at 512×272 pixels show values comparable to reference measurements performed in contact with the skin tissue. We conclude that the snapshot xiSpec camera, paired with an inverse Monte Carlo algorithm, permits the use of this sensor for spatial and temporal measurement of varying physiological parameters such as skin tissue blood volume and oxygenation.
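
    The per-pixel fitting step can be sketched as a least-squares match between the 16 measured band values and a forward model projected through the camera's overlapping filter transmissions. Here simulate() is a hypothetical stand-in for the paper's Monte Carlo-based forward model, and every name is ours.

        import numpy as np
        from scipy.optimize import least_squares

        def band_values(spectrum, filter_T):
            # Project a simulated backscatter spectrum through the 16 overlapping
            # band transmissions (filter_T: 16 x n_wavelengths): one value per band.
            return filter_T @ spectrum / filter_T.sum(axis=1)

        def fit_pixel(measured16, simulate, filter_T, x0):
            # simulate(params) -> spectrum stands in for the two-layer Monte
            # Carlo forward model (epidermal thickness, blood volume,
            # saturation, melanin, reduced scattering).
            residual = lambda p: band_values(simulate(p), filter_T) - measured16
            return least_squares(residual, x0)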
