Science.gov

Sample records for video ccd camera

  1. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  2. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  3. CCD Luminescence Camera

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Elliott, Tom

    1987-01-01

New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence typically found.

  4. Advanced CCD camera developments

    SciTech Connect

    Condor, A.

    1994-11-15

Two charge-coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  5. Biofeedback control analysis using a synchronized system of two CCD video cameras and a force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Murai, Shunji

    1999-01-01

The biofeedback control analysis of human movement has become increasingly important in rehabilitation, sports medicine and physical fitness. In this study, a synchronized system was developed for acquiring sequential data of a person's movement. The setup employs a video recorder system linked with two CCD video cameras and a force-plate sensor system, which are configured to stop and start simultaneously. The feedback control movement of postural stability was selected as a subject for analysis. The body's center of gravity (COG) was calculated from measured 3-D coordinates of major joints using videometry with bundle adjustment and self-calibration. The raw serial data of COG and of foot pressure measured by the force-plate sensor are difficult to analyze directly because of their complex fluctuations. Utilizing autoregressive modeling, the power spectrum and the impulse response of movement factors enable analysis of their dynamic relations. This new biomedical engineering approach provides efficient information for the medical evaluation of a person's stability.

  6. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  7. Calibration Tests of Industrial and Scientific CCD Cameras

    NASA Technical Reports Server (NTRS)

    Shortis, M. R.; Burner, A. W.; Snow, W. L.; Goad, W. K.

    1991-01-01

Small format, medium resolution CCD cameras are at present widely used for industrial metrology applications. Large format, high resolution CCD cameras are primarily in use for scientific applications, but in due course should increase both the range of applications and the object space accuracy achievable by close range measurement. Slow scan, cooled scientific CCD cameras provide the further benefit of additional quantisation levels, which enables improved radiometric resolution. The calibration of all types of CCD cameras is necessary in order to characterize the geometry of the sensors and lenses. A number of different types of CCD cameras have been calibrated at the NASA Langley Research Center using self calibration and a small test object. The results of these calibration tests will be described, with particular emphasis on the differences between standard CCD video cameras and scientific slow scan CCD cameras.

  8. CCD Camera Observations

    NASA Astrophysics Data System (ADS)

    Buchheim, Bob; Argyle, R. W.

One night late in 1918, astronomer William Milburn, observing the region of Cassiopeia from Reverend T.H.E.C. Espin's observatory in Tow Law (England), discovered a hitherto unrecorded double star (Wright 1993). He reported it to Rev. Espin, who measured the pair using his 24-in. reflector: the fainter star was 6.0 arcsec from the primary, at position angle 162.4° (i.e. the fainter star was south-by-southeast from the primary) (Espin 1919). Some time later, it was recognized that the astrograph of the Vatican Observatory had taken an image of the same star-field a dozen years earlier, in late 1906. At that earlier epoch, the fainter star had been separated from the brighter one by only 4.8 arcsec, at position angle 186.2° (i.e. almost due south). Were these stars a binary pair, or were they just two unrelated stars sailing past each other? Some additional measurements might have begun to answer this question. If the secondary star was following a curved path, that would be a clue of orbital motion; if it followed a straight-line path, that would be a clue that these are just two stars passing in the night. Unfortunately, nobody took the trouble to re-examine this pair for almost a century, until the 2MASS astrometric/photometric survey recorded it in late 1998. After almost another decade, this amateur astronomer took some CCD images of the field in 2007, and added another data point on the star's trajectory, as shown in Fig. 15.1.

  9. The OCA CCD Camera Controller

    DTIC Science & Technology

    1996-01-01

Final Report, December 1996. This is the final report for EOARD contract SPC-93-4007. It contains the following sections: requirements analysis; description of the ...; physical implementation of a multi-CCD camera; Appendix 1: Controller schematics; Appendix 2: Data sheets of the major components; Appendix 3: ...

  10. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

  11. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  12. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  13. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  14. Photon counting micrometer and video CCD.

    NASA Astrophysics Data System (ADS)

    Tie, Qiongxian; Li, Chennfei

The structure and observational method of the photon counting slotted micrometer are proposed. The micrometer is made up of a slotted plate and a photomultiplier. The photon counting micrometer was replaced by a video CCD for regular trial observation and as a test for the equipment of a scientific CCD, because the micrometer transmission in the instrument's vertical angle transmission mechanism is dull, and the telescope cannot observe regularly since the optical axis changes greatly as the telescope points to different zenith distances. The video CCD is fixed during the course of observation, recording a picture every forty milliseconds (one hundred pictures within four seconds); after smoothing treatment, these simultaneously yield the moment and the stellar zenith distance at which a star passes through the meridian or prime vertical.

  15. CCD emulator design for LSST camera

    NASA Astrophysics Data System (ADS)

    Lu, W.; O'Connor, P.; Fried, J.; Kuczewski, J.

    2016-07-01

As part of the LSST project, a comprehensive CCD emulator that operates three CCDs simultaneously has been developed for testing multichannel readout electronics. Based on an Altera Cyclone V FPGA for timing and control, the emulator generates 48 channels of simulated video waveform in response to appropriate sequencing of parallel and serial clocks. Two 256 Mb serial memory chips are adopted for storage of arbitrary grayscale images. Arbitrary or fixed-pattern images can be generated from the emulator in triplicate, as three real CCDs would produce them, for qualifying and testing the LSST 3-stripe Science Raft Electronics Board (REB). Using the method of comparator threshold scanning, all 24 parallel clocks and 24 serial clocks from the REB are qualified for sequence, duration, and level before the video signal is generated. In addition, 66 channels of input bias and voltages are sampled through the multi-channel ADC to verify that correct values are applied to the CCD. Either a Gigabit Ethernet connector or a USB bus can be used to control and read back from the emulator board. A user-friendly PC software package has been developed for controlling and communicating with the emulator.

  16. Evryscope Robotilter automated camera/CCD alignment system

    NASA Astrophysics Data System (ADS)

    Ratzloff, Jeff K.; Law, Nicholas M.; Fors, Octavi; Ser, Daniel d.; Corbett, Henry T.

    2016-08-01

We have deployed a new class of telescope, the Evryscope, which opens a new parameter space in optical astronomy - the ability to detect short time scale events across the entire sky simultaneously. The system is a gigapixel-scale array camera with an 8000 sq. deg. field of view, 13 arcsec per pixel sampling, and the ability to detect objects brighter than g = 16 in each 2-minute exposure. The Evryscope is designed to find transiting exoplanets around exotic stars, as well as detect nearby supernovae and provide continuous records of distant relativistic explosions like gamma-ray-bursts. The Evryscope uses commercially available CCDs and optics; the machine and assembly tolerances inherent in the mass production of these parts introduce problematic variations in the lens/CCD alignment which degrade image quality. We have built an automated alignment system (Robotilters) to solve this challenge. In this paper we describe the Robotilter system, mechanical and software design, image quality improvement, and current status.

  17. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits, the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 X 484 active element interline transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.
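The color filter interpolation stage named above can be illustrated with a minimal demosaicing sketch. A standard RGGB Bayer mosaic and simple bilinear weighting are assumed here; the camera's actual field-staggered 3G pattern and its interpolation logic are not described in the record and are not reproduced.

```python
# Bilinear demosaicing of a Bayer (RGGB) mosaic: each missing color sample
# is estimated from a weighted average of its nearest same-color neighbors.
import numpy as np

def convolve3x3_same(img, k):
    """Tiny 3x3 'same' correlation with zero padding (no SciPy needed)."""
    p = np.pad(img, 1)
    H, W = img.shape
    return sum(k[i, j] * p[i:i + H, j:j + W]
               for i in range(3) for j in range(3))

def bilinear_demosaic(raw):
    """raw: (H, W) mosaic with RGGB tiling; returns (H, W, 3) RGB estimate."""
    H, W = raw.shape
    out = np.zeros((H, W, 3))
    masks = np.zeros((H, W, 3))
    masks[0::2, 0::2, 0] = 1                          # R sample sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = 1   # G sample sites
    masks[1::2, 1::2, 2] = 1                          # B sample sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        num = convolve3x3_same(raw * masks[..., c], kernel)
        den = convolve3x3_same(masks[..., c], kernel)
        out[..., c] = num / np.maximum(den, 1e-9)     # normalized average
    return out

# Sanity check: a flat gray scene must demosaic back to a flat gray image
raw = np.full((8, 8), 0.5)
rgb = bilinear_demosaic(raw)
print(np.allclose(rgb, 0.5))
```

Bilinear interpolation is the simplest member of this family; hardware CFA processors typically add edge-adaptive weighting on top of the same neighbor-averaging idea.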

  18. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; Benamati, Brian L.; D'Luna, Lionel J.; Shelley, Paul R.

    1991-06-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique, and two full custom CMOS digital video processing ICs, the 'CFA processor' and the 'RGB post- processor.' The system uses a 768 X 484 active element interline transfer CCD with a new 'field-staggered 3G' color filter pattern and a 'lenslet' overlay, which doubles the sensitivity of the camera. The digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB post-processor digital IC includes a color correction matrix, gamma correction, two-dimensional edge-enhancement, and circuits to control the black balance, lens aperture, and focus.

  19. Impact of CCD camera SNR on polarimetric accuracy.

    PubMed

    Chen, Zhenyue; Wang, Xia; Pacheco, Shaun; Liang, Rongguang

    2014-11-10

A comprehensive charge-coupled device (CCD) camera noise model is employed to study the impact of CCD camera signal-to-noise ratio (SNR) on polarimetric accuracy. The study shows that the standard deviations of the measured degree of linear polarization (DoLP) and angle of linear polarization (AoLP) are mainly dependent on the camera SNR. As the camera SNR increases, both the measurement errors and the standard deviations caused by the CCD camera noise decrease. When the DoLP of the incident light is smaller than 0.1, the camera SNR should be at least 75 to achieve a measurement error of less than 0.01. When the input DoLP is larger than 0.5, an SNR of 15 is sufficient to achieve the same measurement accuracy. An experiment is carried out to verify the simulation results.
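The trend reported above can be reproduced with a Monte-Carlo sketch. A simple four-measurement rotating-polarizer model with additive Gaussian noise is assumed here; it is not the paper's full CCD noise model, and the SNR definition (mean signal over noise standard deviation) is an assumption.

```python
# Estimate DoLP measurement error versus camera SNR for a 4-angle
# (0, 45, 90, 135 degree) linear polarimeter with additive Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)

def dolp_error(dolp_true, snr, trials=2000):
    aolp = np.pi / 6                      # arbitrary true polarization angle
    angles = np.array([0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
    # Malus-law intensities for partially linearly polarized light (s0 = 1)
    I = 0.5 * (1 + dolp_true * np.cos(2 * (angles - aolp)))
    sigma = I.mean() / snr                # noise level set by mean signal / SNR
    noisy = I + sigma * rng.standard_normal((trials, 4))
    s0 = noisy.sum(axis=1) / 2            # Stokes parameters from the 4 frames
    s1 = noisy[:, 0] - noisy[:, 2]
    s2 = noisy[:, 1] - noisy[:, 3]
    dolp = np.sqrt(s1**2 + s2**2) / s0
    return np.abs(dolp - dolp_true).mean()

for snr in (15, 75):
    print(snr, round(dolp_error(0.5, snr), 4))
```

As in the study, the mean DoLP error at SNR 75 comes out well below the error at SNR 15 for the same input polarization state.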

  20. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

A high-frame-rate optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed, and several prototypes have been fabricated and are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers for ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking to achieve a 4 kHz frame rate for 512 X 512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel plate intensifier (MCPII) redesigned for high-speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with bidirectional vertical readout architecture. The camera mainframe is designed utilizing a multilayer motherboard for transporting CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.

  1. Streak Camera Performance with Large-Format CCD Readout

    SciTech Connect

    Lerche, R A; Andrews, D S; Bell, P M; Griffith, R L; McDonald, J W; Torres, P III; Vergel de Dios, G

    2003-07-08

The ICF program at Livermore has a large inventory of optical streak cameras that were built in the 1970s and 1980s. The cameras include micro-channel plate image-intensifier tubes (IIT) that provide signal amplification and early lens-coupled CCD readouts. Today, these cameras are still very functional, but some replacement parts such as the original streak tube, CCD, and IIT are scarce and obsolete. This article describes recent efforts to improve the performance of these cameras using today's advanced CCD readout technologies. Very sensitive, large-format CCD arrays with efficient fiber-optic input faceplates are now available for direct coupling with the streak tube. Measurements of camera performance characteristics including linearity, spatial and temporal resolution, line-spread function, contrast transfer ratio (CTR), and dynamic range have been made for several different camera configurations: CCD coupled directly to the streak tube, CCD directly coupled to the IIT, and the original configuration with a smaller CCD lens-coupled to the IIT output. Spatial resolution (limiting visual) with and without the IIT is 8 and 20 lp/mm, respectively, for photocathode current density up to 25% of the Child-Langmuir (C-L) space-charge limit. Temporal resolution (FWHM) deteriorates by about 20% when the cathode current density reaches 10% of the C-L space-charge limit. Streak tube operation with large average tube current was observed by illuminating the entire slit region through a Ronchi ruling and measuring the CTR. Sensitivity (CCD electrons per streak tube photoelectron) for the various configurations ranged from 7.5 to 2,700 with read noise of 7.5 to 10.5 electrons. Optimum spatial resolution is achieved when the IIT is removed. Maximum dynamic range requires a configuration where a single photoelectron from the photocathode produces a signal that is 3 to 5 times the read noise.

  2. Printed circuit board for a CCD camera head

    DOEpatents

    Conder, Alan D.

    2002-01-01

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

  3. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  4. Structural and thermal modeling of a cooled CCD camera

    NASA Astrophysics Data System (ADS)

    Ahmad, Anees; Arndt, Thomas D.; Gross, Robert; Hahn, Mark; Panasiti, Mark

    2001-11-01

This paper presents structural and thermal modeling of a high-performance CCD camera designed to operate under severe environments. Minimizing the dark current noise required the CCD to be maintained at low temperature while the camera operated in a 70 degrees C environment. A thermoelectric cooler (TEC) was selected due to its simplicity and relatively low cost. Minimizing the thermal parasitic loads due to conduction and convection, and maximizing the heat sink performance, were critical in this design. The critical structural features of this camera are the CCD leads and the bond joint that holds the CCD in alignment relative to the lens. The CCD leads are susceptible to fatigue failure when subjected to random vibrations for an extended period of time. This paper outlines the methods used to model and analyze the CCD leads for fatigue, the supportive vibration testing performed, and the steps taken to correct for structural inadequacies found in the original design. The key results of all this thermal and structural modeling and testing are presented.

  5. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  6. Visual enhancement of laparoscopic nephrectomies using the 3-CCD camera

    NASA Astrophysics Data System (ADS)

    Crane, Nicole J.; Kansal, Neil S.; Dhanani, Nadeem; Alemozaffar, Mehrdad; Kirk, Allan D.; Pinto, Peter A.; Elster, Eric A.; Huffman, Scott W.; Levin, Ira W.

    2006-02-01

Many surgical techniques are currently shifting from the more conventional, open approach towards minimally invasive laparoscopic procedures. Laparoscopy results in smaller incisions, potentially leading to less postoperative pain and more rapid recoveries. One key disadvantage of laparoscopic surgery is the loss of three-dimensional assessment of organs and tissue perfusion. Advances in laparoscopic technology include high-definition monitors for improved visualization and the upgrade of single charge-coupled device (CCD) detectors to 3-CCD cameras, to provide a larger, more sensitive color palette to increase the perception of detail. In this discussion, we further advance existing laparoscopic technology to create greater enhancement of images obtained during radical and partial nephrectomies, in which the assessment of tissue perfusion is crucial but limited with current 3-CCD cameras. By separating the signals received by each CCD in the 3-CCD camera and by introducing a straightforward algorithm, rapid differentiation of renal vessels and perfusion is accomplished and could be performed in real time. The newly acquired images are overlaid onto conventional images for reference and comparison. This affords the surgeon the ability to accurately detect changes in tissue oxygenation despite inherent limitations of the visible-light image. Such additional capability should impact procedures in which visual assessment of organ vitality is critical.
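The channel-separation idea above can be illustrated with a toy sketch. The specific ratio used here (red fraction of the total signal) is an assumption for demonstration; the record describes the actual algorithm only as "straightforward" and it is not reproduced.

```python
# Treat the three CCD signals as separate planes, derive an enhancement map
# from a channel ratio, and overlay it on the conventional image.
import numpy as np

def perfusion_map(rgb):
    """rgb: float array (H, W, 3) in [0, 1]; returns a normalized ratio map."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ratio = r / (r + g + b + 1e-6)        # red fraction as a crude perfusion cue
    lo, hi = ratio.min(), ratio.max()
    return (ratio - lo) / (hi - lo + 1e-6)

def overlay(base, enhanced, alpha=0.4):
    """Blend the enhancement map into the red channel for side-by-side review."""
    out = base.copy()
    out[..., 0] = (1 - alpha) * out[..., 0] + alpha * enhanced
    return out

# Toy frame: a "vessel" region with stronger red signal than background
img = np.full((8, 8, 3), 0.4)
img[2:4, :, 0] = 0.8                      # red-dominant rows stand in for vessels
m = perfusion_map(img)
fused = overlay(img, m)
print(m[2, 0] > m[5, 0])                  # vessel rows map brighter than background
```

Because the map is computed per pixel from a single frame, this kind of enhancement is cheap enough to run on live video, which is the real-time property the abstract emphasizes.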

  7. Colour pictures with a CCD camera.

    NASA Astrophysics Data System (ADS)

    Véron-Cetty, M.-P.; Véron, P.

    1983-12-01

The 1.5 m Danish telescope at La Silla has been used to photograph a number of galaxies with a CCD camera (1) through three different filters: blue (Johnson B), red and infrared (Gunn r and z). The images have been reduced with the ESO image processing system IHAP and then transferred to the VAX computer to use DICOMED, the high-quality hard-copy device which produces colour slides. These photographs are in real but not natural colours, in the sense that instead of using blue, green and red images, we have used blue, red and infrared. The colour balance is arbitrary but the same for all pictures, except #2. The seeing was 1.2 to 1.5 arcsec. In all cases, north is at the top, east to the left.

  8. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflection collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance is the same as that of foreign counterparts at a much lower cost, and it should find a good market.
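The multiple-frame averaging step mentioned above rests on a simple statistical fact: averaging N independent frames reduces the random-noise standard deviation by roughly sqrt(N). A minimal sketch, with illustrative frame count and noise level:

```python
# Average 16 noisy frames of a static target and compare residual noise
# against a single frame; the averaged noise should drop by about 4x.
import numpy as np

rng = np.random.default_rng(2)
truth = np.full((64, 64), 100.0)          # ideal target image (static scene)
sigma = 5.0                               # per-frame random noise std

def noisy_frame():
    return truth + sigma * rng.standard_normal(truth.shape)

single = noisy_frame()
avg16 = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(f"single-frame noise std : {np.std(single - truth):.2f}")
print(f"16-frame average std   : {np.std(avg16 - truth):.2f}")
```

This is why test benches average frames before measuring quantities like MTF or NETD: the target is static, so averaging attacks only the random noise.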

  9. Development of an all-in-one gamma camera/CCD system for safeguard verification

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

    2014-12-01

For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm³ and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source placed 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.

  10. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
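The "slightly more than 136 years" repeat period quoted above is consistent with a 32-bit seconds counter; that this is how Geo-TimeCode is implemented internally is an assumption, not a documented fact, but the arithmetic checks out:

```python
# A 32-bit counter incremented once per second wraps after 2^32 seconds,
# which is just over 136 Julian years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # Julian year in seconds
years = 2**32 / SECONDS_PER_YEAR
print(f"2^32 seconds = {years:.1f} years")
```

By contrast, the 24-hour repeat of conventional SMPTE-style time codes is what makes them unsuitable for the long-term multi-camera deployments the abstract describes.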

  11. High frame rate CCD camera with fast optical shutter

    SciTech Connect

    Yates, G.J.; McDonald, T.E. Jr.; Turko, B.T.

    1998-09-01

    A high frame rate CCD camera coupled with a fast optical shutter has been designed for high repetition rate imaging applications. The design uses state-of-the-art microchannel plate image intensifier (MCPII) technology fostered/developed by Los Alamos National Laboratory to support nuclear, military, and medical research requiring high-speed imagery. Key design features include asynchronous resetting of the camera to acquire random transient images, patented real-time analog signal processing with 10-bit digitization at 40--75 MHz pixel rates, synchronized shutter exposures as short as 200 ps, and sustained continuous readout of 512 x 512 pixels per frame at 1--5 Hz rates via parallel multiport (16-port CCD) data transfer. Salient characterization/performance test data for the prototype camera are presented; temporally and spatially resolved images obtained from range-gated LADAR field testing are included; and an alternative system configuration using several cameras sequenced to deliver discrete numbers of consecutive frames at effective burst rates up to 5 GHz (accomplished by time-phasing of consecutive MCPII shutter gates without overlap) is discussed. Potential applications, including dynamic radiography and optical correlation, are also presented.
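
    The 5 GHz effective burst rate quoted above is simply the reciprocal of the 200 ps minimum shutter gate, assuming the time-phased gates are packed back-to-back with no overlap or dead time:

```python
gate_s = 200e-12                 # shortest synchronized shutter exposure, 200 ps
burst_rate_hz = 1.0 / gate_s     # consecutive, non-overlapping gates
print(f"{burst_rate_hz / 1e9:.0f} GHz")   # 5 GHz
```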

  12. Multispectral synthesis of daylight using a commercial digital CCD camera.

    PubMed

    Nieves, Juan L; Valero, Eva M; Nascimento, Sérgio M C; Hernández-Andrés, Javier; Romero, Javier

    2005-09-20

    The performance of multispectral devices in recovering spectral data has been intensively investigated in some applications, such as the spectral characterization of art paintings, but has received little attention in the context of spectral characterization of natural illumination. This study investigated the quality of spectral estimation of daylight-type illuminants using a commercial digital CCD camera and a set of broadband colored filters. Several recovery algorithms were tested that require neither the spectral sensitivities of the camera sensors nor eigenvectors to describe the spectra. Tests were carried out both with virtual data, using simulated camera responses, and with data obtained from real measurements. It was found that daylight spectra can be recovered with high spectral and colorimetric accuracy using a reduced number of spectral bands, from three to nine.
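
    A recovery algorithm that needs neither the sensor sensitivities nor an explicit eigenvector basis can be as simple as a linear map learned by least squares from (response, spectrum) training pairs. A hedged NumPy sketch, with fully synthetic data and dimensions chosen for illustration (not the paper's method or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: spectra sampled at 31 wavelengths but lying in a
# 6-dimensional linear subspace, and a 9-band camera whose sensitivities
# exist only to simulate responses -- they are never given to the algorithm.
n_wavelengths, n_basis, n_bands, n_train = 31, 6, 9, 200
basis = rng.random((n_wavelengths, n_basis))
sensitivities = rng.random((n_bands, n_wavelengths))

train_spectra = basis @ rng.random((n_basis, n_train))
responses = sensitivities @ train_spectra        # simulated camera responses

# Learn a linear recovery matrix W by least squares: spectra ~= W @ responses.
X, *_ = np.linalg.lstsq(responses.T, train_spectra.T, rcond=None)
W = X.T

# Recover an unseen spectrum from its 9 camera responses alone.
test_spectrum = basis @ rng.random(n_basis)
recovered = W @ (sensitivities @ test_spectrum)
print(np.allclose(recovered, test_spectrum, atol=1e-6))  # True
```

Recovery is exact here only because the synthetic spectra lie in a subspace of lower dimension than the number of bands; real daylight spectra are approximately low-dimensional, which is why a handful of bands suffices.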

  13. Video indirect ophthalmoscopy using a hand-held video camera.

    PubMed

    Shanmugam, Mahesh P

    2011-01-01

    Fundus photography in adults and cooperative children is possible with a fundus camera or a slit lamp-mounted digital camera. A RetCam(TM) or a video indirect ophthalmoscope is necessary for fundus imaging in infants and in young children under anesthesia. Herein, a technique for converting a hand-held digital video camera into a video indirect ophthalmoscope for fundus imaging is described. This device will allow anyone with a hand-held video camera to obtain fundus images. Limitations of this technique include a learning curve and an inability to perform scleral depression.

  14. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube, and the camera tube converts the light signal into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube; this assures that a normalized video signal is provided over a first illumination range. An automatic gain control loop, also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit; this assures that the normalized video signal is maintained over a second illumination range.

  15. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  16. High-performance LLLTV CCD camera for nighttime pilotage

    NASA Astrophysics Data System (ADS)

    Williams, George M., Jr.

    1992-06-01

    Nighttime, nap-of-the-earth pilotage requires information from several sensors, including thermal and image-intensified sensors. Traditionally, the thermal imagery is displayed on a CRT, while the image-intensified imagery is displayed with a night vision goggle (NVG), a direct-view device worn immediately in front of the pilot's eyes. If electronic output data from the image intensifier could be displayed on a CRT, the pilot's safety and mission effectiveness would be greatly enhanced. Conventional approaches using charge coupled devices fiberoptically coupled to image intensifier tubes have failed to provide the resolution, contrast, and sensitivity that pilots are accustomed to with night vision goggles. To produce an image-intensified sensor with performance comparable to an NVG, an intensified sensor was fabricated that is optimized for coupling to solid-state sensors and eliminates all fiberoptic-to-fiberoptic interfaces. The Integrated Taper Assembly (ITA) sensor has a fiberoptic taper built into the vacuum of the image tube. The taper minifies the 18 or 25 millimeter (mm) output of the image intensifier tube to the 11 mm diagonal of the high-resolution CCD, requiring only one optical coupling -- at the CCD surface. By offering high resolution, high sensitivity, and a simplified optical path, the ITA image intensifier overcomes the shortcomings that normally limit the performance of intensified CCD cameras.

  17. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems, ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles, and to reference a specific location or orientation of an object that has moved. Some systems may allow only a small margin of error in the position of components, and a theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter that attaches a CCD camera with lens to the eyepiece of a Leica Wild T3000 theodolite, enabling viewing on a connected monitor and thus use with multiple theodolites simultaneously. This technology removes a substantial part of the human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means of measuring such error.

  18. Design of 300 frames per second 16-port CCD video processing circuit

    NASA Astrophysics Data System (ADS)

    Yang, Shao-hua; Guo, Ming-an; Li, Bin-kang; Xia, Jing-tao; Wang, Qunshu

    2011-08-01

    It is hard to achieve speeds of hundreds of frames per second in high-resolution charge coupled device (CCD) cameras, because the pixel charge must be read out serially, which costs considerable time. Multiple-port CCD technology is an efficient way to realize high-frame-rate, high-resolution solid-state imaging systems: the pixel charge is read out through several ports in parallel, which decreases the readout time. However, the video processing circuit for a multiple-port CCD is difficult to design, and real-time high-speed image data acquisition is also a knotty problem. A 16-port high-frame-rate CCD video processing circuit based on a Complex Programmable Logic Device (CPLD) and the VSP5010 has been developed around a specialized back-illuminated, 512 x 512 pixel, 400 fps (frames per second) frame-transfer CCD sensor from Sarnoff Ltd. The CPLD produces a high-precision sample clock and timing, and accurate sampling of the CCD video voltage is achieved with Correlated Double Sampling (CDS) technology. Eight VSP5010 chips with CDS capability sample and digitize the CCD analog signals into 12-bit digital image data; the 16 analog CCD outputs are thus digitized into 192-bit-wide, 6.67 MHz parallel digital image data. The CPLD and Time Division Multiplexing (TDM) technology then encode the 192-bit-wide data into two 640 MHz serial streams transmitted to a remote data acquisition module via two fibers. The acquisition module decodes the serial data back into the original image data and stores it in a frame cache, from which software reads the data over USB 2.0 and stores it on a hard disk. The digital image data, at 12 bits per pixel, was collected and displayed with the system software. The results show that the 16-port 300 fps CCD output signals can be digitized and transmitted with this video processing circuit, and that remote data acquisition has been realized.
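
    A quick consistency check on the data rates quoted above (assuming the 640 MHz figure denotes the per-fiber serial bit rate, which the abstract does not state explicitly):

```python
parallel_bps = 192 * 6.67e6   # 192-bit-wide words at the 6.67 MHz word rate
serial_bps = 2 * 640e6        # two fibers, assumed 640 Mb/s each
print(f"{parallel_bps / 1e9:.2f} Gb/s vs {serial_bps / 1e9:.2f} Gb/s")
# 1.28 Gb/s vs 1.28 Gb/s -- the serial capacity matches the parallel rate
```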

  19. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  20. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  1. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  2. CCD camera system design for the beam diagnostic by using OTR

    NASA Astrophysics Data System (ADS)

    Yu, Ying; Yang, Ke-Hu; Dai, Jian-Ping

    2008-01-01

    In this paper, a new CCD camera system used in OTR beam measurement is presented. The basic principle of OTR beam measurement and the application of the ICX208CL and AD9929 chips in the camera system design are introduced in detail. Supported by the Major State Basic Research Development Program (2002CB713606).

  3. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  4. Laboratory calibration and characterization of video cameras

    NASA Astrophysics Data System (ADS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-08-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
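
    The displaced-reticle measurement described above reduces to dividing a known physical displacement by the image shift it produces on the sensor; a minimal sketch (all numbers hypothetical, not from the paper):

```python
def effective_pixel_spacing_um(reticle_shift_um, image_shift_px):
    """Effective pixel spacing = known reticle displacement / observed pixel shift.

    With the lens removed, the laser-illuminated reticle is imaged at unit
    magnification, so the ratio gives the camera/grabber pixel pitch directly.
    """
    return reticle_shift_um / image_shift_px

# Hypothetical: a 1000 um reticle displacement moves the image by 88.2 pixels.
print(round(effective_pixel_spacing_um(1000.0, 88.2), 2))  # 11.34 um/pixel
```

Repeating the measurement along both axes yields the horizontal and vertical spacings; comparing shifts along nominally orthogonal directions exposes any non-perpendicularity of the axes.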

  5. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  6. Measuring neutron fluences and gamma/x ray fluxes with CCD cameras

    NASA Astrophysics Data System (ADS)

    Yates, G. J.; Smith, G. W.; Zagarino, P.; Thomas, M. C.

    The capability to measure bursts of neutron fluence and gamma/x-ray flux directly with charge coupled device (CCD) cameras, while distinguishing between the video signals produced by the two types of radiation even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) frame-transfer (FT) CCDs irradiated with pulsed neutrons (14 MeV) and bremsstrahlung photons (4-12 MeV endpoint) are used to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate a responsivity of approximately 0.05 V/rad, with >= 1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers, or "peaks", binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^3 range indicate smearing over approximately 1 to 10 percent of the CCD array, with charge per pixel ranging between the noise and saturation levels.
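
    The discrimination described above rests on the two signatures: photon irradiation raises the pixel pedestal broadly across the array, while neutrons produce localized charge peaks. A toy sketch of separating the two on a synthetic frame (all levels and thresholds hypothetical, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(1)

frame = rng.normal(100.0, 5.0, size=(128, 128))   # gamma pedestal + read noise
peak_rows = rng.integers(0, 128, size=40)
peak_cols = rng.integers(0, 128, size=40)
frame[peak_rows, peak_cols] += 300.0              # neutron charge centers

gamma_level = np.median(frame)       # broad photon response, robust to peaks
threshold = gamma_level + 10 * 5.0   # 10 sigma above the pedestal
n_peaks = int(np.sum(frame > threshold))
print(int(round(gamma_level)), n_peaks)
```

The median isolates the distributed gamma component, and the count of pixels far above it estimates the neutron peak population, mirroring the area/amplitude binning described in the abstract.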

  7. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video captures a live-action scene with a 360° field of view, recorded simultaneously by multiple cameras or microlenses whose principal points are offset from the rotating axis of the device. This offset causes problems when stitching together the individual video frames from the separate cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. The first is a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device. The number of panoramas is far higher than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being drafted to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second application is the generation of 3D video-based reconstructions of heritage sites (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene; the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and offers promising features for mobile mapping systems.

  8. Auto-measuring system of aero-camera lens focus using linear CCD

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-ye; Zhao, Yu-liang; Wang, Shu-juan

    2014-09-01

    The automatic and accurate measurement of the focal length of an aviation camera lens is of great significance and practical value. The traditional method relies on the human eye to read, through a reading microscope, the scribed lines on the focal plane of a parallel light pipe (collimator); it is inefficient, and the results are easily influenced by human factors. Our method uses a linear solid-state image sensor instead of a reading microscope to convert the image size of a specific target into an electrical pulse width, and uses a computer to measure the focal length automatically. During measurement, the lens under test is placed in front of the objective of the collimator. A pair of scribed lines on the collimator's focal plane is imaged onto the focal plane of the lens under test. With the linear CCD and its drive circuit placed at this image plane, the linear CCD converts the one-dimensional light intensity distribution into a time series of electrical signals. One path of the signal is taken directly to a video monitor via an image acquisition card for optical path adjustment and focusing; the other path is processed electronically to obtain the pulse width corresponding to the scribed lines. The computer processes the pulse width and outputs the focal length measurement. Practical measurements showed a relative error of about 0.10%, in good agreement with theory.
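
    The measurement rests on the standard collimator magnification relation: a target of known size on the collimator focal plane is imaged by the lens under test, and f_test = f_collimator × (image size / target size). A sketch with hypothetical numbers (not the paper's instrument parameters):

```python
def focal_length_mm(f_collimator_mm, target_size_mm, image_px, pixel_pitch_um):
    """f_test = f_collimator * (image size on linear CCD / target size).

    image_px is the pulse width expressed in pixels; the pixel pitch
    converts it to a physical image size.
    """
    image_size_mm = image_px * pixel_pitch_um / 1000.0
    return f_collimator_mm * image_size_mm / target_size_mm

# Hypothetical: 1000 mm collimator, 4 mm line spacing, imaged across
# 600 pixels of a 7 um pitch linear CCD.
print(round(focal_length_mm(1000.0, 4.0, 600.0, 7.0), 1))  # 1050.0 mm
```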

  9. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    There are many methods of acquiring multispectral images, but a dynamically band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electronic circuit for parallel image signal processing; firmware and application software were also developed. The notable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the ray paths inside the prism when redesigning the image splitter, and the dimensions of the splitter were determined by simulation assuming BK7 glass and non-dichroic coatings. These properties were chosen so that the full wavelength range reaches all three film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without displacing the image sensors on the film plane. The developed 3CCD camera was evaluated in the detection of scab and bruises on Fuji apples. The results show that the filter-exchangeable 3CCD camera can provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.

  10. CID25: radiation hardened color video camera

    NASA Astrophysics Data System (ADS)

    Baiko, D. A.; Bhaskaran, S. K.; Czebiniak, S. W.

    2006-02-01

    The charge injection device CID25, a color video imager, is presented. The imager is compliant with the NTSC interlaced TV standard; it has 484 by 710 displayable pixels and is capable of producing 30 frames-per-second color video. The CID25 combines preamplifier-per-pixel technology with parallel row processing to achieve high conversion gain and low noise bandwidth, and on-chip correlated double sampling circuitry reduces the low-frequency noise components. The CID25 is operated by a camera system consisting of two parts, the head assembly and the camera control unit (CCU), which can be separated by a cable up to 150 meters long. The CID25 imager and the head portion of the camera are radiation hardened: they can produce color video with insignificant SNR degradation out to at least 2.85 Mrad total dose of Co-60 γ-radiation. This represents the industry's first radiation-hardened color video system based on a semiconductor photodetector with adequate sensitivity for room-light operation.

  11. Calibrating Video Cameras For Meteor Works

    NASA Astrophysics Data System (ADS)

    Khaleghy-Rad, Mona; Campbell-Brown, M.

    2006-09-01

    The calculation of the intensity of light produced by a meteor ablating in the atmosphere is crucial to the determination of meteoroid masses, and to uncovering the meteoroid's physical structure through ablation modeling. A necessary step in this determination is to use cameras that have been calibrated end-to-end to determine their precise spectral response. We report here a new procedure for calibrating the low-light video cameras used for meteor observing, which will be used in conjunction with average meteor spectra to determine absolute light intensities.

  12. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    NASA Astrophysics Data System (ADS)

    Náfrádi, Gábor; Kovácsik, Ákos; Pór, Gábor; Lampert, Máté; Un Nam, Yong; Zoletnik, Sándor

    2015-01-01

    A PCO Pixelfly VGA CCD camera, part of the Beam Emission Spectroscopy (BES) diagnostic system on the Korea Superconducting Tokamak Advanced Research (KSTAR) device and used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goals of this work were to identify the origin of the radiation damage and to propose solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. In addition to the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during, and after the irradiations, numerous frames were taken with the camera at 5 s exposure times. Evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose-rate levels (up to 2 Gy/h), the number of white pixels did not increase. We found that the white pixel generation originated from neutron-induced thermal hopping of the electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  13. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  14. Characterization of a CCD-camera-based system for measurement of the solar radial energy distribution

    NASA Astrophysics Data System (ADS)

    Gambardella, A.; Galleano, R.

    2011-10-01

    Charge-coupled device (CCD)-camera-based measurement systems offer the possibility to gather information on the solar radial energy distribution (sunshape). Sunshape measurements are very useful in designing high concentration photovoltaic systems and heliostats as they collect light only within a narrow field of view, the dimension of which has to be defined in the context of several different system design parameters. However, in this regard the CCD camera response needs to be adequately characterized. In this paper, uncertainty components for optical and other CCD-specific sources have been evaluated using indoor test procedures. We have considered CCD linearity and background noise, blooming, lens aberration, exposure time linearity and quantization error. Uncertainty calculation showed that a 0.94% (k = 2) combined expanded uncertainty on the solar radial energy distribution can be assumed.
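
    A combined expanded uncertainty such as the 0.94% (k = 2) figure above is conventionally formed by root-sum-squaring the individual standard components and applying the coverage factor, per the GUM; a sketch with purely hypothetical component values, not the paper's actual budget:

```python
import math

# Hypothetical standard uncertainty components (%, k = 1), one per source
# evaluated in the paper:
components = {
    "CCD linearity": 0.20,
    "background noise": 0.15,
    "blooming": 0.25,
    "lens aberration": 0.20,
    "exposure-time linearity": 0.15,
    "quantization": 0.10,
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))  # RSS, k = 1
U_expanded = 2 * u_combined                                     # coverage k = 2
print(round(U_expanded, 2))  # 0.89
```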

  15. The In-flight Spectroscopic Performance of the Swift XRT CCD Camera During 2006-2007

    NASA Technical Reports Server (NTRS)

    Godet, O.; Beardmore, A.P.; Abbey, A.F.; Osborne, J.P.; Page, K.L.; Evans, P.; Starling, R.; Wells, A.A.; Angelini, L.; Burrows, D.N.; Kennea, J.; Campana, S.; Chincarini, G.; Citterio, O.; Cusumano, G.; LaParola, V.; Mangano, V.; Mineo, T.; Giommi, P.; Perri, M.; Capalbi, M.; Tamburelli, F.

    2007-01-01

    The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.
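
    The CTI effect described above is multiplicative per charge transfer, so a first-order correction rescales each measured event charge by (1 − CTI)^−N for an event N transfers from the readout node; a sketch with hypothetical values, not the Swift XRT calibration:

```python
def cti_corrected_charge(measured_e, cti, n_transfers):
    """Undo trap losses to first order: Q_true ~= Q_measured / (1 - CTI)**N."""
    return measured_e / (1.0 - cti) ** n_transfers

# Hypothetical: a 1000 e- event detected 500 rows from the readout node,
# with CTI = 2e-5 per transfer -> about 1% of the charge restored.
print(round(cti_corrected_charge(1000.0, 2e-5, 500), 1))
```

Because CTI grows with accumulated proton damage, the correction (and hence the effective gain) must be tracked as a function of time, as the abstract describes.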

  16. A method for measuring modulation transfer function of CCD device in remote camera with grating pattern

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Shen, Weimin

    2008-03-01

    The remote camera developed by us is the sole functional payload of a micro-satellite. Modulation transfer function (MTF) is a direct and accurate parameter for evaluating the system performance of a remote camera, and the MTF of a camera is determined jointly by the MTF of the camera lens and that of its CCD device. The MTF of the camera lens can be tested directly with a commercial optical system testing instrument, but it is indispensable to measure the MTF of the CCD device accurately before assembling the whole camera, so that the performance of the whole camera can be evaluated in advance. Compared with other existing MTF measurement methods, this grating-pattern method requires less equipment and simpler arithmetic: only one complete scan of the grating pattern, followed by data processing and interpolation, is needed to obtain continuous MTF curves for the whole camera and its CCD device. A high-precision optical system testing instrument guarantees the precision of this indirect method. This indirect MTF measurement is of reference value for testing the MTF of electronic devices and for deriving MTF indirectly from the corresponding CTF.
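
    Deriving MTF from a measured CTF (square-wave contrast transfer) is classically done with Coltman's series; a sketch that checks the round trip on a toy linear MTF (the toy MTF and truncation limits are illustrative assumptions, not the paper's data):

```python
import math

def mu(n):
    """Moebius function by trial division (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def coltman_mtf(ctf, f, k_max=99):
    """Coltman inversion: MTF(f) = (pi/4) * sum over odd k of b_k * CTF(k f) / k,
    with b_k = mu(k) * (-1)**((k - 1) // 2)."""
    total = 0.0
    for k in range(1, k_max + 1, 2):
        b = mu(k) * (-1) ** ((k - 1) // 2)
        total += b * ctf(k * f) / k
    return math.pi / 4.0 * total

def true_mtf(f):
    return max(0.0, 1.0 - f)      # toy linear MTF, cutoff at f = 1

def ctf(f):
    # Square-wave response built from the sine-wave MTF (forward series)
    return 4 / math.pi * sum((-1) ** ((j - 1) // 2) * true_mtf(j * f) / j
                             for j in range(1, 200, 2))

print(round(coltman_mtf(ctf, 0.4), 3))  # 0.6, matching true_mtf(0.4)
```

In the paper's setting, `ctf` would come from the measured grating-pattern response rather than a synthetic model.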

  17. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  18. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. At present this must be a custom addition, because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral-density (ND) filtering in front of the theodolite's telescope and then safely view and measure the laser's boresight through the telescope without great risk to one's eyes. This method, workable for a Class II visible-wavelength laser, must not even be attempted for a Class IV laser and is not applicable to an infrared (IR) laser: if one chooses insufficient attenuation or forgets to use the filters, looking at the laser beam through the theodolite could cause instant blindness. The CCD camera used here is a small, inexpensive, commercially available black-and-white circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera on the eyepiece of the specific theodolite's viewing telescope. The other equipment needed to operate the camera comprises power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. An additional advantage of a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light; hence the camera coupled to a theodolite can measure the pointing of an infrared as well as a visible laser.

  19. A CCD camera for guidance of 100-cm balloon-borne far-infrared telescope

    NASA Astrophysics Data System (ADS)

    D'Costa, S. L.; Ghosh, S. K.; Tandon, S. N.

    1991-08-01

    A charge coupled device (CCD) camera using the 488 x 380 element Fairchild CCD 222 imaging device has been developed for guidance of the 100 cm balloon-borne far-infrared telescope. The hardware consists of the imaging device with its associated optics, clock-generation circuitry, clock drivers, an 8086 microprocessor-based system, and power supplies. The software processes the CCD image data and uses a selected star in the field of view as the guide star for pointing the telescope with an accuracy of around 4 arcsec.

  20. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

    The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image onto four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  1. Video inpainting under constrained camera motion.

    PubMed

    Patwardhan, Kedar A; Sapiro, Guillermo; Bertalmío, Marcelo

    2007-02-01

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground; it may occlude one object and be occluded by another. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints: it permits some camera motion, is simple to implement and fast, requires no statistical models of either the background or the foreground, and works well in the presence of rich and cluttered backgrounds, with no visible blurring or motion artifacts in the results. A number of real examples taken with a consumer hand-held camera are shown to support these findings.
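The "priority-based scheme" for copying foreground information is in the spirit of exemplar-based inpainting, where the hole-boundary patch with the highest priority (confidence times data term) is filled first. A minimal sketch of that selection step, with made-up data structures rather than the paper's actual spatiotemporal ones:

```python
def next_patch(boundary):
    """Select the hole-boundary pixel to fill next. `boundary` maps a pixel
    coordinate to (confidence, data_term); priority is their product, as in
    Criminisi-style exemplar inpainting (a simplification of the paper's
    scheme)."""
    return max(boundary, key=lambda p: boundary[p][0] * boundary[p][1])
```

Filling high-priority patches first propagates structure (strong edges, confident neighborhoods) into the hole before texture, which is what makes greedy exemplar copying produce coherent results.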

  2. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby the economic benefit and market competitiveness of the producer. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, raising the quality of cotton textiles, and reducing production cost, and the technology has favorable market value and development prospects. Optical detection systems are in widespread use for this purpose. In our system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to differences in grayscale, and if heterogeneous fibre is present, the computer commands a gas nozzle to eliminate it. In this paper, we adopt a monochrome LED array as the new detection light source; its flicker, stability of luminous intensity, lumen depreciation, and useful life are all superior to those of fluorescent lamps. We first analyse the reflection spectra of cotton and various heterogeneous fibres and then select an appropriate wavelength for the light source, finally adopting a violet LED array. The overall hardware structure and software design are also introduced in this paper.
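The grayscale-difference test described above can be sketched as a per-pixel threshold on each linear-CCD scan line. The threshold values below are illustrative assumptions, not the paper's calibrated levels:

```python
def detect_foreign_fibre(scan_line, cotton_level=200, tolerance=40):
    """Return the pixel indices along one linear-CCD scan line whose gray
    value deviates from the expected cotton level by more than `tolerance`.
    A non-empty result would trigger the gas nozzle downstream."""
    return [i for i, g in enumerate(scan_line)
            if abs(g - cotton_level) > tolerance]
```

In a real system `cotton_level` would be tracked adaptively, since the baseline brightness of the cotton web drifts with density and illumination.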

  3. Sports video categorizing method using camera motion parameters

    NASA Astrophysics Data System (ADS)

    Takagi, Shinichi; Hattori, Shinobu; Yokoyama, Kazumasa; Kodate, Akihisa; Tominaga, Hideyoshi

    2003-06-01

    In this paper, we propose a content-based categorization method for broadcast sports videos using camera motion parameters. We define and introduce two new features: the "camera motion extraction ratio" and the "camera motion transition". Camera motion parameters in a video sequence carry very significant information for categorizing broadcast sports video, because in most sports videos the camera motions closely follow the actions of the sport, which in turn obey rules specific to each type of sport. Based on these characteristics, we design a sports video categorization algorithm for identifying six different major sport types. In our algorithm, the features automatically extracted from videos are analysed statistically. The experimental results show a clear tendency and demonstrate the applicability of the proposed method for sports genre identification.
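One plausible reading of the two features is (i) the fraction of frames in which a camera motion was extracted and (ii) the order in which motion types succeed one another. A sketch under that reading (the label names and exact definitions are our assumptions, not the paper's):

```python
def motion_features(labels):
    """Given per-frame camera-motion labels (e.g. 'still', 'pan', 'zoom'),
    return the camera motion extraction ratio (fraction of non-still frames)
    and the camera motion transition (run-length-collapsed label sequence)."""
    ratio = sum(lab != "still" for lab in labels) / len(labels)
    transition = [labels[0]]
    for lab in labels[1:]:
        if lab != transition[-1]:
            transition.append(lab)
    return ratio, transition
```

Statistics over such features (e.g. typical transition patterns per sport) would then feed the genre classifier.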

  4. A CCD Camera with Electron Decelerator for Intermediate Voltage Electron Microscopy

    SciTech Connect

    Downing, Kenneth H; Downing, Kenneth H.; Mooney, Paul E.

    2008-03-17

    Electron microscopists are increasingly turning to Intermediate Voltage Electron Microscopes (IVEMs) operating at 300 - 400 kV for a wide range of studies. They are also increasingly taking advantage of slow-scan charge coupled device (CCD) cameras, which have become widely used on electron microscopes. Under some conditions CCDs provide an improvement in data quality over photographic film, as well as the many advantages of direct digital readout. However, CCD performance is seriously degraded on IVEMs compared to the more conventional 100 kV microscopes. In order to increase the efficiency and quality of data recording on IVEMs, we have developed a CCD camera system in which the electrons are decelerated to below 100 kV before impacting the camera, resulting in greatly improved performance in both signal quality and resolution compared to other CCDs used in electron microscopy. These improvements will allow high-quality image and diffraction data to be collected directly with the CCD, enabling improvements in data collection for applications including high-resolution electron crystallography, single-particle reconstruction of protein structures, tomographic studies of cell ultrastructure and remote microscope operation. This approach will enable us to use even larger format CCD chips that are being developed with smaller pixels.

  5. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in removing the effect of lens distortion before image matching. Once the cameras have been calibrated, they are used to take video in an indoor environment. The videos are then converted into multiple frame images based on the frame rate. To overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for alignment. A structure from motion (SfM) technique is utilized to obtain the image orientations, and the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition of 4K video is more efficient than that of 12 MP digital images.
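The timer-based alignment step amounts to mapping a common clock time to a frame index in each video, given that camera's start offset. A minimal sketch (the variable names are ours):

```python
def aligned_frame_index(t, fps, start_offset):
    """Frame index, in a video recorded at `fps` frames per second, that
    corresponds to master-clock time `t` (seconds), when this camera
    started recording `start_offset` seconds after the master clock's zero."""
    return round((t - start_offset) * fps)
```

Extracting frames at these aligned indices from each camera yields synchronized multi-view image sets for the SfM step.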

  6. Research on radiometric calibration of interline transfer CCD camera based on TDI working mode

    NASA Astrophysics Data System (ADS)

    Wu, Xing-xing; Liu, Jin-guo

    2010-10-01

    An interline transfer CCD camera can be designed to work in a time delay and integration (TDI) mode, similar to a TDI CCD, to obtain higher responsivity and spatial resolution under poor illumination. However, in laboratory radiometric calibration experiments it was found that the outputs of some pixels were much lower than others' when the interline transfer CCD camera worked in TDI mode; as a result, the photo response non-uniformity (PRNU) and signal-to-noise ratio (SNR) of the system degraded. The mechanism of this phenomenon is analyzed, and improved PRNU and SNR algorithms for the interline transfer CCD camera are proposed to solve the problem: the number of TDI stages is used as a variable in the PRNU and SNR algorithms, and system performance is improved observably with little impact on use. In validation experiments, the improved algorithms were applied to the radiometric calibration of a camera with a KAI-0340 as detector. The results proved that the improved algorithms can effectively improve the SNR and lower the PRNU of the system, while better reflecting its characteristics. Working with 16 TDI stages, the PRNU was reduced from 2.25% to 0.82% and the SNR was improved by about 2%.
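Under the usual flat-field definitions, PRNU and SNR can be computed per TDI-stage setting roughly as follows. This is a generic sketch of the standard definitions; the paper's improved algorithms additionally treat the TDI stage count as an explicit variable:

```python
import statistics

def prnu_percent(flat_field_means):
    """PRNU of a uniformly illuminated frame: standard deviation of the
    per-pixel mean signal divided by the frame mean, in percent."""
    m = statistics.fmean(flat_field_means)
    return 100.0 * statistics.pstdev(flat_field_means) / m

def snr(pixel_samples):
    """SNR of one pixel over repeated identical exposures: temporal mean
    over temporal standard deviation."""
    return statistics.fmean(pixel_samples) / statistics.pstdev(pixel_samples)
```

Evaluating these at each TDI stage setting exposes the stage-dependent response droop that the improved calibration corrects.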

  7. Perfecting the Photometric Calibration of the ACS CCD Cameras

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph C.

    2016-09-01

    Newly acquired data and improved data reduction algorithms mandate a fresh look at the absolute flux calibration of the charge-coupled device cameras on the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS). The goals are to achieve a 1% accuracy and to make this calibration more accessible to the HST guest investigator. Absolute fluxes from the CALSPEC1 database for three primary hot 30,000-60,000K WDs define the sensitivity calibrations for the Wide Field Channel (WFC) and High Resolution Channel (HRC) filters. The external uncertainty for the absolute flux is ˜1%, while the internal consistency of the sensitivities in the broadband ACS filters is ˜0.3% among the three primary WD flux standards. For stars as cool as K type, the agreement with the CALSPEC standards is within 1% at the WFC1-1K subarray position, which achieves the 1% precision goal for the first time. After making a small adjustment to the filter bandpass for F814W, the 1% precision goal is achieved over the full F814W WFC field of view for stars of K type and hotter. New encircled energies and absolute sensitivities replace the seminal results of Sirianni et al. that were published in 2005. After implementing the throughput updates, synthetic predictions of the WFC and HRC count rates for the average of the three primary WD standard stars agree with the observations to 0.1%.

  8. Design principles and applications of a cooled CCD camera for electron microscopy.

    PubMed

    Faruqi, A R

    1998-01-01

    Cooled CCD cameras offer a number of advantages in recording electron microscope images with CCDs rather than on film, including immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity, and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to that of film, and a great deal of our effort has been spent on designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre-optics system. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator) and optical coupling with lenses or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre-optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped for cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two-dimensional crystals of bacteriorhodopsin, from the wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two-dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on-line. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high-resolution images.

  9. High-definition slit-lamp video camera system.

    PubMed

    Yamamoto, Satoru; Manabe, Noriyoshi; Yamamoto, Kenji

    2010-01-01

    Using a high-definition video camera for slit-lamp examination is now possible with the assistance of an adaptor. The authors describe the easy manipulation, convenience of use, and performance of a high-definition slit-lamp video camera system and provide images of eyes that were obtained using the system.

  10. BroCam: a versatile PC-based CCD camera system

    NASA Astrophysics Data System (ADS)

    Klougart, Jens

    1995-03-01

    At the Copenhagen University, we have developed a compact CCD camera system for single and mosaic CCDs. Camera control and data acquisition are performed by a 486-type PC via a frame buffer located in one ISA-bus slot, communicating with the camera electronics over two optical fibers. The PC can run special-purpose DOS programs as well as operate in a more general mode under Linux, a UNIX-like operating system. In the latter mode, standard software packages such as SAOimage and Gnuplot are utilized extensively, thereby reducing the amount of camera-specific software; at the same time the observer feels at ease with the system in an IRAF-like environment. Finally, the Linux version enables the camera to be remotely controlled.

  11. Rotation vector analysis of eye movement in three dimensions with an infrared CCD camera.

    PubMed

    Imai, T; Takeda, N; Morita, M; Koizuka, I; Kubo, T; Miura, K; Nakamae, K; Fujioka, H

    1999-01-01

    We have developed a new technique for analyzing the rotation vector of eye movement in three dimensions with an infrared CCD camera, based on the following four assumptions: (i) the eye rotates about a point; (ii) the pupil edge is a circle; (iii) the distance from the center of eye rotation to the pupil circle remains unchanged despite the rotation; (iv) the image of the eye formed by the CCD camera is projected onto a plane perpendicular to the camera axis. After taking digital images of voluntary circular eye movements, we first constructed a three-dimensional frame of reference fixed on the orbita of the subject, who wore goggles equipped with an infrared CCD camera, and determined the space coordinates of the center of eye rotation, the center of the pupil, and an iris freckle. We then took digital images of the eye movements during a saccade or the vestibulo-ocular reflex (VOR) and analyzed the axis and angle of the eye movements from the trajectories of the center of the pupil and the iris freckle. Finally, Listing's plane for the saccade and the gain and phase of the VOR were obtained. The suitability of this technique is examined.
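The axis and angle of a rotation between two gaze directions follow from their cross and dot products. A minimal two-vector sketch of that step; note it ignores the torsional component about the gaze axis, which is why the method also tracks an iris freckle:

```python
import math

def rotation_axis_angle(v1, v2):
    """Unit rotation axis and angle carrying direction v1 to v2
    (both 3-vectors from the center of eye rotation)."""
    cx = (v1[1] * v2[2] - v1[2] * v2[1],
          v1[2] * v2[0] - v1[0] * v2[2],
          v1[0] * v2[1] - v1[1] * v2[0])
    s = math.sqrt(sum(c * c for c in cx))       # |v1 x v2| = |v1||v2| sin(angle)
    d = sum(a * b for a, b in zip(v1, v2))      # v1 . v2   = |v1||v2| cos(angle)
    axis = tuple(c / s for c in cx) if s else (0.0, 0.0, 0.0)
    return axis, math.atan2(s, d)
```

Using `atan2(s, d)` rather than `acos` keeps the angle numerically stable for nearly parallel directions.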

  12. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the system's equivalent nuclear behavior. This paper discusses one key technique being evaluated for measuring such changes: using a charge-coupled device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. The paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.

  13. Measuring high-resolution sky luminance distributions with a CCD camera.

    PubMed

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated against luminance data measured by a CCD array spectroradiometer. The deviation between the two datasets is less than 10% for cloudless and completely overcast skies, and no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% under cloudless skies and less than 20% under cloudy skies for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial resolution of more than a million pixels and a temporal resolution of 20 s.
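The global-illuminance check follows from the definition of horizontal illuminance as the cosine-weighted integral of luminance over the sky hemisphere, discretized here over sky patches. A schematic of that integration, not the authors' code:

```python
import math

def global_illuminance(patches):
    """E = sum(L * cos(zenith) * solid_angle) over sky patches, where each
    patch is (luminance in cd/m^2, zenith angle in rad, solid angle in sr).
    For an HSI, each pixel of the luminance map is one such patch."""
    return sum(L * math.cos(z) * omega for L, z, omega in patches)
```

Summing over the roughly one million pixels of the HSI luminance map gives the value compared against a global illuminance sensor.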

  14. Measurements of surface roughness: use of a CCD camera to correlate doubly scattered speckle patterns.

    PubMed

    Basano, L; Leporatti, S; Ottonello, P; Palestini, V; Rolandi, R

    1995-11-01

    We describe an instrument, built around a commercial CCD camera and some fast image-processing boards, that evaluates roughness height by measuring the average size of doubly scattered speckle patterns. The device is a variant of a recent proposal that was based on the use of a spatial modulator to perform the Fourier transform of a speckle image. In the present setup, the Fourier transform is replaced by the direct evaluation of a second-order correlation function. Strictly speaking, the device proposed in this paper is not a real-time device but its response time (approximately 10 s) is sufficiently short to be of practical value for many applications. Updated CCD cameras that will significantly improve the performance of our prototype are already on the market.
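The "average size" of a speckle pattern can be estimated from the width of its second-order (auto)correlation. A 1-D pure-Python sketch of that evaluation; the instrument performs the equivalent computation in 2-D on dedicated image-processing boards:

```python
def autocorr_width(signal, level=0.5):
    """Estimate the average feature (speckle) size of an intensity trace as
    the lag at which its normalized autocovariance first drops below `level`."""
    n = len(signal)
    m = sum(signal) / n
    x = [v - m for v in signal]                      # remove the mean
    c0 = sum(v * v for v in x)                       # zero-lag covariance
    for lag in range(1, n):
        c = sum(x[i] * x[i + lag] for i in range(n - lag)) / c0
        if c < level:
            return lag
    return n
```

Coarser speckle decorrelates at larger lags, so this width grows with speckle size and hence encodes the surface roughness being measured.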

  15. Measurements of surface roughness: use of a CCD camera to correlate doubly scattered speckle patterns

    NASA Astrophysics Data System (ADS)

    Basano, Lorenzo; Leporatti, Stefano; Ottonello, Pasquale; Palestini, Valeria; Rolandi, Ranieri

    1995-11-01

    We describe an instrument, built around a commercial CCD camera and some fast image-processing boards, that evaluates roughness height by measuring the average size of doubly scattered speckle patterns. The device is a variant of a recent proposal that was based on the use of a spatial modulator to perform the Fourier transform of a speckle image. In the present setup, the Fourier transform is replaced by the direct evaluation of a second-order correlation function. Strictly speaking, the device proposed in this paper is not a real-time device but its response time (approximately 10 s) is sufficiently short to be of practical value for many applications. Updated CCD cameras that will significantly improve the performance of our prototype are already on the market.

  16. The development of a high-speed 100 fps CCD camera

    SciTech Connect

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, Chuande; Rodricks, B.; Gelbart, A.

    1996-09-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512x512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic faceplate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper describes the electronics, software, and characterizations that have been performed using both visible and x-ray photons.

  17. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  18. Flexible heat pipes for CCD cooling on the Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Schweickart, Russell B.; Buchko, Matthew M.

    1998-08-01

    The Advanced Camera for Surveys (ACS) is an instrument containing two charge-coupled device (CCD) cameras and a multi-anode multi-channel array (MAMA) detector, being built by Ball Aerospace and Technologies Corporation for NASA's Goddard Space Flight Center. The instrument is scheduled to be installed in the Hubble Space Telescope during a space shuttle mission in December of 1999. The CCD detectors need to operate at a temperature below -80 degrees C in order to avoid unacceptable dark current. This cooling is achieved with thermo-electric coolers (TECs) mounted in evacuated assemblies that contain the detectors. Heat generated by the TECs must be dissipated to space. Since the CCD assemblies are centrally located within the instrument enclosure, a method must be provided for transferring this heat to a heat rejection surface. Heat pipes have been selected for this purpose, since they are frequently used in space applications for passively transferring heat from sources to remotely located radiating panels. The alignment of the CCDs is critical, however, so the loads induced through the heat pipes into the detectors and the optical bench containing the sensor assemblies must be minimized. Consequently, the CCD heat pipes have been designed with a flexible section to minimize both thermally generated and launch-induced structural loads. Structural and thermal testing has shown that these heat pipes will allow the ACS detectors to attain their operating temperature while meeting alignment stability requirements. This paper presents the design of and test results from the ACS flexible heat pipes.

  19. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  20. The ARGOS wavefront sensor pnCCD camera for an ELT: characteristics, limitations and applications

    NASA Astrophysics Data System (ADS)

    de Xivry, G. Orban; Ihle, S.; Ziegleder, J.; Barl, L.; Hartmann, R.; Rabien, S.; Soltau, H.; Strueder, L.

    2011-09-01

    From low-order to high-order AO, future wavefront sensors on ELTs require large, fast, low-noise detectors with high quantum efficiency and low dark current. While a detector for a high-order Shack-Hartmann WFS does not exist yet, current CCD technology pushed to its limits already provides several solutions to the ELT AO detector requirements. One of these devices is the new WFS pnCCD camera of ARGOS, the Ground-Layer Adaptive Optics (GLAO) system for LUCIFER at the LBT. Indeed, with its 264x264 pixels, 48 μm pixel size, and 1 kHz frame rate, this camera offers a technological solution to several needs of AO systems for ELTs, such as low-order and possibly also higher-order correction using pyramid wavefront sensing. In this contribution, we present the newly developed WFS pnCCD camera of ARGOS and show how it fulfills the future detector needs of AO on ELTs.

  1. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    NASA Astrophysics Data System (ADS)

    Cheng, Andrew Y. S.; Pau, Michael C. Y.

    1996-09-01

    A high-resolution computer-controlled CCD image capturing system is developed using a 12-bit, 1024×1024-pixel CCD camera and motorized RGB filters to capture images with color depth up to 36 bits. The filters separate the major color components and collect them individually, while the CCD camera maintains the spatial resolution and detector filling factor; the color separation is thus done optically rather than electronically. Operation is simple: objects to be captured, such as color photos, slides, and even x-ray transparencies, are placed under the camera system, and the necessary parameters such as integration time, mixing level, and light intensity are adjusted automatically by an on-line expert system. This greatly reduces the restrictions on what can be captured. This approach saves considerable time in adjusting image quality and gives much more flexibility in manipulating the captured object, even a 3D object, with minimal setup fixtures. In addition, the cross-sectional dimensions of a 3D object can be analyzed by adapting a fiber-optic ring light source, which is particularly useful in non-contact metrology of 3D structures. The digitized information can be stored in an easily transferable format, and users can perform special LUT mappings automatically or manually. Applications of the system include medical image archiving, printing quality control, and 3D machine vision.
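
    The optical colour-separation scheme above amounts to stacking three sequential 12-bit monochrome exposures, one per filter, into a single 36-bit colour image. A minimal sketch of that merging step (the function name and array layout are illustrative assumptions, not from the paper):

    ```python
    import numpy as np

    def merge_rgb_planes(r_plane, g_plane, b_plane, bits=12):
        """Stack three monochrome captures (one per filter) into an RGB cube.

        Each plane is a 2-D array of 12-bit values from the same CCD, taken
        through the R, G, and B filters in turn; the result keeps the full
        36-bit colour depth as three 16-bit channels.
        """
        planes = [np.asarray(p, dtype=np.uint16) for p in (r_plane, g_plane, b_plane)]
        if not all(p.shape == planes[0].shape for p in planes):
            raise ValueError("filtered captures must share the sensor geometry")
        if any(p.max() >= (1 << bits) for p in planes):
            raise ValueError(f"values exceed {bits}-bit range")
        return np.stack(planes, axis=-1)   # shape (H, W, 3), dtype uint16
    ```

    Because the three exposures are sequential, this only works for static scenes, which matches the system's use on photos, slides, and transparencies.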

  2. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol has been implemented at NAOC. It is composed of an interface board carrying an Altera FPGA, 32 MB of SDRAM, and other accessory devices, together with two control software packages running in the Nios II embedded processor and on the remote host PC, respectively. The system replaces a 7200-series image acquisition card inserted in a control and data acquisition PC: it downloads commands to an existing CCD camera and collects image data from the camera to the PC. The embedded chip is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built with the Nios II IDE kits, and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including downloading of the camera commands, initialization of the camera, DMA control, TCP/IP communication, and UDP data uploading.

  3. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  4. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

    To satisfy the need to test the video processors of satellite remote sensing cameras, a simulation and verification system based on dual FPGAs is designed. The correctness of the video processor's FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs form the central unit, and the logic for A/D data generation and data processing is developed in VHDL. An RS-232 interface receives commands from the host computer, and different types of data are generated and output depending on the commands. Experimental results show that the simulation and verification system is flexible and works well, and that it meets the requirements for testing the video processors of several different types of satellite remote sensing cameras.

  5. Thermal modeling of cooled instrument: from the WIRCam IR camera to CCD Peltier cooled compact packages

    NASA Astrophysics Data System (ADS)

    Feautrier, Philippe; Stadler, Eric; Downing, Mark; Hurrell, Steve; Wheeler, Patrick; Gach, Jean-Luc; Magnard, Yves; Balard, Philippe; Guillaume, Christian; Hubin, Norbert; Diaz, José Javier; Suske, Wolfgang; Jorden, Paul

    2006-06-01

    In the past decade, new thermal modelling tools have been offered to system designers, but they have rarely been used for cooled instruments in ground-based astronomy. Alongside the dramatic increase in PC computing power, these tools are now mature enough to drive the design of complex cooled astronomical instruments. This is the case for WIRCam, the new wide-field infrared camera installed on the CFHT on the Mauna Kea summit in Hawaii. This camera uses four 2K×2K Rockwell Hawaii-2RG infrared detectors and includes two optical barrels and two filter wheels. It is mounted at the prime focus of the 3.6 m CFHT telescope, the mass to be cooled is close to 100 kg, and cooling is provided by a Gifford-McMahon closed-cycle cryo-cooler. The capabilities of the I-deas thermal module (TMG) are demonstrated for this application: predicted performance is presented and compared to measurements made after integration on the telescope in December 2004. In addition, we present thermal modelling of small Peltier-cooled CCD packages, including the thermal model of the CCD220 Peltier package (fabricated by e2v technologies) and its cold head. ESO and the OPTICON European network have funded e2v technologies to develop a compact, Peltier-cooled, 8-output, back-illuminated L3Vision CCD. The device will achieve sub-electron read noise at frame rates up to 1.5 kHz. The development, fully dedicated to the latest generation of adaptive optics wavefront sensors, has many unique features. Among them, the ultra-compactness offered by a Peltier package integrated in a small cold head that includes the detector drive electronics is a way to achieve excellent performance for adaptive optics systems. All these models were run on an ordinary PC laptop.
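
    The simplest member of the family of models such tools solve is a lumped, single-node cool-down, dT/dt = -(T - T_sink)/(R·C). The sketch below integrates it with explicit Euler; it is a toy illustration of the modelling idea, not the I-deas/TMG model used by the authors, and the parameter values in the example are invented:

    ```python
    import math

    def cooldown_curve(t_start, t_sink, r_th, c_th, t_end, dt=1.0):
        """Lumped-capacitance cool-down of a mass coupled to a cold sink.

        dT/dt = -(T - T_sink) / (R_th * C_th), integrated with explicit Euler.
        R_th [K/W] models the conductive link to the cryo-cooler (or Peltier)
        and C_th [J/K] the heat capacity of the cooled mass.
        """
        tau = r_th * c_th                 # thermal time constant [s]
        temps, temp, t = [t_start], t_start, 0.0
        while t < t_end:
            temp += -(temp - t_sink) / tau * dt
            temps.append(temp)
            t += dt
        return temps                      # temperature history, one entry per step
    ```

    A real instrument model couples many such nodes through conduction and radiation links, which is exactly what the finite-element thermal packages automate.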

  6. Fused Six-Camera Video of STS-134 Launch

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  7. Station Cameras Capture New Videos of Hurricane Katia

    NASA Video Gallery

    Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

  8. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  9. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  10. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done on the management of "structured" video such as movies, sports, and television programs, which have known scene structures, "unstructured" video analysis is still a challenging problem because of its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how camera motion information can be used for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames (keyframes) for the scenes are determined and aggregated to summarize the video sequence.
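
    One simple way to turn per-frame global motion magnitudes into keyframes, in the spirit of the motion-based segmentation described above: treat contiguous low-motion runs as the camera dwelling on a scene and keep their middle frame. The thresholding rule and function name here are illustrative assumptions, not the authors' algorithm:

    ```python
    def select_keyframes(motion_mags, still_thresh=1.0):
        """Pick one representative frame per low-motion segment.

        motion_mags: per-frame global displacement magnitudes (pixels/frame),
        e.g. medians of the motion displacement vectors.  Contiguous runs
        below still_thresh are treated as the camera dwelling on a scene,
        and the middle frame index of each run is returned as a keyframe.
        """
        keyframes, start = [], None
        for i, m in enumerate(motion_mags):
            if m < still_thresh and start is None:
                start = i                              # a still segment begins
            elif m >= still_thresh and start is not None:
                keyframes.append((start + i - 1) // 2)  # middle of [start, i-1]
                start = None
        if start is not None:                           # segment runs to the end
            keyframes.append((start + len(motion_mags) - 1) // 2)
        return keyframes
    ```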

  11. Optical readout of a two phase liquid argon TPC using CCD camera and THGEMs

    NASA Astrophysics Data System (ADS)

    Mavrokoridis, K.; Ball, F.; Carroll, J.; Lazos, M.; McCormick, K. J.; Smith, N. A.; Touramanis, C.; Walker, J.

    2014-02-01

    This paper presents a preliminary study of the use of CCDs to image secondary scintillation light generated by THick Gas Electron Multipliers (THGEMs) in a two-phase LAr TPC. A Sony ICX285AL CCD chip was mounted above a double THGEM in the gas phase of a 40-litre two-phase LAr TPC, with the majority of the camera electronics positioned externally via a feedthrough. An Am-241 source was mounted on a rotatable motion feedthrough, allowing the alpha source to be positioned either inside or outside of the field cage. A novel high-voltage feedthrough featuring LAr insulation was developed for and incorporated into the TPC design. Furthermore, a range of webcams was tested for cryogenic operation as an internal detector monitoring tool; of these, the Microsoft HD-3000 (model no. 1456) was found to be superior in terms of noise and lowest operating temperature. In 1 ppm-purity argon gas at ambient temperature and atmospheric pressure, the THGEM gain was ≈1000, and with a 1 ms exposure the CCD captured single alpha tracks. Successful operation of the CCD camera in two-phase cryogenic mode was also achieved: using a 10 s exposure, a photograph of secondary scintillation light induced by the Am-241 source in LAr was captured for the first time.

  12. Numerical simulations and analyses of temperature control loop heat pipe for space CCD camera

    NASA Astrophysics Data System (ADS)

    Meng, Qingliang; Yang, Tao; Li, Chunlin

    2016-10-01

    As one of the key units of a space CCD camera, the temperature range and stability of the CCD components affect the image quality indexes, so reasonable thermal design and robust thermal control devices are needed. A temperature control loop heat pipe (TCLHP) is designed that meets the thermal control requirements of the CCD components well. In order to study the dynamic heat and mass transfer behavior of the TCLHP, particularly in the orbital flight case, a transient numerical model is developed using well-established empirical correlations for flow models within three-dimensional thermal modeling. The temperature control principle and the details of the mathematical model are presented. The model is used to study the operating state and the flow and heat characteristics, based on analyses of the variations of temperature, pressure, and quality under different operating modes and external heat flux variations. The results indicate that the TCLHP satisfies the thermal control requirements of the CCD components well and always ensures good temperature stability and uniformity. Comparison between flight data and simulated results shows the model to be accurate to within 1 °C. The model can thus be used to predict and understand the transient performance of the TCLHP.

  13. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Astrophysics Data System (ADS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-08-01

    A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range being constructed to support the development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
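
    The core measurement, locating a hole in the digitized target image, can be sketched as an intensity-weighted centroid of the dark pixels. This is an illustrative stand-in (threshold rule, weighting, and function name are assumptions), not the paper's actual algorithm:

    ```python
    import numpy as np

    def hole_centroid(image, thresh):
        """Locate a bullet hole as the intensity-weighted centroid of the
        pixels darker than `thresh` (the hole images dark against a lit target).

        Returns (row, col) in pixel units; multiplying by the calibrated
        pixel pitch would convert to physical units at the target plane.
        """
        mask = image < thresh
        if not mask.any():
            raise ValueError("no hole pixels below threshold")
        rows, cols = np.nonzero(mask)
        weights = thresh - image[mask].astype(float)   # darker pixels weigh more
        return (float(np.average(rows, weights=weights)),
                float(np.average(cols, weights=weights)))
    ```

    Centroiding over many pixels is what makes sub-pixel (micron-level at the target) repeatability plausible even with a modest video sensor.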

  14. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range being constructed to support the development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  15. Design and realization of an image mosaic system on the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Wang, Peng; Zhu, Hai bin; Li, Yan; Zhang, Shao jun

    2015-08-01

    Stitching multi-route images into a panoramic image in real time has long been difficult in aerial photography for multi-route flight framing CCD cameras, which produce very large amounts of data under high accuracy requirements. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. Parallel computation of the SIFT feature extraction and matching module is achieved using CUDA technology for motion-model parameter estimation, which makes it possible to stitch multiple CCD images in real time. Aerial tests showed that the mosaic system meets the user's requirements, with 99% accuracy and a 30- to 50-fold speed improvement over a conventional mosaic system.
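
    A mosaic pipeline of this kind chains pairwise registrations along the flight route. As a hedged illustration of one such registration step, here is a translation estimate by phase correlation (a simpler alternative to the paper's SIFT/CUDA matcher, which also handles rotation and scale):

    ```python
    import numpy as np

    def estimate_shift(img_a, img_b):
        """Estimate the integer (row, col) translation of img_b relative to
        img_a by phase correlation -- one pairwise registration step of the
        kind a mosaic pipeline chains across overlapping frames.
        """
        fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross = np.conj(fa) * fb
        cross /= np.abs(cross) + 1e-12          # keep phase only
        corr = np.fft.ifft2(cross).real         # delta peak at the shift
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shifts = [p if p <= s // 2 else p - s   # wrap to signed shifts
                  for p, s in zip(peak, corr.shape)]
        return tuple(shifts)
    ```

    In a full mosaic, each estimated offset feeds a global motion-model fit before the frames are warped and blended.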

  16. Development of the analog ASIC for multi-channel readout X-ray CCD camera

    NASA Astrophysics Data System (ADS)

    Nakajima, Hiroshi; Matsuura, Daisuke; Idehara, Toshihiro; Anabuki, Naohisa; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Katayama, Haruyoshi; Kitamura, Hisashi; Uchihori, Yukio

    2011-03-01

    We report on the performance of an analog application-specific integrated circuit (ASIC) developed for the front-end electronics of the X-ray CCD camera system onboard the next X-ray astronomical satellite, ASTRO-H. It has four identical channels that process the CCD signals simultaneously, and its on-chip analog-to-digital conversion makes it possible to build a CCD camera body that outputs only digital signals. In front-end electronics tests it works properly, with a low input noise of ≤30 μV at pixel rates below 100 kHz and a sufficiently low power consumption of ~150 mW/chip. The input signal range of ±20 mV covers the effective energy range of a typical X-ray photon-counting CCD (up to 20 keV). The integral non-linearity is 0.2%, similar to that of conventional CCDs in orbit. We also performed radiation tolerance tests against the total ionizing dose (TID) effect and single event effects. Irradiation tests using 60Co and a proton beam showed that the ASIC tolerates a TID of up to 200 krad, which far exceeds the dose expected during operation in a low-inclination low-earth orbit. Irradiation with Fe ions to a fluence of 5.2×10⁸ ions/cm² produced no single event latchup (SEL), although there were some possible single event upsets. The SEL threshold is higher than 1.68 MeV cm²/mg, high enough that SEL should not be a major cause of instrument downtime in orbit.
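
    The non-linearity figure quoted above is conventionally computed as the maximum deviation of the measured transfer curve from a best-fit straight line, expressed as a fraction of the output span. A sketch of that computation (not the authors' test code):

    ```python
    def integral_nonlinearity(inputs, outputs):
        """Integral non-linearity of a measured transfer curve, as a fraction
        of full scale: max deviation from the least-squares best-fit line,
        divided by the output span.
        """
        n = len(inputs)
        mx = sum(inputs) / n
        my = sum(outputs) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
                 / sum((x - mx) ** 2 for x in inputs))
        intercept = my - slope * mx
        span = max(outputs) - min(outputs)
        dev = max(abs(y - (slope * x + intercept))
                  for x, y in zip(inputs, outputs))
        return dev / span
    ```

    Multiplying the result by 100 gives the percentage figure of merit used in the abstract.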

  17. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  18. CQUEAN: New CCD Camera System For The Otto Struve Telescope At The McDonald Observatory

    NASA Astrophysics Data System (ADS)

    Pak, Soojong; Park, W.; Im, M.

    2012-01-01

    We describe the overall characteristics and performance of an optical CCD camera system, the Camera for QUasars in EArly uNiverse (CQUEAN), which has been in use at the 2.1 m Otto Struve Telescope of the McDonald Observatory since August 2010. CQUEAN was developed for follow-up imaging observations of near-infrared-bright sources such as high-redshift quasar candidates (z > 4.5), gamma-ray bursts, brown dwarfs, and young stellar objects. For efficient observation of these red objects, CQUEAN has a science camera with a deep-depletion CCD chip. By employing an auto-guiding system and a focal reducer to enhance the field of view at the classical Cassegrain focus, we achieved stable guiding in 20-minute exposures, image quality of FWHM ~ 0.6 arcsec over the whole field (4.8 × 4.8 arcmin), and a limiting magnitude of z = 23.4 AB mag at 5σ with one hour of integration.

  19. Experimental research on femto-second laser damaging array CCD cameras

    NASA Astrophysics Data System (ADS)

    Shao, Junfeng; Guo, Jin; Wang, Ting-feng; Wang, Ming

    2013-05-01

    Charge-coupled devices (CCDs) are widely used in military and security applications, such as airborne and ship-based surveillance and satellite reconnaissance, and homeland security requires effective means to negate these advanced overseeing systems. Research shows that CCD-based EO systems can be significantly dazzled or even damaged by high-repetition-rate pulsed lasers. Here we report on femtosecond laser interaction with a CCD camera, which is likely to be of great importance in the future. Femtosecond lasers are a comparatively new class of lasers with unique characteristics: extremely short pulse width (1 fs = 10⁻¹⁵ s), extremely high peak power (1 TW = 10¹² W), and distinctive behavior when interacting with matter. Research on femtosecond laser interaction with materials (metals, dielectrics) clearly indicates that non-thermal effects dominate the process, in marked contrast to long-pulse interactions with matter. First, damage threshold tests were performed with the femtosecond laser acting on the CCD camera. An 800 nm, 500 μJ, 100 fs laser pulse was used to irradiate an interline CCD solid-state image sensor. To focus the laser energy onto the tiny CCD active cells, an F/5.6 optical system was used, and Sony CCDs were chosen as typical targets. The damage threshold was evaluated from multiple test data. Point damage, line damage, and full-array damage were observed as the irradiated pulse energy was continuously increased during the experiment. The point damage threshold was found to be 151.2 mJ/cm², the line damage threshold 508.2 mJ/cm², and the full-array damage threshold 5.91 J/cm². Although the phenomenology is almost the same as for nanosecond laser interaction with CCDs, these damage thresholds are substantially lower than the data obtained from nanosecond laser interaction with CCDs. The electrical characteristics after different degrees of damage were then tested with electronic multi
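
    The thresholds reported above are compared against the incident fluence, i.e. pulse energy divided by focal-spot area. A small sketch of that bookkeeping, using the three threshold values from the abstract (the helper names, and any spot size you feed in, are illustrative):

    ```python
    import math

    def fluence_j_per_cm2(pulse_energy_j, spot_diameter_cm):
        """Laser fluence on target: pulse energy over the focal-spot area."""
        area = math.pi * (spot_diameter_cm / 2.0) ** 2
        return pulse_energy_j / area

    def damage_class(fluence, point=0.1512, line=0.5082, full=5.91):
        """Map a fluence (J/cm^2) to the damage regimes reported in the
        abstract: 151.2 mJ/cm^2 (point), 508.2 mJ/cm^2 (line), 5.91 J/cm^2
        (full array)."""
        if fluence >= full:
            return "full-array damage"
        if fluence >= line:
            return "line damage"
        if fluence >= point:
            return "point damage"
        return "no damage"
    ```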

  20. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  1. Upwelling radiance at 976 nm measured from space using the OPALS CCD camera on the ISS

    NASA Astrophysics Data System (ADS)

    Biswas, Abhijit; Kovalik, Joseph M.; Oaida, Bogdan V.; Abrahamson, Matthew; Wright, Malcolm W.

    2015-03-01

    The Optical Payload for Lasercomm Science (OPALS) Flight System on board the International Space Station uses a charge-coupled device (CCD) camera to detect a beacon laser from Earth. Relative measurements of the background contributed by upwelling radiance under diverse illumination conditions and over varying surface terrain are presented. In some cases, clouds in the field of view allowed a comparison of terrestrial and cloud-top upwelling radiance. In this paper we report these measurements and examine the extent of their agreement with atmospheric model predictions.

  2. Optical characterization of the SOFIA telescope using fast EM-CCD cameras

    NASA Astrophysics Data System (ADS)

    Pfüller, Enrico; Wolf, Jürgen; Hall, Helen; Röser, Hans-Peter

    2012-09-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has recently demonstrated its scientific capabilities in a first series of astronomical observing flights. In parallel, special measurement and engineering flights were conducted aimed at the characterization and commissioning of the telescope and the complete airborne observatory. To support the characterization measurements, two commercial Andor iXon EM-CCD cameras have been used: a DU-888, dubbed the Fast Diagnostic Camera (FDC), running at frame rates up to about 400 fps, and a DU-860 as a Super Fast Diagnostic Camera (SFDC) providing 2000 fps. Both cameras have been mounted to the telescope's Focal Plane Imager (FPI) flange in lieu of the standard FPI tracking camera. Their fast image sequences have been used to analyze and improve the telescope's pointing stability, especially to help tune the active mass dampers that suppress eigenfrequencies in the telescope system, to characterize and optimize the chopping secondary mirror, and to investigate the structure and behavior of the shear layer that forms over the open telescope cavity in flight. In June 2011, a collaboration between the HIPO science instrument team, MIT's stellar occultation group, and the FDC team led to the first SOFIA observation of a stellar occultation by the dwarf planet Pluto over the Pacific.

  3. The Laboratory Radiometric Calibration of the CCD Stereo Camera for the Optical Payload of the Lunar Explorer Project

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Li, Chun-Lai; Zhao, Bao-Chang

    2007-03-01

    The optical payload system of the Lunar Explorer includes a CCD stereo camera and an imaging interferometer; the former, together with a laser altimeter, is designed to obtain solid (stereo) images of the lunar surface. The camera's working principle, the purpose and content of the calibration, bare-chip detection, and the process of relative and absolute calibration in the laboratory are introduced.

  4. Controlled Impact Demonstration (CID) tail camera video

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  5. A rehabilitation training system with double-CCD camera and automatic spatial positioning technique

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Wei, Tzu-Chi; Lu, An-Tsung; Hung, San-Shan; Chen, Wei-Lung; Chang, Chia-Chang

    2011-03-01

    This study developed a computer game for a machine-vision-integrated rehabilitation training system. The main function of the system is to allow users to perform hand grasp-and-place movements through machine vision integration. Images are captured by a double-CCD camera and then positioned on a large screen. After defining the right, left, upper, and lower boundaries of the captured images, an automatic spatial positioning technique is employed to obtain their correlation functions, and lookup tables are defined for the cameras. The system provides rehabilitation courses and games that let users exercise grasp-and-place movements in order to improve their upper-limb movement control, trigger trunk control, and support balance training.

  6. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation"; this leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typically T ≈ -40 °C, p ≈ 0.1 atm), and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited to these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial-grade parts, various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera that generates all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and at a pressure equivalent to 50,000 ft altitude. In this paper we report on the development of the camera and present results from the environmental testing.

  7. HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.

    PubMed

    Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

    2010-07-01

    The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes and it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performances of the cameras for imaging the solar corona.

  8. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations, with no evidence for lingering systematics.
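
    A synthetic magnitude of the kind described is obtained by integrating a reference star's spectrum against the camera's spectral response. A minimal sketch using trapezoid-rule integration (the function signature and the band-averaged zero-point convention are assumptions for illustration, not the MEO pipeline):

    ```python
    import math

    def synthetic_magnitude(wavelengths, flux, response, zero_point_flux):
        """Synthetic magnitude of a star in an instrument bandpass.

        Integrates the stellar spectrum `flux` weighted by the camera's
        spectral `response` (all sampled on `wavelengths`, trapezoid rule)
        and converts the band-averaged flux to a magnitude against a
        zero-point band-averaged flux.
        """
        def trapz(y):
            return sum((y[i] + y[i + 1]) * (wavelengths[i + 1] - wavelengths[i]) / 2.0
                       for i in range(len(wavelengths) - 1))
        band_flux = trapz([f * r for f, r in zip(flux, response)]) / trapz(response)
        return -2.5 * math.log10(band_flux / zero_point_flux)
    ```

    With magnitudes computed directly in the camera band, no colour-term transformation from catalog bandpasses is needed, which is the point of the approach above.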

  9. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of a brain artery by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we utilized the CCD-camera-based technique to image embedded inclusions using tissue-phantom experimental data. We obtained good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). We demonstrate the volumetric tomographic reconstructions taken from tissue phantoms; the latter method has great potential for determining and monitoring the effect of anti-stroke therapies.

  10. A toolkit for the characterization of CCD cameras for transmission electron microscopy.

    PubMed

    Vulovic, M; Rieger, B; van Vliet, L J; Koster, A J; Ravelli, R B G

    2010-01-01

    Charge-coupled devices (CCD) are nowadays commonly utilized in transmission electron microscopy (TEM) for applications in life sciences. Direct access to digitized images has revolutionized the use of electron microscopy, sparking developments such as automated collection of tomographic data, focal series, random conical tilt pairs and ultralarge single-particle data sets. Nevertheless, for ultrahigh-resolution work photographic plates are often still preferred. In the ideal case, the quality of the recorded image of a vitrified biological sample would solely be determined by the counting statistics of the limited electron dose the sample can withstand before beam-induced alterations dominate. Unfortunately, the image is degraded by the non-ideal point-spread function of the detector, as a result of a scintillator coupled by fibre optics to a CCD, and the addition of several inherent noise components. Different detector manufacturers provide different types of figures of merit when advertising the quality of their detector. It is hard for most laboratories to verify whether all of the anticipated specifications are met. In this report, a set of algorithms is presented to characterize on-axis slow-scan large-area CCD-based TEM detectors. These tools have been added to a publicly available image-processing toolbox for MATLAB. Three in-house CCD cameras were carefully characterized, yielding, among others, statistics for hot and bad pixels, the modulation transfer function, the conversion factor, the effective gain and the detective quantum efficiency. These statistics will aid data-collection strategy programs and provide prior information for quantitative imaging. The relative performance of the characterized detectors is discussed and a comparison is made with similar detectors that are used in the field of X-ray crystallography.
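
One statistic such a toolkit reports, the conversion factor (e-/ADU), is classically estimated with the photon-transfer (mean-variance) method from a pair of flat-field exposures. The paper's toolbox is MATLAB; the following is an independent Python sketch on simulated flats, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conversion_factor(flat1, flat2, bias_level=0.0):
    """Photon-transfer (mean-variance) estimate of the conversion
    factor in e-/ADU. Differencing two identical flats cancels
    fixed-pattern noise; for shot-noise-limited data,
    gain = mean_signal / per-frame variance."""
    mean_signal = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
    diff = flat1.astype(float) - flat2.astype(float)
    variance = diff.var() / 2.0   # per-frame variance, FPN removed
    return mean_signal / variance

# Simulated flats: true gain 1.5 e-/ADU, 10000 e- mean illumination
true_gain, electrons = 1.5, 10000.0
flat_a = rng.poisson(electrons, size=(256, 256)) / true_gain
flat_b = rng.poisson(electrons, size=(256, 256)) / true_gain
gain_estimate = conversion_factor(flat_a, flat_b)   # close to 1.5
```

Because Poisson variance equals the mean in electrons, the ratio of mean to variance in ADU recovers the gain to within the statistical error of a 256 x 256 frame pair (about one percent here).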

  11. Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras

    NASA Technical Reports Server (NTRS)

    Joiner, Reyann; Kobayashi, Ken; Winebarger, Amy; Champey, Patrick

    2014-01-01

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument currently being developed by NASA's Marshall Space Flight Center (MSFC), the National Astronomical Observatory of Japan (NAOJ), and other partners. The goal of this instrument is to observe and detect the Hanle effect in the scattered Lyman-Alpha UV (121.6nm) light emitted by the Sun's chromosphere. The polarized spectrum imaged by the CCD cameras will capture information about the local magnetic field, allowing for measurements of magnetic strength and structure. In order to make accurate measurements of this effect, the performance characteristics of the three on- board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include: quantum efficiency, gain, dark current, read noise, and linearity. Each of these must meet predetermined requirements in order to achieve satisfactory performance for the mission. The cameras must be able to operate with a gain of 2.0+/- 0.5 e--/DN, a read noise level less than 25e-, a dark current level which is less than 10e-/pixel/s, and a residual non- linearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.
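
The pass/fail screening implied by these requirements is easy to express in code; the measured values below are invented for illustration and are not CLASP test results:

```python
# The CLASP requirements quoted above: gain 2.0 +/- 0.5 e-/DN,
# read noise < 25 e-, dark current < 10 e-/pixel/s,
# residual non-linearity < 1%.
REQUIREMENTS = {
    "gain_e_per_dn":    lambda v: 1.5 <= v <= 2.5,
    "read_noise_e":     lambda v: v < 25.0,
    "dark_current_e_s": lambda v: v < 10.0,
    "nonlinearity_pct": lambda v: v < 1.0,
}

def check_camera(measured):
    """Return the list of requirements a camera fails (empty = accept)."""
    return [name for name, ok in REQUIREMENTS.items() if not ok(measured[name])]

# Hypothetical vacuum-chamber measurements for one flight camera
camera = {"gain_e_per_dn": 2.1, "read_noise_e": 18.0,
          "dark_current_e_s": 4.2, "nonlinearity_pct": 0.6}
failures = check_camera(camera)
```

A camera is accepted only when `failures` comes back empty; any entry names the specific requirement that was missed.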

  12. Remote control video cameras on a suborbital rocket

    NASA Astrophysics Data System (ADS)

    Wessling, Francis C., Dr.

    1997-01-01

    Three video cameras were controlled in real time from the ground to a sub-orbital rocket during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the analog video signal to be sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras also were recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  13. Remote control video cameras on a suborbital rocket

    SciTech Connect

    Wessling, Francis C.

    1997-01-10

    Three video cameras were controlled in real time from the ground to a sub-orbital rocket during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the analog video signal to be sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras also were recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  14. Soft x-ray response of the x-ray CCD camera directly coated with optical blocking layer

    NASA Astrophysics Data System (ADS)

    Ikeda, S.; Kohmura, T.; Kawai, K.; Kaneko, K.; watanabe, T.; Tsunemi, H.; Hayashida, K.; Anabuki, N.; Nakajima, H.; Ueda, S.; Tsuru, T. G.; Dotani, T.; Ozaki, M.; Matsuta, K.; Fujinaga, T.; Kitamoto, S.; Murakami, H.; Hiraga, J.; Mori, K.; ASTRO-H SXI Team

    2012-03-01

    We have developed a back-illuminated X-ray CCD camera (BI-CCD) to observe X-rays in space. Because an X-ray CCD is sensitive not only to X-rays but also to optical and UV light, it must be equipped with a filter to cut off both. The X-ray Imaging Spectrometer (XIS) onboard the Suzaku satellite was equipped with a thin film (OBF: Optical Blocking Filter) to cut off optical and UV light. However, an OBF is always at risk of tearing from acoustic loads or vibration during launch, and its thinness makes it difficult to handle on the ground. Instead of an OBF, we have newly developed and produced an OBL (Optical Blocking Layer), which is coated directly onto the X-ray CCD surface.

  15. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
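
The closed-form solution for two orthogonally aligned cameras can be illustrated with a toy model; the geometry and scale factors below are assumptions for illustration, not the paper's calibrated solution:

```python
import numpy as np

def triangulate_orthogonal(uv_front, uv_side, scale_front=1.0, scale_side=1.0):
    """Closed-form 3-D position from two orthogonal views (toy model).

    Assumed geometry: the front camera images the x-y plane, the side
    camera the z-y plane; y is seen by both cameras and is averaged.
    A real system applies a full 3-D camera calibration first; the
    scale factors stand in for that step."""
    x = uv_front[0] * scale_front
    z = uv_side[0] * scale_side
    y = 0.5 * (uv_front[1] * scale_front + uv_side[1] * scale_side)
    return np.array([x, y, z])

def velocity(p_t0, p_t1, dt):
    """3-D velocity of a tracked tracer particle between two frames."""
    return (p_t1 - p_t0) / dt

# The same particle matched in both views in two consecutive frames
p0 = triangulate_orthogonal((2.0, 3.0), (5.0, 3.0))
p1 = triangulate_orthogonal((2.5, 3.0), (5.0, 3.2))
v = velocity(p0, p1, dt=0.1)
```

Repeating this for every matched particle in every frame pair yields the full-field 3-D velocity profile the technique provides.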

  16. Characterisation of the signal and noise transfer of CCD cameras for electron detection.

    PubMed

    Meyer, R R; Kirkland, A I

    2000-05-01

    Methods to characterise the performance of CCD cameras for electron detection are investigated with particular emphasis on the difference between the transfer of signal and noise. Similar to the modulation transfer function (MTF), which describes the spatial-frequency-dependent attenuation of contrast in the image, we introduce a noise transfer function (NTF) that describes the transfer of the Poisson noise that is inevitably present in any electron image. A general model for signal and noise transfer by an image converter is provided. This allows the calculation of the MTF and NTF from Monte Carlo simulations of the trajectories of electrons and photons in the scintillator and the optical coupling of the camera. Furthermore, accurate methods to measure the modulation and noise transfer functions experimentally are presented. The spatial-frequency-dependent detection quantum efficiency (DQE), an important figure of merit of the camera which has so far not been measured experimentally, can be obtained from the measured MTF and NTF. The experimental results are in good agreement with the simulations and show that the NTF at high spatial frequencies is in some cases a factor of four higher than the MTF. This implies that the noise method, which is frequently used to measure the MTF but in fact measures the NTF, gives over-optimistic results. Furthermore, the spatial-frequency-dependent DQE is lower than previously assumed.
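
Given measured MTF and NTF curves, the frequency-dependent DQE follows directly. This sketch assumes the standard relation DQE(f) = DQE(0) * MTF(f)^2 / NTF(f)^2 and uses invented curve values, not the paper's measurements:

```python
import numpy as np

def dqe(mtf, ntf, dqe0=1.0):
    """Frequency-dependent DQE from measured MTF and NTF, assuming
    DQE(f) = DQE(0) * MTF(f)^2 / NTF(f)^2 (dqe0 is the zero-frequency
    normalization, taken as 1 here)."""
    mtf, ntf = np.asarray(mtf, float), np.asarray(ntf, float)
    return dqe0 * (mtf / ntf) ** 2

# Invented curves in which noise transfers better than signal at high
# spatial frequency (NTF > MTF), as the measurements above report:
mtf = np.array([1.0, 0.6, 0.2, 0.05])
ntf = np.array([1.0, 0.8, 0.5, 0.20])
d = dqe(mtf, ntf)   # falls well below the naive MTF-only estimate
```

Whenever NTF exceeds MTF, the ratio drops below one, which is exactly why an MTF inferred from the noise method looks over-optimistic.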

  17. Voice Controlled Stereographic Video Camera System

    NASA Astrophysics Data System (ADS)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.

  18. Evaluation of scanners and CCD cameras for high-resolution TEM of protein crystals and single particles.

    PubMed

    Hesse, J; Hebert, H; Koeck, P J

    2000-05-01

    The modulation transfer function (MTF) and the geometric errors of two flatbed scanners, a slow-scan CCD (SSC) camera and film, have been measured and compared. The geometric errors of the SSC camera and film have been measured using diffraction spots from a lipid crystal. The SSC camera was shown to have the smallest geometric errors while film had the best MTF. Even though film had the best MTF, this is significantly reduced when scanning the film, so that the MTF of the film and scanner combined are comparable to the MTF of the SSC camera.

  19. Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

    2005-12-01

    "ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

  20. Retrieval of the optical depth using an all-sky CCD camera.

    PubMed

    Olmo, Francisco J; Cazorla, Alberto; Alados-Arboledas, Lucas; López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier

    2008-12-01

    A new method is presented for retrieval of the aerosol and cloud optical depth using a CCD camera equipped with a fish-eye lens (all-sky imager system). In a first step, the proposed method retrieves the spectral radiance from sky images acquired by the all-sky imager system using a linear pseudoinverse algorithm. Then, the aerosol or cloud optical depth at 500 nm is obtained as that which minimizes the residuals between the zenith spectral radiance retrieved from the sky images and that estimated by the radiative transfer code. The method is tested under extreme situations including the presence of nonspherical aerosol particles. The comparison of optical depths derived from the all-sky imager with those retrieved with a sunphotometer operated side by side shows differences similar to the nominal error claimed in the aerosol optical depth retrievals from sunphotometer networks.
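
The second step of the retrieval (choosing the optical depth that minimizes the residuals against a radiative-transfer calculation) can be sketched with a toy forward model standing in for the RT code; the exponential form and grid are assumptions for illustration:

```python
import numpy as np

def retrieve_optical_depth(measured_radiance, tau_grid, forward_model):
    """Grid search: return the optical depth whose modeled zenith
    radiance best matches the image-derived radiance (least squares).
    `forward_model(tau)` stands in for the radiative transfer code."""
    residuals = [np.sum((forward_model(tau) - measured_radiance) ** 2)
                 for tau in tau_grid]
    return tau_grid[int(np.argmin(residuals))]

# Toy forward model: zenith radiance decays exponentially with optical
# depth (an assumed form, not the paper's radiative transfer code)
model = lambda tau: 100.0 * np.exp(-tau)
tau_grid = np.linspace(0.0, 2.0, 201)
tau_hat = retrieve_optical_depth(model(0.5), tau_grid, model)
```

In the real method the measured radiance comes from the pseudoinverse retrieval applied to the sky image, and the minimization is run at 500 nm for either the aerosol or the cloud case.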

  1. Development of X-ray CCD camera based X-ray micro-CT system

    NASA Astrophysics Data System (ADS)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously ON and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow like pattern in the image known as smear whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been incorporated with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality and hence the same is reflected in the reconstructed images.

  2. Development of X-ray CCD camera based X-ray micro-CT system.

    PubMed

    Sarkar, Partha S; Ray, N K; Pal, Manoj K; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y; Sinha, A; Gadkari, S C

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously ON and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow like pattern in the image known as smear whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been incorporated with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality and hence the same is reflected in the reconstructed images.
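
The synchronized-shutter scheme described above can be sketched as a control sequence; the `Shutter` and `XRayCCD` interfaces are placeholders standing in for the authors' hardware, not a real API:

```python
events = []   # records the command sequence for illustration

class Shutter:
    """Placeholder for the motorized shutter in the X-ray beam path."""
    def open(self):  events.append("shutter_open")
    def close(self): events.append("shutter_close")

class XRayCCD:
    """Placeholder CCD interface; the beam must be blocked during readout."""
    def expose(self, seconds): events.append("expose")
    def readout(self):
        events.append("readout")
        return "frame"

def acquire_projection(shutter, ccd, exposure_s):
    shutter.open()           # beam reaches the detector only while integrating
    ccd.expose(exposure_s)
    shutter.close()          # block the beam before charge is shifted
    return ccd.readout()     # no photons arrive during readout -> no smear

frame = acquire_projection(Shutter(), XRayCCD(), exposure_s=0.5)
```

The essential invariant is the ordering: the shutter closes before readout begins, so the vertical shift registers never see the beam and the smear pattern disappears.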

  3. 0.25mm-thick CCD packaging for the Dark Energy Survey Camera array

    SciTech Connect

    Derylo, Greg; Diehl, H.Thomas; Estrada, Juan; /Fermilab

    2006-06-01

    The Dark Energy Survey Camera focal plane array will consist of 62 2k x 4k CCDs with a pixel size of 15 microns and a silicon thickness of 250 microns for use at wavelengths between 400 and 1000 nm. Bare CCD die will be received from the Lawrence Berkeley National Laboratory (LBNL). At the Fermi National Accelerator Laboratory, the bare die will be packaged into a custom back-side-illuminated module design. Cold probe data from LBNL will be used to select the CCDs to be packaged. The module design utilizes an aluminum nitride readout board and spacer and an Invar foot. A module flatness of 3 microns over small (1 sq cm) areas and less than 10 microns over neighboring areas on a CCD is required for uniform images over the focal plane. A confocal chromatic inspection system is being developed to precisely measure flatness over a grid up to 300 x 300 mm. This system will be utilized to inspect not only room-temperature modules, but also cold individual modules and partial arrays through flat dewar windows.

  4. Toward Dietary Assessment via Mobile Phone Video Cameras

    PubMed Central

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-01-01

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases. PMID:21346950

  5. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  6. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  7. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  8. Alignment method of optical registration for multi-channel CCD camera

    NASA Astrophysics Data System (ADS)

    Xin, Jia; Yue, Guo

    2016-10-01

    Mapping satellites use multi-chip CCD assembly technology to meet precision landscape-positioning requirements: a single CCD cannot satisfy the field-of-view and resolution demands of a modern optical system, and meeting them with one device would require high cost and special technology. To apply a space camera to measurement over a large field of view at high resolution, the technology of optically assembling several CCDs is discussed and a reflector-based butting system is adopted. Aiming at the vignetting and the decline in modulation transfer function caused by butting, a reflector-based butting system with nine mirrors was investigated. This paper introduces the structural design of a long array and the principle of optical butting. The basic idea of this system is to split the optical image into several parts so that they can be detected by different sensors; a mirror, as used in a conventional imaging system, divides the optical image into two parts. To eliminate the vignetting caused by the optical system and keep a high signal-to-noise ratio, the sensors receiving the two focal-image parts are placed with a small overlap so that they compensate each other, and a high-quality image is obtained by butting the two image parts. To ensure mirror-location accuracy, a key technique, a new alignment method for locating the conversion components was proposed, aimed mainly at enhancing the assembly accuracy of the linear-array CCD. The principle, the methods of adjustment and testing, and the structure of the focal plane are described. The assembly of nine TDI CCDs is completed on a facility composed of a long-working-distance microscope and a precise X-Y rail, using a mechanical adjustment method. Compared with a conventional system, this method satisfies the linearity accuracy and an overlapping-pixel tolerance of 0.2 detector pixel sizes.

  9. Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments

    NASA Astrophysics Data System (ADS)

    Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete

    2002-04-01

    New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications where millions of drug combinations are analyzed in secondary screenings or high rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 X 1024, 18 um pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high frame rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.

  10. Real-time synchronous CCD camera observation and reflectance measurement of evaporation-induced polystyrene colloidal self-assembly.

    PubMed

    Lin, Dongfeng; Wang, Jinze; Yang, Lei; Luo, Yanhong; Li, Dongmei; Meng, Qingbo

    2014-04-15

    A new monitoring technique, which combines real-time in-situ CCD camera observation and reflectance spectra measurement, has been developed to study the growing and drying processes of evaporation-induced self-assembly (EISA). Evolutions of the reflectance spectrum and CCD camera images both reveal that the entire process of polystyrene (PS) EISA contains three stages: crack-initiation stage (T1), crack-propagation stage (T2), and crack-remained stage (T3). A new phenomenon, the red-shift of stop-band, is observed when the crack begins to propagate in the monitored window of CCD camera. Deformation of colloidal spheres, which mainly results in the increase of volume fraction of spheres, is applied to explain the phenomenon. Moreover, the modified scalar wave approximation (SWA) is utilized to analyze the reflectance spectra, and the fitting results are in good agreement with the evolution of CCD camera images. This new monitoring technique and the analysis method provide a good way to get insight into the growing and drying processes of PS colloidal self-assembly, especially the crack propagation.

  11. Can video cameras replace visual estrus detection in dairy cows?

    PubMed

    Bruyère, P; Hétreau, T; Ponsart, C; Gatien, J; Buff, S; Disenhaus, C; Giroud, O; Guérin, P

    2012-02-01

    A 6-mo experiment was conducted in a dairy herd to evaluate a video system for estrus detection. From October 2007 to April 2008, 35 dairy cows of three breeds that ranged in age from 2 to 6 yr were included in the study. Four daylight cameras were set up in two free stalls with straw litter and connected to a computer equipped with specific software to detect movement. This system allowed the continuous observation of the cows as well as video storage. An observation method related to the functionality of the video management software ("Camera-Icons" method) was used to detect the standing mount position and was compared to direct visual observation (direct visual method). Both methods were based on the visualization of standing mount position. A group of profile photos consisting of the full face, left side, right side, and back of each cow was used to identify animals on the videos. Milk progesterone profiles allowed the determination of ovulatory periods (reference method), and a total of 84 ovulatory periods were used. Data obtained by direct visual estrus detection were used as a control. Excluding the first postpartum ovulatory periods, the "Camera-Icons" method allowed the detection of 80% of the ovulatory periods versus 68.6% with the direct visual method (control) (P = 0.07). Consequently, the "Camera-Icons" method gave at least similar results to the direct visual method. When combining the two methods, the detection rate was 88.6%, which was significantly higher than the detection rate allowed by the direct visual method (P < 0.0005). Eight to 32 min (mean 20 min) were used daily to analyze stored images. When compared with the 40 min (four periods of 10 min) dedicated to the direct visual method, we conclude that the video survey system not only saved time but also can replace direct visual estrus detection.

  12. A 20 sq CM CCD mosaic camera for a dark matter search. Part 1: Mechanics, optics and cryogeny

    NASA Astrophysics Data System (ADS)

    Arnaud, M.; Aubourg, E.; Bareyre, P.; Brehin, S.; Caridroit, R.; de Kat, J.; Dispau, G.; Djidi, K.; Gros, M.; Lachieze-Rey, M.

    1994-09-01

    A 20 sq cm charge coupled device (CCD) mosaic camera has been especially built to search for dark galactic halo objects by the gravitational microlensing effect. The sensitive area is made of 16 edge-buttable CCDs developed by Thomson-CTS, with 23x23 sq micrometer pixels. The 35 kg camera housing and mechanical equipment are presented. The associated electronics and data acquisition system are described in a separate paper. The camera resides at the focal plane of a 40 cm, f/10, Ferson reflector. The instrument has been in operation since December 1991 at the European Southern Observatory's (ESO) La Silla Observatory.

  13. Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera

    SciTech Connect

    Hao, Jiangang; Estrada, Juan; Cease, Herman; Diehl, H.Thomas; Flaugher, Brenna L.; Kubik, Donna; Kuk, Keivin; Kuropatkine, Nickolai; Lin, Huan; Montes, Jorge; Scarpine, Vic; /Fermilab

    2010-06-08

    Large mosaic multi-CCD cameras are the key instruments for modern digital sky surveys. DECam is an extremely red-sensitive 520-megapixel camera designed for the upcoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron-thick fully depleted CCDs, with a focal plane 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting in 2011. To achieve the science goal of constraining the Dark Energy evolution, stringent requirements are laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to achieve the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high-vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted to a variation of the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the changes in spacing, we can measure the flatness to high precision. In method 1, the regular dots are positioned to sub-micron precision and cover the whole focal plane. In method 2, no high precision for the grid is required; instead, a precise XY stage moves the pattern across the whole focal plane and we compare the variations of the spacing when it is imaged by different CCDs. Simulation and real measurements show that the two methods work very well for our purpose and are in good agreement with direct optical measurements.
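
A minimal sketch of the dot-grid idea, assuming a simple small-offset magnification model (the lever-arm value and spacings are invented; the real system extracts spacings from images and works through flat dewar windows):

```python
import numpy as np

def height_offsets(spacings, ref_spacing, lever_arm_mm):
    """Convert measured dot-grid spacings into CCD height offsets (mm).

    Small-offset model (an assumption): a CCD displaced by dz along the
    optical axis changes the local magnification, so the imaged grid
    spacing scales as s = s0 * (1 + dz / L) for an effective projection
    distance L; inverting gives dz = L * (s / s0 - 1)."""
    s = np.asarray(spacings, float)
    return lever_arm_mm * (s / ref_spacing - 1.0)

# Toy spacings (pixels) at three focal-plane positions, L = 1000 mm
dz_um = height_offsets([100.0, 100.002, 99.998], 100.0, 1000.0) * 1000.0
within_envelope = (dz_um.max() - dz_um.min()) <= 60.0   # DECam's 60-micron spec
```

Under this model, spacing changes of a few parts in 10^5 map to tens of microns of height, which is why the grid (method 1) or the XY stage (method 2) must be so precise.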

  14. OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis

    NASA Technical Reports Server (NTRS)

    Collins, Nick

    2009-01-01

    The HST/Wide Field Camera (WFC) 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. At the nominal operating temperature of -83C, the QEH feature contrast was typically 0.1-0.2% or less. The behavior was replicated using flight spare detectors. A visible light flat-field (540nm) with a several times full-well signal level can pin the detectors at both optical (600nm) and near-UV (230nm) wavelengths, suppressing the QEH behavior. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. The HST/Wide Field Camera 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence in a small percentage of flat-field images of a bowtie-shaped contrast that spanned the width of each chip. At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly-related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at a constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high signal flat-field or a series of lower signal flats; a visible light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. We are characterizing the timescale for the detectors to become unpinned and developing a

  15. A fast auto-focusing technique for the long focal lens TDI CCD camera in remote sensing applications

    NASA Astrophysics Data System (ADS)

    Wang, Dejiang; Ding, Xu; Zhang, Tao; Kuang, Haipeng

    2013-02-01

    The key issue in automatic focus adjustment for a long focal lens TDI CCD camera in remote sensing applications is to reach the optimum focus position as fast as possible. Existing auto-focusing techniques consume too much time because the mechanical focusing parts of the camera move in steps during the search procedure. In this paper, we demonstrate a fast auto-focusing technique, which employs the internal optical elements and the TDI CCD itself to directly sense deviations in the back focal distance of the lens and restore the imaging system to the best available focus. It is particularly advantageous for determining the focus, because the relative motion between the TDI CCD and the focusing element can proceed without interruption. Moreover, theoretical formulas describing the effect of image motion on the focusing precision and the effective focusing range are also developed. Finally, an experimental setup is constructed to evaluate the performance of the proposed technique. The results show a ±5 μm auto-focusing precision over a ±500 μm defocus range, with the search procedure completed within 0.125 s, a remarkable improvement in real-time imaging capability for high-resolution TDI CCD cameras in remote sensing applications.

  16. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40° field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank, opposite the orbiter side, were two blade S-band antennas, each about 2 1/2 inches long, that transmitted a 10-watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  17. Characterization and field use of a CCD camera system for retrieval of bidirectional reflectance distribution function

    NASA Astrophysics Data System (ADS)

    Nandy, P.; Thome, K.; Biggar, S.

    2001-06-01

    Vicarious calibration and field validation is a critical aspect of NASA's Earth Observing System program. As part of calibration and validation research related to this project, the Remote Sensing Group (RSG) of the Optical Science Center at the University of Arizona has developed an imaging radiometer for ground-based measurements of directional reflectance. The system relies on a commercially available 1024×1024 pixel, silicon CCD array. Angular measurements are accomplished using a fish-eye lens that has a full 180° field of view with each pixel on the CCD array having a nominal 0.2° field of view. Spectral selection is through four interference filters centered at 470, 575, 660, and 835 nm. The system is designed such that the entire 180° field is collected at one time with a complete multispectral data set collected in under 2 min. The results of laboratory experiments have been used to determine the gain and offset of each detector element as well as the effects of the lens on the system response. Measurements of a stable source using multiple integration times and at multiple distances for a set integration time indicate the system is linear to better than 0.5% over the upper 88% of the dynamic range of the system. The point spread function (PSF) of the lens system was measured for several field angles, and the signal level was found to fall to less than 1% of the peak signal within 1.5° for the on-axis case. The effect of this PSF on the retrieval of modeled BRDFs is shown to be less than 0.2% out to view angles of 70°. The degree of polarization of the system is shown to be negligible for on-axis imaging but to have up to a 20% effect at a field angle of 70°. The effect of the system polarization on the retrieval of modeled BRDFs is shown to be up to 3% for field angles of 70° off nadir and with a solar zenith angle of 70°. Field measurements are made by mounting the camera to a boom mounted to a large tripod that is aligned toward south. This

  18. Development of proton CT imaging system using plastic scintillator and CCD camera

    NASA Astrophysics Data System (ADS)

    Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru

    2016-06-01

    A proton computed tomography (pCT) imaging system was constructed to evaluate the error of the x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is captured by photographing it with a CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table prepared beforehand, and a pCT image is reconstructed. An experiment to demonstrate the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with the spatial resolution of the pCT images at the same level as that of the xCT images. pCT images of various substances were reconstructed to evaluate the pixel values of the pCT images. The image quality was investigated with regard to deterioration caused by multiple Coulomb scattering.
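    The light-to-range conversion described above amounts to a monotone lookup table with interpolation. The sketch below uses a made-up table for illustration; the real table is measured beforehand for the specific scintillator and beam.

```python
import bisect

# Hypothetical light-to-range calibration table (intensity -> range),
# standing in for the table the authors measure beforehand.
TABLE_I = [100.0, 200.0, 300.0, 400.0]   # integrated light intensity (a.u.)
TABLE_R = [5.0, 12.0, 21.0, 33.0]        # proton range in water (mm)

def intensity_to_range(i):
    """Piecewise-linear interpolation into the calibration table."""
    i = min(max(i, TABLE_I[0]), TABLE_I[-1])        # clamp to table bounds
    k = max(1, bisect.bisect_left(TABLE_I, i))
    f = (i - TABLE_I[k - 1]) / (TABLE_I[k] - TABLE_I[k - 1])
    return TABLE_R[k - 1] + f * (TABLE_R[k] - TABLE_R[k - 1])
```

    Applying this conversion pixel-by-pixel to the integrated light image yields the range data from which the pCT slices are reconstructed.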

  19. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
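    Once line correspondences fix the calibration, converting frame positions to real-world court coordinates is a projective mapping. A minimal sketch, with a made-up 3x3 homography matrix standing in for the fitted camera parameters:

```python
# Hypothetical homography (frame pixels -> court meters); a real system
# estimates H from the court-line correspondences described above.
H = [[0.02, 0.0, -5.0],
     [0.0, 0.03, -4.0],
     [0.0, 0.0, 1.0]]

def frame_to_court(x, y, H):
    """Apply a projective transform to a frame coordinate (x, y)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)   # divide out the homogeneous coordinate
```

    The inverse mapping (court to frame) uses the matrix inverse of H, which is why a single calibration serves both directions.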

  20. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in the video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
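    A decision-tree classifier over such features reduces, at its simplest, to a few threshold tests. The rules and thresholds below are entirely hypothetical, illustrating only the shape of such a classifier, not the tree learned in the paper:

```python
# Toy stand-in for a learned decision tree over compressed-domain features:
# replay count, scene-text fraction, and a camera-motion statistic.
# All thresholds are made up for illustration.
def is_sports_clip(replay_count, text_fraction, pan_magnitude):
    if replay_count >= 2:                 # action replays strongly suggest sports
        return True
    if text_fraction > 0.15 and pan_magnitude > 4.0:
        return True                       # heavy overlays plus fast panning
    return False
```

    In practice the tree and its thresholds are induced from labeled training clips rather than hand-written.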

  1. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and color filter array (CFA) demosaicing. The proposed architecture comprises the charge-coupled device (CCD) drive circuits, image processing circuits, and power supply circuits. The CCD drive circuits translate the TTL (transistor-transistor logic) timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor outputs analog image signals. The image processing circuits convert the analog signals to digital signals for subsequent processing; TTL timing generation, auto-exposure control, CFA demosaicing, and gamma correction are all accomplished in this module. The power supply circuits power the whole system, which is very important for image quality: power noise degrades the image directly, and we reduce it effectively in hardware. The CCD in this system is the KAI-0340, which can output 210 full-resolution frames per second, and our camera works well in this mode. Because traditional auto-exposure control algorithms converge to a proper exposure level too slowly, a fast auto-exposure control method is necessary, and we present a new auto-exposure algorithm suited to high-speed cameras. Color demosaicing is critical for digital cameras because it converts the Bayer mosaic output of the sensor to a full-color image, which determines the output image quality of the camera. Complex algorithms achieve high quality but cannot be implemented in hardware, so we present a low-complexity demosaicing method that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of the paper.
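    A fast auto-exposure step can be sketched as a proportional controller that rescales the exposure time by the ratio of a target brightness to the measured mean, with the step clamped so a single update cannot overshoot wildly. The target and clamp values below are assumptions for illustration, not the paper's algorithm:

```python
def update_exposure(exposure_us, mean_level, target=128.0, max_step=4.0):
    """One proportional auto-exposure update.

    exposure_us : current exposure time in microseconds
    mean_level  : measured mean pixel value of the last frame (0..255)
    Scales exposure by target/mean, clamped to [1/max_step, max_step].
    """
    ratio = target / max(mean_level, 1.0)          # avoid division by zero
    ratio = min(max(ratio, 1.0 / max_step), max_step)
    return exposure_us * ratio
```

    Because the update is multiplicative rather than a fixed step, a badly exposed frame converges in a few iterations, which is what makes the scheme attractive at 210 frames per second.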

  2. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.
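    At its core, absolute photometric calibration of a video frame reduces to fitting a zero point against reference stars. A minimal single-band sketch with made-up fluxes and catalog magnitudes, ignoring the extinction and color terms a real pipeline must model:

```python
import math

def zero_point(fluxes, catalog_mags):
    """Least-squares zero point: mean of (m_cat + 2.5*log10(flux))
    over the reference stars in the frame."""
    resid = [m + 2.5 * math.log10(f) for f, m in zip(fluxes, catalog_mags)]
    return sum(resid) / len(resid)

def calibrate(flux, zp):
    """Instrumental flux -> calibrated magnitude."""
    return zp - 2.5 * math.log10(flux)

# Hypothetical reference stars: fluxes in counts, magnitudes from a catalog.
zp = zero_point([100.0, 1000.0], [15.0, 12.5])
```

    Tracking how this zero point varies across the field and between frames is exactly where the systematic 0.5 mag uncertainties mentioned above tend to hide.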

  3. Analysis and simulation of the phenomenon of secondary spots of the TDI CCD camera irradiated by CW laser.

    PubMed

    Sun, Ke; Huang, Liangjin; Cheng, Xiang'ai; Jiang, Houman

    2011-11-21

    The phenomenon of secondary spots is observed in an experiment in which a TDI CCD camera is irradiated by a CW He-Ne laser. It is considered to be related to scattering from the slit in front of the sensor and reflection from the window of the TDI CCD chip. Additional experiments and ray-tracing simulations are performed to study the mechanism of the secondary spots. The experimental and simulated results demonstrate that scattering from the side walls of the slit is the main source of the secondary spots. Furthermore, the rotary-scanning mode of operation provides the opportunity for the incident beam to scatter off the side wall of the slit. This paper provides a preliminary hint for optimizing the slit design of the camera to reduce the effect of the secondary spots on image quality.

  4. Observations and analysis of FTU plasmas by video cameras

    NASA Astrophysics Data System (ADS)

    De Angelis, R.; Di Matteo, L.

    2010-11-01

    The interaction of the FTU plasma with the vessel walls and with the limiters is responsible for the release of hydrogen and impurities through various physical mechanisms (physical and chemical sputtering, desorption, etc.). In the cold plasma periphery, these particles are weakly ionised and emit mainly in the visible spectral range. A good description of plasma periphery can then be obtained by use of video cameras. In FTU small size video cameras, placed close to the plasma edge, give wide-angle images of the plasma at a standard rate of 25 frames/s. Images are stored digitally, allowing their retrieval and analysis. This paper reports some of the most interesting features of the discharges evidenced by the images. As a first example, the accumulation of cold neutral gas in the plasma periphery above a density threshold (a phenomenon known as Marfe) can be seen on the video images as a toroidally symmetric band oscillating poloidally; on the multi-chord spectroscopy or bolometer channels, this appears only as a sudden rise of the signals whose overall behaviour could not be clearly interpreted. A second example is the identification of runaway discharges by the signature of the fast electrons emitting synchrotron radiation in their motion direction; this appears as a bean shaped bright spot on one toroidal side, which reverts according to plasma current direction. A relevant side effect of plasma discharges, as potentially dangerous, is the formation of dust as a consequence of some strong plasma-wall interaction event; video images allow monitoring and possibly estimating numerically the amount of dust, which can be produced in these events. Specialised software can automatically search experimental database identifying relevant events, partly overcoming the difficulties associated with the very large amount of data produced by video techniques.

  5. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    PubMed

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples[1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS)[2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value[3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is taken to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K. K. C9721S) for fast image acquisition[4]. It is an analog camera, but the camera image is captured by a PC with an effective resolution of 1280×1023 pixels. This resolution is lower than the 4096×4096 pixels of the SS-CCD camera; however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image can be determined with a sufficiently low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the differently defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correcting an image position took one second, for a total correction time of seven seconds, an order of magnitude shorter than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image.
We can obtain a tilt

  6. Classification of volcanic ash particles from Sakurajima volcano using CCD camera image and cluster analysis

    NASA Astrophysics Data System (ADS)

    Miwa, T.; Shimano, T.; Nishimura, T.

    2012-12-01

    Quantitative and speedy characterization of volcanic ash particles is needed to conduct petrologic monitoring of an ongoing eruption. We develop a simple new system that uses CCD camera images to quantitatively characterize ash properties, and apply it to volcanic ash collected at Sakurajima. Our method characterizes volcanic ash particles by 1) apparent luminance through RGB filters and 2) a quasi-fractal dimension of the particle shape. Using a monochromatic CCD camera (Starshoot by Orion Co. LTD.) attached to a stereoscopic microscope, we capture digital images of ash particles set on a glass plate, under which white paper or a polarizing plate is placed. Images of 1390 x 1080 pixels are taken through three color filters (red, green, and blue) under incident light and under light transmitted through the polarizing plate. The brightness of the light sources is kept constant, and luminance is calibrated using white and black papers. About fifteen ash particles are set on the plate at a time, and their images are saved in bitmap format. We first extract the outlines of the particles from the image taken under light transmitted through the polarizing plate. The luminance for each color is then represented by 256 tones at each pixel within a particle, and the average and its standard deviation are calculated for each ash particle. We also measure the quasi-fractal dimension (qfd) of the ash particles: we count the number of boxes of 1×1 and of 128×128 pixels that cover the area of the ash particle, and estimate the qfd from the ratio of the former number to the latter. These parameters are calculated using the R software. We characterize volcanic ash from the Showa crater of Sakurajima collected on two days (Feb 09, 2009, and Jan 13, 2010), and apply cluster analyses.
Dendrograms are formed from the qfd and following four parameters calculated from the luminance: Rf=R/(R+G+B), G=G/(R+G+B), B=B/(R+G+B), and
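    The box-counting estimate of the quasi-fractal dimension described above can be sketched directly; a filled square comes out with dimension 2, a useful sanity check. The code is an illustrative reimplementation, not the authors' R script:

```python
import math

def count_boxes(mask, box):
    """Number of box x box cells containing at least one particle pixel.
    `mask` is a list of rows of 0/1 values."""
    hits = set()
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                hits.add((y // box, x // box))
    return len(hits)

def quasi_fractal_dimension(mask, small=1, large=128):
    """d such that N(small)/N(large) = (large/small)**d."""
    n_s = count_boxes(mask, small)
    n_l = count_boxes(mask, large)
    return math.log(n_s / n_l) / math.log(large / small)

# Sanity check: a completely filled 128x128 particle has dimension 2.
square = [[1] * 128 for _ in range(128)]
```

    Irregular, crenulated ash outlines fill space less completely, so their qfd falls below 2, which is what makes the ratio useful for clustering particle types.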

  7. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.
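    To make the idea of camera-path smoothing concrete, the stand-in below applies a simple moving average to a 1-D path. This is only an illustrative substitute: the paper itself solves a constrained L1 optimization, which favors piecewise-constant and piecewise-linear paths rather than this blur-like smoothing.

```python
def smooth_path(xs, radius=2):
    """Moving-average smoothing of a 1-D camera path (e.g. horizontal
    translation per frame). A minimal stand-in for path optimization."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - radius), min(len(xs), i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out
```

    A real stabilizer then re-renders each frame by warping it from the original path onto the smoothed one, cropping to hide the revealed borders.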

  8. A new paradigm for video cameras: optical sensors

    NASA Astrophysics Data System (ADS)

    Grottle, Kevin; Nathan, Anoo; Smith, Catherine

    2007-04-01

    This paper presents a new paradigm for the utilization of video surveillance cameras as optical sensors to augment and significantly improve the reliability and responsiveness of chemical monitoring systems. Incorporated into a hierarchical tiered sensing architecture, cameras serve as 'Tier 1' or 'trigger' sensors monitoring for visible indications after a release of warfare or industrial toxic chemical agents. No single sensor today yet detects the full range of these agents, but the result of exposure is harmful and yields visible 'duress' behaviors. Duress behaviors range from simple to complex types of observable signatures. By incorporating optical sensors in a tiered sensing architecture, the resulting alarm signals based on these behavioral signatures increases the range of detectable toxic chemical agent releases and allows timely confirmation of an agent release. Given the rapid onset of duress type symptoms, an optical sensor can detect the presence of a release almost immediately. This provides cues for a monitoring system to send air samples to a higher-tiered chemical sensor, quickly launch protective mitigation steps, and notify an operator to inspect the area using the camera's video signal well before the chemical agent can disperse widely throughout a building.

  9. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on a four-dimensional light field captured by special precisely manufactured devices or on a sequence of images captured by a single camera, existing systems are either too expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixel) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.

  10. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these
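    The centroiding idea used to recover sub-pixel positions is simply a charge-weighted mean over the pixels of a split event. A minimal sketch (the event dictionary and its values are hypothetical):

```python
def event_centroid(pixels):
    """Charge-weighted centroid of a split X-ray event.

    `pixels` maps (col, row) -> signal in electrons for the pixels the
    diffused charge cloud landed in. Returns sub-pixel (x, y)."""
    total = sum(pixels.values())
    x = sum(c * q for (c, _), q in pixels.items()) / total
    y = sum(r * q for (_, r), q in pixels.items()) / total
    return (x, y)

# Hypothetical event split evenly between two horizontally adjacent pixels.
event = {(10, 5): 100.0, (11, 5): 100.0}
```

    More refined estimators fit the expected diffusion profile rather than taking a plain weighted mean, but even this simple centroid already locates events to a fraction of the 16 μm pixel pitch when the readout noise is low enough, which is why the EM-CCD's noise suppression helps.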

  11. CCD Video Observation of Microgravity Crystallization of Lysozyme and Correlation with Accelerometer Data

    NASA Technical Reports Server (NTRS)

    Snell, E. H.; Boggon, T. J.; Helliwell, J. R.; Moskowitz, M. E.; Nadarajah, A.

    1997-01-01

    Lysozyme has been crystallized using the ESA Advanced Protein Crystallization Facility onboard the NASA Space Shuttle Orbiter during the IML-2 mission. CCD video monitoring was used to follow the crystallization process and evaluate the growth rate. During the mission some tetragonal crystals were observed moving over distances of up to 200 micrometers. This was correlated with microgravity disturbances caused by firings of vernier jets on the Orbiter. Growth-rate measurement of a stationary crystal (which had nucleated on the growth reactor wall) showed spurts and lulls correlated with an onboard activity: astronaut exercise. The stepped growth rates may be responsible for the residual mosaic block structure seen in crystal mosaicity and topography measurements.

  12. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  13. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  14. Scientists Behind the Camera - Increasing Video Documentation in the Field

    NASA Astrophysics Data System (ADS)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  15. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    SciTech Connect

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor with a large-format array (2k x 2k), 24 μm square pixels, and a 15 μm thickness. A multi-anode Manson X-ray source, operating up to 10kV and 10W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE≈10. The X-ray beam intensity was measured using an absolute photodiode with an accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam illuminates the full CCD and is flat to within ±1% from maximum to minimum. The spectral efficiency was measured in 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy-dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible-light CCD imager.
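    The pixel-sensitivity variation described above is the kind of structure a flat-field map exposes. As a generic sketch (not NIF calibration code), a relative sensitivity map can be computed by normalizing a uniformly illuminated frame by its mean:

```python
import numpy as np

def sensitivity_map(flat_frame):
    """Per-pixel relative sensitivity from a flat-field exposure:
    each pixel's response divided by the frame mean. Values far from
    1.0 flag sensitivity variation, debris, or surface defects."""
    flat = np.asarray(flat_frame, dtype=float)
    return flat / flat.mean()
```

    With a beam flat to within ±1%, departures of the map from 1.0 beyond that level can be attributed to the sensor rather than the illumination.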

  16. SU-F-BRA-16: Development of a Radiation Monitoring Device Using a Low-Cost CCD Camera Following Radionuclide Therapy

    SciTech Connect

    Taneja, S; Fru, L Che; Desai, V; Lentz, J; Lin, C; Scarpelli, M; Simiele, E; Trestrail, A; Bednarz, B

    2015-06-15

    Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has recently updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate the exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-second videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single-threshold (ST) and colony-counting (CC) methods were implemented in MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
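    The single-threshold (ST) counting step can be sketched as below. The threshold value and frame shape are illustrative assumptions; the 19.5 events/mR slope is the ST calibration reported above.

```python
import numpy as np

def count_events_st(frames, threshold=50):
    """Single-threshold (ST) event counting: every pixel brighter than
    `threshold` in any frame of the tape-covered video is counted as
    one radiation interaction. `frames` has shape (n, height, width)."""
    return int(np.count_nonzero(frames > threshold))

def exposure_rate_mr_per_hr(events, duration_s, events_per_mr=19.5):
    """Invert the linear ST calibration (events per mR) to estimate
    the exposure rate in mR/hr from a video of length `duration_s`."""
    exposure_mr = events / events_per_mr
    return exposure_mr / (duration_s / 3600.0)
```

    For example, 19.5 events accumulated over one hour of video would correspond to an estimated 1 mR/hr under this calibration.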

  17. Design and Development of Minilink Video: A Fiber-Optic Coupled Video System

    DTIC Science & Technology

    1992-02-01

    Figure 2. Video camera physical layout. As shown in Figure 1, all of the camera's electronics except for the CCD camera... F/O leadout cable (side view); F/O cable interface (bulkhead feedthroughs). Figure 3. Revised video camera design. ...techniques and

  18. Optical measurement of the pointing stability of the SOFIA Telescope using a fast EM-CCD camera

    NASA Astrophysics Data System (ADS)

    Pfüller, Enrico; Wolf, Jürgen; Röser, Hans-Peter

    2010-07-01

    The goal of the Stratospheric Observatory for Infrared Astronomy (SOFIA) is to point its airborne telescope at astronomical targets with a stability of 0.2 arcseconds (rms). However, the pointing stability will be affected in flight by aircraft vibrations and movements and by constantly changing aerodynamic conditions within the open telescope compartment. Model calculations indicate that the deviations from targets may initially be on the order of several arcseconds. The plan is to carefully analyse and characterize all disturbances and then gradually fine-tune the telescope's attitude control system to improve the pointing stability. To optically measure how star images change their position in the focal plane, an Andor DU-888 electron-multiplying (EM) CCD camera will be mounted on the telescope in place of its standard tracking camera. The new camera, dubbed the Fast Diagnostic Camera (FDC), has been extensively tested and characterized in the laboratory and on ground-based telescopes. In ground tests on the SOFIA telescope system it proved its capabilities by sampling star images at frame rates up to 400 frames per second. From these data the star's location (centroid) in the focal plane can be calculated every 1/400th of a second and, by means of a Fourier transformation, the power spectrum of the star's movement can be derived for frequencies up to 200 Hz. Eigenfrequencies and the overall shape of the measured spectrum confirm the previous model calculations. With known disturbances introduced to the telescope's fine drive system, the FDC data can be used to determine the system's transfer function. These data, when measured in flight, will be critical for the refinement of the attitude control system. Another subsystem of the telescope that was characterized using FDC data was the chopping secondary mirror. By monitoring a star centroid at high speed while chopping, the chopping mechanism and its properties could be analyzed. This paper will describe the EM-CCD camera and its
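    The centroid power-spectrum step can be sketched as follows; the synthetic input in the test is an assumption, not FDC data.

```python
import numpy as np

def centroid_power_spectrum(centroids, fs=400.0):
    """One-sided power spectrum of a star-centroid time series sampled
    at `fs` frames per second. Frequencies extend up to the Nyquist
    limit fs/2 (200 Hz at 400 fps, as in the FDC measurements)."""
    x = np.asarray(centroids, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    power = np.abs(np.fft.rfft(x)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, power
```

    Peaks in the returned spectrum correspond to eigenfrequencies of the telescope assembly and to disturbances injected into the fine drive system.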

  19. The cloud cover fraction obtained from a ground CCD camera and its effect on a radiative transfer model

    NASA Astrophysics Data System (ADS)

    Souza, M. P.; Pereira, E. B.; Martins, F. R.; Chagas, R. C.; Freitas, W. S., Jr.

    2003-04-01

    Clouds are the major factor that governs the solar irradiance at Earth's surface. They interact with solar radiation in the shortwave spectrum and with terrestrial radiation emitted by Earth's surface in the longwave range. Information about cloud cover is very important input data for radiative transfer models, and great effort is being made to improve methods of obtaining it. This paper reports the effects on a radiative transfer model of using the simple cloud fraction obtained by a ground-based CCD camera instead of the satellite-derived cloud index. The BRASIL-SR model is a radiative transfer model that calculates surface solar irradiance using a normalized cloud index determined by statistical analyses of satellite images and climatological values of temperature and albedo. Cloud fraction was obtained from digital images collected by a ground-based CCD (Charge Coupled Device) camera in the visible range (0.4 μm - 0.7 μm) as RGB (Red - Green - Blue) compositions. The method initially transforms the image attributes from the RGB space to the IHS (Intensity - Hue - Saturation) space. The algorithm defines threshold values for the saturation component of the IHS system to classify a pixel as cloudy or clear sky. Clear skies are identified by high values of saturation in the visible range, while cloudy conditions present a mixture of several wavelengths and consequently lower saturation values. Results from the CCD camera and from the satellite were compared with the Kt and Kd from pyranometer data obtained from a local BSRN radiation station at Florianópolis (27º 28'S, 48º 29'W) and show that cloud fraction alone is poor information about the state of the sky, since it does not carry any information on the cloud optical depth that is needed in most radiative transfer models such as the one used in this paper (the BRASIL-SR).
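    The RGB-to-IHS saturation thresholding can be sketched as below; the threshold value of 0.25 is an illustrative assumption, since the abstract does not state the value used.

```python
import numpy as np

def saturation(rgb):
    """IHS saturation of RGB pixels (values in 0..1):
    S = 1 - 3*min(R,G,B)/(R+G+B). Clouds mix many wavelengths and so
    appear gray/white with low saturation; clear blue sky is highly
    saturated. Works on a single pixel or an (H, W, 3) image."""
    rgb = np.asarray(rgb, dtype=float)
    return 1.0 - 3.0 * rgb.min(axis=-1) / (rgb.sum(axis=-1) + 1e-12)

def cloud_fraction(image, s_threshold=0.25):
    """Classify each pixel as cloudy (saturation below threshold) or
    clear sky, and return the cloudy fraction of the image."""
    return (saturation(image) < s_threshold).mean()
```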

  20. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy on the order of millimeters over distances on the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
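    A minimal sketch of the pixel-to-coordinate step, assuming a simple pinhole model with known focal lengths and a target at known depth (the actual system computes full three-dimensional coordinates from a calibrated camera pose):

```python
def pixel_to_workspace(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) to workspace coordinates (x, y, z)
    under a pinhole camera model, given the target's known depth z
    along the optical axis, focal lengths (fx, fy) in pixels, and the
    principal point (cx, cy)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

    The millimeter-over-meter accuracy quoted above is consistent with this geometry: a one-pixel centroid error at depth z maps to roughly z/fx of lateral error.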

  1. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

    Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs).
Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If

  2. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children, have been viewed as particularly suited to this aim because cameras have been considered easy and…

  3. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most of the dominant modes of a vibration signal, but may also capture non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are promising tools for collecting the low-frequency vibration signal of a system. PMID:23202026

  4. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most of the dominant modes of a vibration signal, but may also capture non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are promising tools for collecting the low-frequency vibration signal of a system.
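    The non-physical (false) modes described in the two records above arise from aliasing when a vibration frequency exceeds half the frame rate. A minimal sketch of predicting where such a mode appears (the paper's simple model may differ in detail):

```python
def aliased_frequency(f_true, frame_rate):
    """Frequency at which a tone of f_true Hz appears when sampled at
    `frame_rate` frames per second: fold f_true into the Nyquist band
    [0, frame_rate/2]. If the result differs from f_true, the observed
    peak is a non-physical mode and should be excluded."""
    f = f_true % frame_rate
    return min(f, frame_rate - f)
```

    For example, a 70 Hz vibration recorded at 120 fps shows up as a spurious 50 Hz peak, which this prediction lets the analysis discard.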

  5. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with a likewise variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 in by 5 in view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  6. Implementation of a parallel-beam optical-CT apparatus for three-dimensional radiation dosimetry using a high-resolution CCD camera

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Tzeng; Chen, Chin-Hsing; Hung, Chao-Nan; Tuan, Chiu-Ching; Chang, Yuan-Jen

    2015-06-01

    In this study, a charge-coupled device (CCD) camera with 2-megapixel (1920×1080-pixel) and 12-bit resolution was developed for optical computed tomography (optical CT). The signal-to-noise ratio (SNR) of our system was 30.12 dB, better than that of commercially available CCD cameras (25.31 dB). The 50% modulation transfer function (MTF50) of our 1920×1080-pixel camera gave a line width per picture height (LW/PH) of 745, which is 73% of the diffraction-limited resolution. Compared with a commercially available 1-megapixel CCD camera (1296×966-pixel) with LW/PH=358 and 46.6% of the diffraction-limited resolution, our camera system provided higher spatial resolution and better image quality. The NIPAM gel dosimeter was used to evaluate the optical CT with the 2-megapixel CCD. A clinical five-field irradiation treatment plan was generated using the Eclipse planning system (Varian Corp., Palo Alto, CA, USA). The gel phantom was irradiated using a 6-MV Varian Clinac IX linear accelerator (Varian). The measured NIPAM gel dose distributions and the calculated dose distributions generated by the treatment planning software (TPS) were compared using the 3% dose-difference and 3 mm distance-to-agreement criteria. The gamma pass rate was as high as 98.2% when the 2-megapixel CCD camera was used in optical CT, but only 96.0% when the commercially available 1-megapixel CCD camera was used.
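    The 3%/3 mm gamma comparison used above can be sketched in one dimension as follows; this is a simplified global-gamma illustration, not the evaluation software used in the study.

```python
import numpy as np

def gamma_pass_rate(measured, calculated, spacing_mm=1.0,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """1-D gamma analysis: for each measured point, take the minimum
    generalized distance to any calculated point, combining dose
    difference (relative to the maximum dose, 3% criterion) and
    spatial distance (3 mm criterion), and report the fraction of
    points with gamma <= 1."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    d_max = calculated.max()
    positions = np.arange(len(calculated)) * spacing_mm
    passed = 0
    for i, dose_m in enumerate(measured):
        dose_term = ((dose_m - calculated) / (dose_tol * d_max)) ** 2
        dist_term = ((i * spacing_mm - positions) / dist_tol_mm) ** 2
        if np.sqrt(dose_term + dist_term).min() <= 1.0:
            passed += 1
    return passed / len(measured)
```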

  7. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  8. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    NASA Astrophysics Data System (ADS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

    2005-04-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The basic set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10⁷ cm⁻² s⁻¹, which enables the observation of sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level from the intensifier. The results obtained should be seen as a starting point toward meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operating principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

  9. Compact pnCCD-based X-ray camera with high spatial and energy resolution: a color X-ray camera.

    PubMed

    Scharf, O; Ihle, S; Ordavo, I; Arkadiev, V; Bjeoumikhov, A; Bjeoumikhova, S; Buzanich, G; Gubzhokov, R; Günther, A; Hartmann, R; Kühbacher, M; Lang, M; Langhoff, N; Liebel, A; Radtke, M; Reinholz, U; Riesemeier, H; Soltau, H; Strüder, L; Thünemann, A F; Wedell, R

    2011-04-01

    For many applications there is a requirement for nondestructive analytical investigation of the elemental distribution in a sample. With the improvement of X-ray optics and spectroscopic X-ray imagers, full-field X-ray fluorescence (FF-XRF) methods are feasible. A new device for high-resolution X-ray imaging, an energy- and spatially-resolving X-ray camera, is presented. The basic idea behind this so-called "color X-ray camera" (CXC) is to combine an energy-dispersive array detector for X-rays, in this case a pnCCD, with polycapillary optics. Imaging is achieved using multiframe recording of the energy and the point of impact of single photons. The camera was tested using a laboratory 30 μm microfocus X-ray tube and synchrotron radiation from BESSY II at the BAMline facility. These experiments demonstrate the suitability of the camera for X-ray fluorescence analysis. The camera simultaneously records 69,696 spectra with an energy resolution of 152 eV for manganese K(α) and a spatial resolution of 50 μm over an imaging area of 12.7 × 12.7 mm(2). It is sensitive to photons in the energy region between 3 and 40 keV, limited by the 50 μm beryllium window and the 450 μm sensitive thickness of the chip. Online preview of the sample is possible, as the software updates the sums of the counts for certain energy channel ranges during the measurement and displays 2-D false-color maps as well as spectra of selected regions. The complete data cube of 264 × 264 spectra is saved for further qualitative and quantitative processing.

  10. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors, guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is greater than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. This raises many questions for future research that we are now pursuing.

  11. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in a gradient from 0 N to 500 N, which simulated the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with ImageJ, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital marker, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
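    The center-of-mass step (weighted average of pixel gray values) can be sketched as:

```python
import numpy as np

def marker_centroid(gray):
    """Sub-pixel centroid of a marker: the average of pixel coordinates
    weighted by gray value. `gray` is a 2-D array covering the
    segmented marker region; returns (x, y) in pixel units."""
    gray = np.asarray(gray, dtype=float)
    total = gray.sum()
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    return (xs * gray).sum() / total, (ys * gray).sum() / total
```

    Weighting by intensity is what yields the ~0.018-pixel precision quoted above: the centroid of a smooth blob is far better localized than any single pixel.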

  12. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems. PMID:28208622

  13. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path.

    PubMed

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-02-10

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
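    The temporal total variation (TV) that the camera-path optimization minimizes can be sketched as below; the moving-average smoother is a naive stand-in for the paper's l1-optimized path, included only to show that a smoother path has lower TV.

```python
import numpy as np

def total_variation(path):
    """Temporal total variation of a 1-D camera-path signal:
    the sum of absolute frame-to-frame differences. A shaky path has
    high TV; a stabilized path has low TV."""
    return float(np.abs(np.diff(path)).sum())

def smooth_path(path, window=5):
    """Naive moving-average smoothing of an estimated camera path
    (a stand-in for l1 path optimization, which instead solves a
    convex problem penalizing the path derivatives)."""
    kernel = np.ones(window) / window
    return np.convolve(path, kernel, mode="same")
```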

  14. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    NASA Astrophysics Data System (ADS)

    Nara, Shunsuke; Takahashi, Satoru

    In this paper, we develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder and two AC servo motors. First, in order to measure the working radius, we need an algorithm for crane hook recognition. We therefore attach a cross mark to the crane hook and, instead of the crane hook itself, recognize the cross mark. Further, for the observation device, we construct a PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device, including the new mark-tracking control system.

  15. Characterization of the luminance and shape of ash particles at Sakurajima volcano, Japan, using CCD camera images

    NASA Astrophysics Data System (ADS)

    Miwa, Takahiro; Shimano, Taketo; Nishimura, Takeshi

    2015-01-01

    We develop a new method for characterizing the properties of volcanic ash at the Sakurajima volcano, Japan, based on automatic processing of CCD camera images. Volcanic ash is studied in terms of both luminance and particle shape. A monochromatic CCD camera coupled with a stereomicroscope is used to acquire digital images through three filters that pass red, green, or blue light. On single ash particles, we measure the apparent luminance, corresponding to 256 tones for each color (red, green, and blue) for each pixel occupied by ash particles in the image, and the average and standard deviation of the luminance. The outline of each ash particle is captured from a digital image taken under transmitted light through a polarizing plate. Also, we define a new quasi-fractal dimension (Dqf) to quantify the complexity of the ash particle outlines. We examine two ash samples, each including about 1000 particles, which were erupted from the Showa crater of the Sakurajima volcano, Japan, on February 09, 2009 and January 13, 2010. The apparent luminance of each ash particle shows a lognormal distribution. The average luminance of the ash particles erupted in 2009 is higher than that of those erupted in 2010, which is in good agreement with the results obtained from component analysis under a binocular microscope (i.e., the number fraction of dark juvenile particles is lower for the 2009 sample). The standard deviations of apparent luminance have two peaks in the histogram, and the quasi-fractal dimensions show different frequency distributions between the two samples. These features are not recognized in the results of conventional qualitative classification criteria or the sphericity of the particle outlines. Our method can characterize and distinguish ash samples, even for ash particles that have gradual property changes, and is complementary to component analysis. This method also enables the relatively fast and systematic analysis of ash samples that is required for

  16. Range-Gated LADAR Coherent Imaging Using Parametric Up-Conversion of IR and NIR Light for Imaging with a Visible-Range Fast-Shuttered Intensified Digital CCD Camera

    SciTech Connect

    YATES,GEORGE J.; MCDONALD,THOMAS E. JR.; BLISS,DAVID E.; CAMERON,STEWART M.; ZUTAVERN,FRED J.

    2000-12-20

    Research is presented on infrared (IR) and near infrared (NIR) sensitive sensor technologies for use in a high speed shuttered/intensified digital video camera system for range-gated imaging at ''eye-safe'' wavelengths in the region of 1.5 microns. The study is based upon nonlinear crystals used for second harmonic generation (SHG) in optical parametric oscillators (OPOs) for conversion of NIR and IR laser light to visible range light for detection with generic S-20 photocathodes. The intensifiers are ''stripline'' geometry 18-mm diameter microchannel plate intensifiers (MCPIIs), designed by Los Alamos National Laboratory and manufactured by Philips Photonics. The MCPIIs are designed for fast optical shuttering with exposures in the 100-200 ps range, and are coupled to a fast readout CCD camera. Conversion efficiency and resolution for the wavelength conversion process are reported. Experimental set-ups for the wavelength shifting and the optical configurations for producing and transporting laser reflectance images are discussed.

  17. Measuring neutron fluences and gamma/x ray fluxes with CCD cameras

    NASA Astrophysics Data System (ADS)

    Yates, G. J.; Smith, G. W.; Zagarino, P.; Thomas, M. C.

    Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (16-MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate approximately 0.5 V/rad responsivity, with greater than or equal to 1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or 'peaks' binned by area and amplitude as functions of fluence in the 10(exp 5) to 10(exp 7) n/sq cm range indicate smearing over approximately 1 to 10 percent of the CCD array, with charge per pixel ranging between noise and saturation levels.

  18. Measuring Night-Sky Brightness With a Wide-Field CCD Camera

    DTIC Science & Technology

    2007-02-01

    detector, presumably because of variations in the manufacturing process and the electronics used in cameras of different manufacturers. All...curve. An integration time of at least 2 s was used to minimize shutter effects on the flat. Even with full control over these procedures, variations... variations in brightness at all four edges of the frame. In this manner, it was determined that the flat succeeded in correcting the vignetting to within

  19. Measuring Night-Sky Brightness With a Wide-Field CCD Camera

    DTIC Science & Technology

    2007-02-13

    different linearity curve, even those that used the same detector, presumably because of variations in the manufacturing process and the electronics...Even with full control over these procedures, variations in replicate flat images were observed for the large-format camera. The physical size of its...a night-sky data set and examining the seams for variations in brightness at all four edges of the frame. In this manner, it was determined that the

  20. Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras

    NASA Technical Reports Server (NTRS)

    Joiner, Reyann; Kobayashi, Ken; Winebarger, Amy; Champey, Patrick

    2014-01-01

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument which is currently being developed by NASA's Marshall Space Flight Center (MSFC) and the National Astronomical Observatory of Japan (NAOJ). The goal of this instrument is to observe and detect the Hanle effect in the scattered Lyman-Alpha UV (121.6 nm) light emitted by the Sun's chromosphere, in order to make measurements of the magnetic field in this region. To make accurate measurements of this effect, the performance characteristics of the three on-board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include quantum efficiency, gain, dark current, noise, and linearity, each of which must meet predetermined requirements to achieve satisfactory performance for the mission. The cameras must be able to operate with a gain of no greater than 2 e(-)/DN, a noise level less than 25 e(-), a dark current level of less than 10 e(-)/pixel/s, and a residual nonlinearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.

  1. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a repeat cycle of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push broom scanning that uses a linear array silicon charge coupled device (CCD) based Focal Plane Array (FPA). An on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA over the entire mission life. Four LEDs are operated in constant current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight model visible LEDs (λp = 650 nm) for development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to meet dynamic range coverage of the 6000-pixel silicon CCD-based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results of the CCD output profile at different LED combinations in constant current mode.

  2. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  3. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  4. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
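    The geometry behind this method can be sketched as follows. A point at height h above the surface sees the Sun set slightly later than a point at ground level; to first order h ≈ Rθ²/2, where θ = ωt is the angle the Earth rotates during the measured shadow-rise time t. The latitude and solar-declination corrections a real measurement needs are ignored in this sketch.

```python
import math

OMEGA = 2 * math.pi / 86164.1      # Earth's sidereal rotation rate (rad/s)

def earth_radius(height_m, rise_time_s):
    """Estimate Earth's radius from the time a sunset shadow takes to
    climb a wall of known height (equator/equinox approximation):
    h ~= R * theta**2 / 2 with theta = OMEGA * t, so R ~= 2h / theta**2."""
    theta = OMEGA * rise_time_s
    return 2.0 * height_m / theta ** 2

# Self-consistency check: a 50 m wall on a 6371 km Earth implies a
# shadow-rise time of roughly 54 s; feeding that time back in recovers R.
R_TRUE = 6.371e6
t = math.sqrt(2 * 50.0 / R_TRUE) / OMEGA
print(round(earth_radius(50.0, t) / 1000))   # → 6371
```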

  5. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.
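    The deconvolution component of such a pipeline can be illustrated in miniature with a frequency-domain Wiener filter and a spatially uniform kernel. The authors' method handles spatially varying kernels and couples deconvolution with super-resolution; this sketch does neither, and the kernel and SNR values are invented for demonstration.

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=1e4):
    """Deblur an image by Wiener filtering in the frequency domain,
    assuming a single, spatially uniform blur kernel."""
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel transfer function
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
    return np.real(np.fft.ifft2(F))

# Simulate a horizontal motion blur (circular convolution) and undo it.
rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, (32, 32))
kernel = np.ones((1, 5)) / 5.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(kernel, s=sharp.shape)))
deblurred = wiener_deconv(blurred, kernel, snr=1e6)
```

    The 1/snr term regularizes frequencies where the kernel response is weak, which is what distinguishes Wiener filtering from naive inverse filtering.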

  6. Prompting Spontaneity by Means of the Video Camera in the Beginning Foreign Language Class.

    ERIC Educational Resources Information Center

    Pelletier, Raymond J.

    1990-01-01

    Describes four techniques for using a video camera to generate higher levels of student interest, involvement, and productivity in beginning foreign language courses. The techniques include spontaneous discussion of video images, enhancement of students' use of interrogative pronouns and phrases, grammar instruction, and student-produced skits.…

  7. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  8. Focus, edge detection, and CCD camera characterization for development of an optical overlay calibration standard

    NASA Astrophysics Data System (ADS)

    Fox, Stephen Harris

    2000-11-01

    measured with digital charge coupled device cameras viewing a stationary stage. We investigated the potential errors introduced into the overlay measurement by the use of such cameras. We present methods for mapping and correction of the errors introduced by the cameras and their associated optical systems.

  9. Fast CCD camera for x-ray photon correlation spectroscopy and time-resolved x-ray scattering and imaging

    NASA Astrophysics Data System (ADS)

    Falus, P.; Borthwick, M. A.; Mochrie, S. G. J.

    2004-11-01

    A new, fast x-ray detector system is presented for high-throughput, high-sensitivity, time-resolved, x-ray scattering and imaging experiments, most especially x-ray photon correlation spectroscopy (XPCS). After a review of the architectures of different CCD chips and a critical examination of their suitability for use in a fast x-ray detector, the new detector hardware is described. In brief, its principal component is an inexpensive, commercial camera—the SMD1M60—originally designed for optical applications, and modified for use as a direct-illumination x-ray detector. The remainder of the system consists of two Coreco Imaging PC-DIG frame grabber boards, located inside a Dell PowerEdge 6400 server. Each frame grabber sits on its own PCI bus and handles data from 2 of the CCD's 4 taps. The SMD1M60 is based on a fast, frame-transfer, 4-tap CCD chip, read out at 12-bit resolution at frame rates of up to 62 Hz for full frame readout and up to 500 Hz for one-sixteenth frame readout. Experiments to characterize the camera's suitability for XPCS and small-angle x-ray scattering (SAXS) are presented. These experiments show that single photon events are readily identified, and localized to within a pixel or so. This is a sufficiently fine spatial resolution to maintain the speckle contrast at an acceptable value for XPCS measurements. The detective quantum efficiency of the SMD1M60 is 49% for directly-detected 6.3 keV x rays. The effects of data acquisition strategies that permit near-real-time data compression are also determined and discussed. Overall, the SMD1M60 detector system represents a major improvement in the technology for time-resolved x-ray experiments that require an area detector with time resolutions in the few-milliseconds-to-few-seconds range, and it should have wide applications, extending beyond XPCS.

  10. Digital video technology and production 101: lights, camera, action.

    PubMed

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  11. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  12. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  13. Risk mitigation process for utilization of commercial off-the-shelf (COTS) parts in CCD camera for military applications

    NASA Astrophysics Data System (ADS)

    Ahmad, Anees; Batcheldor, Scott; Cannon, Steven C.; Roberts, Thomas E.

    2002-09-01

    This paper presents the lessons learned during the design and development of a high performance cooled CCD camera for military applications utilizing common commercial off-the-shelf (COTS) parts. Our experience showed that concurrent evaluation and testing of high-risk COTS parts must be performed to assess their performance over the required temperature range and against other special product requirements such as fuel vapor compatibility, EMI, and shock susceptibility. Technical, cost, and schedule risks for COTS parts must also be carefully evaluated. The customer must be involved in the selection and evaluation of such parts so that the performance limitations of the selected parts are clearly understood. It is equally important to check with vendors on the availability and obsolescence of the COTS parts being considered, since electronic components are often replaced by newer, better, and cheaper models within a couple of years. In summary, this paper addresses the major benefits and risks associated with using commercial and industrial parts in military products, and suggests a risk mitigation approach to ensure a smooth development phase and predictable performance from the end product.

  14. Transient noise characterization and filtration in CCD cameras exposed to stray radiation from a medical linear accelerator.

    PubMed

    Archambault, Louis; Briere, Tina Marie; Beddar, Sam

    2008-10-01

    Charge coupled devices (CCDs) are being increasingly used in radiation therapy for dosimetric purposes. However, CCDs are sensitive to stray radiation. This effect induces transient noise. Radiation-induced noise strongly alters the image and therefore limits its quantitative analysis. The purpose of this work is to characterize the radiation-induced noise and to develop filtration algorithms to restore image quality. Two models of CCD were used for measurements close to a medical linac. The structure of the transient noise was first characterized. Then, four methods of noise filtration were compared: median filtering of a time series of identical images, uniform median filtering of single images, an adaptive filter with switching mechanism, and a modified version of the adaptive switch filter. The intensity distribution of noisy pixels was similar in both cameras. However, the spatial distribution of the noise was different: The average noise cluster size was 1.2 +/- 0.6 and 3.2 +/- 2.7 pixels for the U2000 and the Luca, respectively. The median of a time series of images resulted in the best filtration and minimal image distortion. For applications where time series is impractical, the adaptive switch filter must be used to reduce image distortion. Our modified version of the switch filter can be used in order to handle nonisolated groups of noisy pixels.
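    The best-performing method in the study, a pixel-wise median over a time series of nominally identical frames, can be sketched in a few lines. The frame count, spike amplitude, and hit rate below are arbitrary assumptions for demonstration, not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five nominally identical frames of a static scene plus readout noise.
clean = rng.uniform(100.0, 200.0, (64, 64))
frames = clean[None] + rng.normal(0.0, 1.0, (5, 64, 64))

# Transient radiation hits: each frame gets its own random saturated pixels.
for frame in frames:
    hits = rng.integers(0, 64, size=(40, 2))
    frame[hits[:, 0], hits[:, 1]] = 4095.0

# A pixel is rarely hit in 3+ of 5 frames, so the temporal median rejects
# the spikes while preserving the static scene.
filtered = np.median(frames, axis=0)
```

    This is why the temporal median needs a static scene: any real change between frames is also voted away, which is what motivates the adaptive single-image filters for dynamic applications.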

  15. Stroboscope Based Synchronization of Full Frame CCD Sensors.

    PubMed

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-04-07

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method reaches an accuracy much better than the frame-level synchronization of traditional software methods.

  16. Investigation of temporal evolution and spatial distribution of dust creation events in DITS campaign using visible CCD cameras in Tore Supra

    NASA Astrophysics Data System (ADS)

    Hong, Suk-Ho; Grisolia, Christian; Monier-Gabet, Pascale; Tore Supra Team

    2009-06-01

    Images from wide-angle visible CCD cameras contain information on dust creation events (flaking) that occur during plasma operation. Owing to their interaction with the plasma, flakes entering the plasma leave straight, line-like visible traces in the images. By analyzing these traces with image processing, we obtained the temporal evolution, spatial distribution, and statistics of dust creation events during the DITS campaign in Tore Supra.

  17. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  18. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  19. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  20. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  1. Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction

    DTIC Science & Technology

    2011-01-01

    3.4 Result of Software Usage The system is set up with a display screen or projector, and two cameras installed in...structure and 3D file/scene visualization tool) by utilizing multi-modal modalities, including head, pose, eye, expressions, hand gesture, body gesture...Dera, T., Bardins, S., Schneider, E., and Brandt, T., "Mobile Eye Tracking as a Basis for Real-Time Control of a Gaze Driven Head-Mounted Video Camera"

  2. Automatic detection of camera translation in eye video recordings using multiple methods.

    PubMed

    Karmali, Faisal; Shelhamer, Mark

    2005-04-01

    A concern with video eye movement tracking is that movement of the camera headset relative to the head creates an artifact of eye movement in pupil-detection software. We describe the development of, and compare the results of, three automatic image processing algorithms to measure camera movement. The best of the algorithms has an average accuracy of 1.3 pixels, equivalent to 0.49 deg with our eye tracking system.

  3. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  4. Lights! Camera! Action!: video projects in the classroom.

    PubMed

    Epstein, Carol Diane; Hovancsek, Marcella T; Dolan, Pamela L; Durner, Erin; La Rocco, Nicole; Preiszig, Patricia; Winnen, Caitlin

    2003-12-01

    We report on two classroom video projects intended to promote active student involvement in their classroom experience during a year-long medical-surgical nursing course. We implemented two types of projects, Nursing Grand Rounds and FPBTV. The projects are templates that can be applied to any nursing specialty and can be implemented without the use of video technology. During the course of several years, both projects have proven effective in encouraging students to promote pattern recognition of characteristic features of common illnesses, to develop teamwork strategies, and to practice their presentation skills in a safe environment among their peers. The projects appealed to students because they increased retention of information and immersed students in the experience of becoming experts about an illness or a family of medications. These projects have enabled students to become engaged and invested in their own learning in the classroom.

  5. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
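    The consistency test described in the claim, checking which 2-D feature points project consistently from their candidate 3-D model counterparts under a given pose, can be sketched with a pinhole projection and a reprojection-error threshold. The intrinsics, pose, and tolerance below are illustrative assumptions, not the patent's method.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3-D model points into the image with a pinhole camera model."""
    cam = pts3d @ R.T + t              # world -> camera coordinates
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def consistent_matches(K, R, t, pts3d, pts2d, tol=2.0):
    """Indices of correspondences whose reprojection error is below tol pixels."""
    err = np.linalg.norm(project(K, R, t, pts3d) - pts2d, axis=1)
    return np.flatnonzero(err < tol)

# Synthetic scene: 20 model points in front of the camera, one bad match.
rng = np.random.default_rng(2)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts3d = rng.uniform(-1.0, 1.0, (20, 3))
pts2d = project(K, R, t, pts3d)
pts2d[7] += 50.0                       # corrupt one correspondence
inliers = consistent_matches(K, R, t, pts3d, pts2d)
```

    In practice the pose itself would be estimated jointly with the inlier set (e.g., by a RANSAC-style search over candidate correspondences); here the pose is given so only the consistency check is shown.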

  6. Cost-effective multi-camera array for high quality video with very high dynamic range

    NASA Astrophysics Data System (ADS)

    Keinert, Joachim; Wetzel, Marcus; Schöberl, Michael; Schäfer, Peter; Zilly, Frederik; Bätz, Michel; Fößel, Siegfried; Kaup, André

    2014-03-01

    Temporal bracketing can create images with a higher dynamic range than the underlying sensor. Unfortunately, moving objects cause disturbing artifacts. Moreover, the combination with high frame rates is almost unachievable, since a single video frame requires multiple sensor readouts. The combination of multiple synchronized side-by-side cameras equipped with different attenuation filters promises a remedy, since all exposures can be performed at the same time, with the same duration, at the playout video frame rate. However, a disparity correction is needed to compensate for the spatial displacement of the cameras. Unfortunately, the requirements for a high quality disparity correction contradict the goal of increasing dynamic range. When using two cameras, disparity correction needs objects to be properly exposed in both cameras. In contrast, a dynamic range increase needs the cameras to capture different luminance ranges. As this contradiction has not been addressed in the literature so far, this paper proposes a novel solution based on a three camera setup. It enables accurate determination of the disparities and an increase of the dynamic range by nearly a factor of two while still limiting costs. Compared to a two camera solution, the mean opinion score (MOS) is improved by 13.47 units on average for the Middlebury images.
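The disparity-correction step that the abstract identifies as the crux can be sketched with classic block matching on a rectified pair: find the horizontal shift that best aligns a patch between the two properly exposed cameras, then warp the attenuated camera's image by that disparity before merging. The sketch below uses 1-D signals as stand-ins for image rows; all names and sizes are illustrative, not the paper's method.

```python
# Hypothetical 1-D block-matching disparity search (sum of absolute
# differences) as used before merging multi-camera exposures.

def block_match(left_row, right_row, x, radius=2, max_disp=8):
    """Return the disparity (shift) minimizing the SAD around column x."""
    patch = left_row[x - radius:x + radius + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        x2 = x - d
        if x2 - radius < 0:
            break
        cand = right_row[x2 - radius:x2 + radius + 1]
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

left = [0, 0, 0, 10, 50, 10, 0, 0, 0, 0, 0, 0]
right = [0, 10, 50, 10, 0, 0, 0, 0, 0, 0, 0, 0]  # same edge shifted by 2
print(block_match(left, right, x=4))  # expect disparity 2
```

The contradiction the paper resolves shows up here: SAD matching only works when the patch is properly exposed in both rows, which is exactly what differently attenuated cameras violate.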

  7. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20deg.S, where hydrothermal plume signatures were previously detected. DIDSON was mounted on top of Shinkai6500 in order to capture acoustic video images of hydrothermal plumes. Seven dives of Shinkai6500 were conducted during this cruise, and acoustic video images of the hydrothermal plumes were captured in three of them. These are among the few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  8. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source (HTPD 08 paper)

    SciTech Connect

    Haugh, M; Schneider, M B

    2008-04-28

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 µm square pixels, and 15 µm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.
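Once a flat-field map like the one measured above is in hand, applying it is straightforward: divide each raw frame by the flat field normalized to unit mean, which removes the pixel-to-pixel sensitivity variation. A minimal sketch on a 1-D strip of pixels; the values are illustrative, not NIF calibration data.

```python
# Hypothetical flat-field correction: raw frame divided by normalized flat.

def flat_field_correct(raw, flat):
    """Normalize the flat to unit mean, then divide it out of the frame."""
    mean_flat = sum(flat) / len(flat)
    return [r * mean_flat / f for r, f in zip(raw, flat)]

# A uniform source seen through a detector whose sensitivity varies +/-4%:
flat = [0.96, 1.00, 1.04, 1.00]
raw = [96.0, 100.0, 104.0, 100.0]   # uniform scene modulated by the flat
print(flat_field_correct(raw, flat))  # each pixel back to ~100.0
```

Since the abstract reports that the sensitivity variation is energy dependent, a separate flat field would be needed for each spectral band of interest.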

  9. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
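The spinning-wheel illusion discussed in this abstract follows directly from aliasing: a spoke-passing frequency above half the frame rate folds back into the range [-fs/2, fs/2), so the wheel can appear slowed, stopped, or spinning backwards. A minimal sketch of the folding formula:

```python
# Aliased (apparent) frequency of a periodic motion sampled by a camera.

def apparent_frequency(f_true, f_sample):
    """Fold f_true into the Nyquist interval [-f_sample/2, f_sample/2)."""
    return (f_true + f_sample / 2.0) % f_sample - f_sample / 2.0

fs = 24.0  # camera frame rate in frames per second
print(apparent_frequency(24.0, fs))  # spoke rate equals fs -> appears stopped
print(apparent_frequency(23.0, fs))  # slightly slower -> appears to reverse
```

Negative apparent frequency is the reversed-rotation illusion; zero is the familiar "stationary wheel" seen in film.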

  10. Video content analysis on body-worn cameras for retrospective investigation

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  11. Moving camera moving object segmentation in an MPEG-2 compressed video sequence

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2006-01-01

    In this paper, we address the problem of camera and object motion detection in the compressed domain. The estimation of camera motion and the segmentation of moving objects have been widely studied in a variety of contexts for video analysis, because they provide essential clues for interpreting the high-level semantic meaning of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. The proposed algorithm uses MPEG-2 compressed motion vectors to perform a spatial and temporal interpolation over several adjacent frames. An iterative rejection scheme based upon the affine model is exploited to detect global camera motion. The foreground spatiotemporal objects are separated from the background by applying a temporal consistency check to the output of the iterative segmentation. This consistency check helps consolidate the resulting foreground blocks and weed out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.
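The iterative-rejection idea can be sketched compactly: fit a global camera motion to the block motion vectors, discard vectors far from the fit, and refit; the persistently rejected vectors mark candidate foreground blocks. For brevity the sketch fits a pure translation rather than the full affine model the paper uses; names and thresholds are illustrative.

```python
# Hypothetical iterative rejection on MPEG motion vectors, simplified to a
# translational (instead of affine) global motion model.

def estimate_global_motion(vectors, iterations=3, thresh=2.0):
    inliers = list(vectors)
    gx = gy = 0.0
    for _ in range(iterations):
        gx = sum(v[0] for v in inliers) / len(inliers)
        gy = sum(v[1] for v in inliers) / len(inliers)
        inliers = [v for v in vectors
                   if abs(v[0] - gx) + abs(v[1] - gy) <= thresh] or inliers
    outliers = [v for v in vectors if abs(v[0] - gx) + abs(v[1] - gy) > thresh]
    return (gx, gy), outliers

# Mostly a camera pan of (+5, 0); one block moves differently (the object).
mvs = [(5.0, 0.0)] * 8 + [(-3.0, 4.0)]
motion, foreground = estimate_global_motion(mvs)
print(motion, foreground)
```

The temporal consistency check described in the abstract would then be applied across frames to keep only foreground blocks that persist.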

  12. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1 In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  13. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    SciTech Connect

    Castleberry, J.L.

    1994-09-19

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space, during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel for leakage using a Snoop solution and resolving the leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss of purge operation the illumination of the amber light (PURGE FAILED) will be verified.

  14. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  15. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  16. Development of a compact fast CCD camera and resonant soft x-ray scattering endstation for time-resolved pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Doering, D.; Chuang, Y.-D.; Andresen, N.; Chow, K.; Contarato, D.; Cummings, C.; Domning, E.; Joseph, J.; Pepper, J. S.; Smith, B.; Zizka, G.; Ford, C.; Lee, W. S.; Weaver, M.; Patthey, L.; Weizeorick, J.; Hussain, Z.; Denes, P.

    2011-07-01

    The designs of a compact, fast CCD (cFCCD) camera, together with a resonant soft x-ray scattering endstation, are presented. The cFCCD camera consists of a highly parallel, custom, thick, high-resistivity CCD, readout by a custom 16-channel application specific integrated circuit to reach the maximum readout rate of 200 frames per second. The camera is mounted on a virtual-axis flip stage inside the RSXS chamber. When this flip stage is coupled to a differentially pumped rotary seal, the detector assembly can rotate about 100°/360° in the vertical/horizontal scattering planes. With a six-degrees-of-freedom cryogenic sample goniometer, this endstation has the capability to detect the superlattice reflections from the electronic orderings showing up in the lower hemisphere. The complete system has been tested at the Advanced Light Source, Lawrence Berkeley National Laboratory, and has been used in multiple experiments at the Linac Coherent Light Source, SLAC National Accelerator Laboratory.

  17. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  18. A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.

    PubMed

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single charge-coupled device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  19. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    PubMed Central

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single charge-coupled device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

  20. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
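The HDR frame-generation step named above is conventionally done by dividing each pixel by its exposure time and averaging the results with a weight that downrates under- and over-exposed samples. The sketch below uses a simple hat-shaped weight on three exposures; it illustrates the general technique, not the paper's hardware pipeline, and all names and thresholds are illustrative.

```python
# Hypothetical HDR radiance recovery from three bracketed exposures.

def weight(p, lo=0.05, hi=0.95):
    """Trust mid-range pixels; ignore near-black and saturated samples."""
    return 1.0 if lo <= p <= hi else 0.0

def merge_hdr(pixels, exposures):
    """pixels: one scene point observed at each exposure, scaled to [0, 1]."""
    num = sum(weight(p) * p / t for p, t in zip(pixels, exposures))
    den = sum(weight(p) for p in pixels)
    return num / den if den else 0.0

# Exposures of 1, 1/4 and 1/16 s: the longest saturates and is ignored;
# the other two agree on a scene radiance of 0.8.
print(merge_hdr([1.0, 0.2, 0.05], [1.0, 0.25, 0.0625]))
```

In a real camera pipeline this runs per pixel per frame, which is why the paper's contribution is doing it in dedicated hardware at 60 fps.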

  1. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed and successful applications of previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.

  2. A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Georgieva, E. M.; Huang, W.; Heaps, W. S.

    2012-01-01

    A portable remote sensing system for precision column measurements of methane has been developed, built and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera and has a potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared solutions which combines an uncooled InGaAs detector array working up to 1.7 micrometers. Custom software was developed in addition to the graphical user basic interface X-Control provided by the company to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes of the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.

  3. A Novel Method to Reduce Time Investment When Processing Videos from Camera Trap Studies

    PubMed Central

    Swinnen, Kristijn R. R.; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movements than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs. PMID:24918777
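The variation-based filter described above can be sketched as a mean frame-to-frame pixel-change score: recordings scoring below a threshold are flagged as probable non-target recordings. Frames are represented as flat lists of gray values; the threshold and frame sizes are illustrative and would need tuning per study site.

```python
# Hypothetical motion-variation filter for camera trap recordings.

def motion_score(frames):
    """Average absolute per-pixel change between consecutive frames."""
    total = count = 0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur))
        count += len(cur)
    return total / count if count else 0.0

def keep_recording(frames, threshold=5.0):
    return motion_score(frames) >= threshold

still = [[10, 10, 10]] * 4                                     # empty scene
moving = [[10, 10, 10], [10, 80, 10], [80, 80, 10], [10, 10, 80]]
print(keep_recording(still), keep_recording(moving))
```

As the paper notes, the trade-off is set by the threshold: lowering it keeps more target recordings at the cost of retaining more non-target ones.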

  4. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  5. Flat Field Anomalies in an X-Ray CCD Camera Measured Using a Manson X-Ray Source

    SciTech Connect

    Michael Haugh

    2008-03-01

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. It determines how accurately NIF can point the laser beams and is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10kV and 2mA, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE≈12. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1.5% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. The efficiency pattern follows the properties of Si. The maximum quantum efficiency is 0.71. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation was >8% at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was less than the measurement uncertainty below 4 keV. We were also able to observe debris on the CCD chip. The debris showed maximum contrast at the lowest energy used, 930 eV, and disappeared by 4 keV. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  6. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified on a low-cost programmable device. The real-time camera system achieves 1270 × 792 resolution in combination with a few external components and demonstrates each DSP function.
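One of the DSP stages listed above, white balance, is commonly implemented with the gray-world assumption: the average scene color is neutral, so each channel is scaled until the channel means match. A software sketch of that general technique (not the paper's specific hardware implementation; names are illustrative):

```python
# Hypothetical gray-world white balance on separate R, G, B channel lists.

def gray_world_balance(r, g, b):
    """Scale R and B so their means equal the green-channel mean."""
    mr, mg, mb = (sum(c) / len(c) for c in (r, g, b))
    return ([x * mg / mr for x in r], list(g), [x * mg / mb for x in b])

# A scene with a strong red cast (red mean twice the green mean):
r = [200.0, 120.0]
g = [100.0, 60.0]
b = [80.0, 48.0]
r2, g2, b2 = gray_world_balance(r, g, b)
print(r2, g2, b2)  # all three channel means now equal 80
```

In hardware, the per-channel gains would be computed once per frame (or smoothed over frames) and applied by fixed-point multipliers in the pixel pipeline.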

  7. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  8. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    SciTech Connect

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser, which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the referenced Structural Design Specification (SDS) document (6). The details of the supporting engineering calculations are documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  9. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-06-25

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
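The occlusion test implied by steps (ii) and (iii) can be sketched with a standard ground-plane assumption: a calibrated camera lets object depth be inferred from where the object's feet meet the ground in the image, and a nearer object whose bounding box overlaps a farther one flags the farther one as occluded. The depth model below (inversely proportional to the image-row distance from the horizon) is a common simplification, not the paper's exact formulation; all names and constants are illustrative.

```python
# Hypothetical depth-ordering occlusion check for a calibrated camera.

def ground_plane_depth(y_bottom, y_horizon=100.0, scale=1000.0):
    """Deeper objects have their feet closer to the horizon row."""
    return scale / (y_bottom - y_horizon)

def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def occluded(box_far, box_near):
    """box = (x1, y1, x2, y2); y2 is the bottom (feet) row."""
    d_far = ground_plane_depth(box_far[3])
    d_near = ground_plane_depth(box_near[3])
    return d_near < d_far and boxes_overlap(box_far, box_near)

far_person = (50, 120, 80, 160)    # feet near the horizon -> deeper
near_person = (60, 150, 100, 300)  # feet low in the frame -> closer
print(occluded(far_person, near_person))
```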

  10. An efficient coding scheme for surveillance videos captured by stationary cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Xianguo; Liang, Luhong; Huang, Qian; Liu, Yazhou; Huang, Tiejun; Gao, Wen

    2010-07-01

    In this paper, a new scheme is presented to improve the coding efficiency of sequences captured by stationary (or static) cameras for video surveillance applications. We introduce two novel kinds of frames (namely, background frames and difference frames) to represent the foreground and background of input frames without object detection, tracking or segmentation. The background frame is built using a background modeling procedure and periodically updated while encoding. The difference frame is calculated using the input frame and the background frame. A sequence structure is proposed to generate high quality background frames and efficiently code difference frames without delay, so that surveillance videos can be easily compressed by encoding the background frames and difference frames in a traditional manner. In practice, the H.264/AVC encoder JM 16.0 is employed as a built-in coding module to encode those frames. Experimental results on eight indoor and outdoor surveillance videos show that the proposed scheme achieves a 0.12 dB to 1.53 dB gain in PSNR over the JM 16.0 anchor specially configured for surveillance videos.
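The two frame types introduced above can be sketched with a running-average background model (a stand-in for the paper's background modeling procedure) and a difference frame that carries only what the input adds on top of the background; a conventional encoder then compresses both streams. Frames are flat lists of pixel values; names are illustrative.

```python
# Hypothetical background-frame / difference-frame decomposition.

def update_background(background, frame, alpha=0.05):
    """Running-average background model."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def difference_frame(frame, background):
    return [f - b for f, b in zip(frame, background)]

def reconstruct(diff, background):
    return [d + b for d, b in zip(diff, background)]

background = [50.0, 50.0, 50.0, 50.0]
frame = [50.0, 200.0, 200.0, 50.0]       # a bright object enters the scene
diff = difference_frame(frame, background)
print(diff)                               # mostly zeros: cheap to encode
print(reconstruct(diff, background) == frame)  # lossless round trip
```

The coding gain comes from the difference frame being near-zero wherever the scene matches the background, which is most of the time for a static surveillance camera.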

  11. Biomechanical and mathematical analysis of human movement in medical rehabilitation science using time-series data from two video cameras and force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Box, Elgene O.; Murai, Shunji; Mori, Eiji; Wada, Takao; Kurita, Masahiro; Iritani, Makoto; Kuroki, Yoshikatsu

    1994-08-01

    In medical rehabilitation science, quantitative understanding of patient movement in 3-D space is very important. A patient with a joint disorder will experience its influence on other body parts during daily movement, and joint alignment during movement can improve over the course of therapy. In this study, the newly developed system is composed of two non-metric CCD video cameras and a force-plate sensor, which are controlled simultaneously by a personal computer. With this system, time-series digital data from 3-D image photogrammetry, together with foot pressure and its center position, provide efficient information for biomechanical and mathematical analysis of human movement. Points both specific to and common across patient movements are identified. This study suggests a broader, more quantitative understanding in medical rehabilitation science.

  12. Analysis of the technical biases of meteor video cameras used in the CILBO system

    NASA Astrophysics Data System (ADS)

    Albin, Thomas; Koschny, Detlef; Molau, Sirko; Srama, Ralf; Poppe, Björn

    2017-02-01

    In this paper, we analyse the technical biases of two intensified video cameras, ICC7 and ICC9, of the double-station meteor camera system CILBO (Canary Island Long-Baseline Observatory). This is done to thoroughly understand the effects of the camera systems on the scientific data analysis. We expect a number of errors or biases that come from the system: instrumental errors, algorithmic errors and statistical errors. We analyse different observational properties, in particular the detected meteor magnitudes, apparent velocities, estimated goodness-of-fit of the astrometric measurements with respect to a great circle and the distortion of the camera. We find that, owing to a loss of sensitivity towards the edges, each camera detects only about 55% of the meteors it could detect if it had constant sensitivity. This detection efficiency is a function of the apparent meteor velocity. We analyse the optical distortion of the system and the goodness-of-fit of individual meteor position measurements relative to a fitted great circle. The astrometric error is dominated by uncertainties in the measurement of the meteor attributed to blooming, distortion of the meteor image and the development of a wake for some meteors. The distortion of the video images can be neglected. We compare the results of the two identical camera systems and find systematic differences. For example, the peak magnitude distribution for ICC9 is shifted by about 0.2-0.4 mag towards fainter magnitudes. This can be explained by the different pointing directions of the cameras. Since both cameras monitor the same volume in the atmosphere roughly between the two islands of Tenerife and La Palma, one camera (ICC7) points towards the west, the other one (ICC9) to the east. In particular, in the morning hours the apex source is close to the field-of-view of ICC9. Thus, these meteors appear slower, increasing the dwell time on a pixel. This is favourable for the detection of a meteor of a given

  13. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the performed task, e.g. providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or can utilize each other's results. The spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to the combined multi-frame and multi-band processing.
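
    The simplest baseline against which such demosaicking is measured can be sketched: subsampling each band of an n x n pixel-level filter mosaic into its own low-resolution image. The 3x3 layout matches the 9-band camera described; everything else below is illustrative.

```python
import numpy as np

# Naive "demosaicking" of a pixel-level spectral filter array: each band of a
# 3x3 mosaic is pulled out by strided slicing into a low-resolution image.
# (The paper's multi-band super-resolution approach is far more sophisticated.)

def split_bands(mosaic, n=3):
    """Return an n*n-band list of subsampled images from an n x n mosaic."""
    return [mosaic[i::n, j::n] for i in range(n) for j in range(n)]

# A 6x6 frame whose pixel values encode their band index in a 3x3 pattern:
rows, cols = np.indices((6, 6))
mosaic = (rows % 3) * 3 + (cols % 3)
bands = split_bands(mosaic)
```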

  14. An explanation for camera perspective bias in voluntariness judgment for video-recorded confession: Suggestion of cognitive frame.

    PubMed

    Park, Kwangbai; Pyo, Jimin

    2012-06-01

    Three experiments were conducted to test the hypothesis that difference in voluntariness judgment for a custodial confession filmed in different camera focuses ("camera perspective bias") could occur because a particular camera focus conveys a suggestion of a particular cognitive frame. In Experiment 1, 146 juror eligible adults in Korea showed a camera perspective bias in voluntariness judgment with a simulated confession filmed with two cameras of different focuses, one on the suspect and the other on the detective. In Experiment 2, the same bias in voluntariness judgment emerged without cameras when the participants were cognitively framed, prior to listening to the audio track of the videos used in Experiment 1, by instructions to make either a voluntariness judgment for a confession or a coerciveness judgment for an interrogation. In Experiment 3, the camera perspective bias in voluntariness judgment disappeared when the participants viewing the video focused on the suspect were initially framed to make coerciveness judgment for the interrogation and the participants viewing the video focused on the detective were initially framed to make voluntariness judgment for the confession. The results in combination indicated that a particular camera focus may convey a suggestion of a particular cognitive frame in which a video-recorded confession/interrogation is initially represented. Some forensic and policy implications were discussed.

  15. First results from newly developed automatic video system MAIA and comparison with older analogue cameras

    NASA Astrophysics Data System (ADS)

    Koten, P.; Páta, P.; Fliegel, K.; Vítek, S.

    2013-09-01

    The new automatic video system for meteor observations, MAIA, was developed in recent years [1]. The goal is to replace the older analogue cameras and provide a platform for continuous year-round observations from two different stations. Here we present the first results obtained during the testing phase as well as the first double-station observations. A comparison with the older analogue cameras is provided too. MAIA (Meteor Automatic Imager and Analyzer) is based on the digital monochrome camera JAI CM-040 and the well-proven image intensifier XX1332 (Figure 1). The camera provides a spatial resolution of 776 x 582 pixels. The maximum frame rate is 61.15 frames per second. A fast Pentax SMS FA 1.4/50mm lens is used as the input element of the optical system. The resulting field-of-view is about 50º in diameter. The new system was first used in a semiautomatic regime for the observation of the Draconid outburst on 8th October, 2011. Both cameras recorded more than 160 meteors. Additional hardware and software were developed in 2012 to enable automatic observation and basic processing of the data. The system usually records video sequences for the whole night. During the daytime it searches the recordings for moving objects, saves them as short sequences, and clears the hard drives to allow further observations. Initial laboratory measurements [2] and simultaneous observations with the older system show significant improvement of the obtained data. Table 1 shows a comparison of the basic parameters of both systems. In this paper we will present a comparison of the double-station data obtained using both systems.

  16. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera

    PubMed Central

    2016-01-01

    Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  17. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the parameters used to evaluate their performance, and describes some of the key features of different camera formats. It also explains how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, slow-scan CCD cameras are not an option. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels.
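
    The signal-to-noise trade-offs discussed above can be illustrated with a toy noise model: shot noise, dark current, and read noise combine in quadrature, and electron multiplication suppresses read noise at the cost of an excess noise factor. All parameter values below are rough illustrative assumptions, not figures from the chapter.

```python
import math

# Back-of-the-envelope SNR model for comparing camera formats at low light.

def snr(signal_e, dark_e, read_noise_e, em_gain=1.0):
    """SNR for a pixel, in electrons; EM gain adds the usual sqrt(2) excess
    noise factor while effectively dividing down the read noise."""
    excess = math.sqrt(2.0) if em_gain > 1.0 else 1.0
    noise = math.sqrt((excess ** 2) * (signal_e + dark_e)
                      + (read_noise_e / em_gain) ** 2)
    return signal_e / noise

# Dim scene (50 photoelectrons): a slow-scan CCD with 3 e- read noise vs. a
# fast video CCD with 30 e- read noise vs. an EMCCD with gain 100.
slow_scan = snr(50, 1, 3)
video = snr(50, 1, 30)
emccd = snr(50, 1, 30, em_gain=100.0)
```

    Consistent with the chapter's ordering: the slow-scan CCD wins outright when speed does not matter, while the EMCCD recovers most of that advantage at video rate.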

  18. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
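
    One classic no-reference check of the kind described, a focus/blur measure, can be sketched with the variance of a Laplacian response, which drops for defocused images. The kernel and threshold below are common illustrative choices, not the authors' specific algorithms.

```python
import numpy as np

# Variance-of-Laplacian focus measure: sharp images have strong second
# derivatives everywhere; blurred (or flat) images do not.

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_variance(img):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # naive valid-mode convolution
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

def looks_out_of_focus(img, threshold=10.0):
    return laplacian_variance(img) < threshold

# Sharp checkerboard vs. a flat (maximally blurred) image.
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
flat = np.full((16, 16), 128.0)
```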

  19. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side-view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object-space information in a mathematical sense, which is achieved by means of the calibration and orientation parameters of the video frame images. This paper presents solutions to the problems of using side-view camera images described here. PMID:22399909
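
    The second step can be sketched directly: convert a tracked point's pixel displacement between frames into object-space speed. A single metres-per-pixel scale factor stands in for the full calibration and orientation transformation and is an assumption for illustration.

```python
# Pixel displacement + elapsed time + image-to-object scale -> speed.

def speed_kmh(p1, p2, frame_dt, metres_per_pixel):
    """Speed of one tracked point between two successive frames."""
    dx = (p2[0] - p1[0]) * metres_per_pixel
    dy = (p2[1] - p1[1]) * metres_per_pixel
    dist = (dx * dx + dy * dy) ** 0.5   # metres in object space
    return dist / frame_dt * 3.6        # m/s -> km/h

# A point moving 25 px between frames at 25 fps, with 0.02 m per pixel:
v = speed_kmh((100, 50), (125, 50), 1 / 25.0, 0.02)
```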

  20. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  1. High resolution three-dimensional flash LIDAR system using a polarization modulating Pockels cell and a micro-polarizer CCD camera.

    PubMed

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong; Kim, Jae-Wan; Kim, Jomsool; Choi, Soungwoong

    2016-12-26

    An innovative flash LIDAR (light detection and ranging) system with high spatial resolution and high range precision is proposed in this paper. The proposed system consists of a polarization modulating Pockels cell (PMPC) and a micro-polarizer CCD camera (MCCD). The Pockels cell changes its polarization state with respect to time after a laser pulse is emitted from the system, so the polarization state of the laser-return pulse depends on its arrival time. The MCCD measures the intensity of the returning laser pulse to calculate the polarization state, which gives the range. A spatial resolution of 0.12 mrad and a range precision of 5.2 mm at 16 m were obtained in this experiment.
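
    The measurement principle can be sketched under a simplifying assumption: if the Pockels cell rotates the return pulse's polarization linearly from 0 to 90 degrees over a ramp time, Malus' law recovers the rotation angle, hence the arrival time and range, from the two orthogonal intensity components the micro-polarizer camera records. The linear ramp and its duration are illustrative assumptions, not the paper's actual modulation scheme.

```python
import math

# Range from the two orthogonal polarization intensities, assuming a linear
# 0-90 degree polarization ramp of duration ramp_time (an assumption).

C = 299_792_458.0  # speed of light, m/s

def range_from_polarization(i_parallel, i_perpendicular, ramp_time):
    # Malus' law: I_par = I0 cos^2(theta), I_perp = I0 sin^2(theta)
    theta = math.atan2(math.sqrt(i_perpendicular), math.sqrt(i_parallel))
    arrival_time = (theta / (math.pi / 2)) * ramp_time
    return C * arrival_time / 2.0  # halve for the round trip

# Equal components -> theta = 45 deg -> halfway through a 200 ns ramp.
r = range_from_polarization(0.5, 0.5, 200e-9)
```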

  2. Acute gastroenteritis and video camera surveillance: a cruise ship case report.

    PubMed

    Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

    2014-01-01

    A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having had the accident and even having returned the towel stained with diarrhoea to the pool towel bin. She was isolated until the next port, where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. Quickly identifying and isolating contagious passengers and crew, and ensuring their compliance, are key elements of outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks.

  3. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    NASA Astrophysics Data System (ADS)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  4. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  5. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  6. People counting and re-identification using fusion of video camera and laser scanner

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Olivera, Santiago; Wagley, Raj

    2016-05-01

    We present a system for people counting and re-identification. It can be used by transit and homeland security agencies. Under the FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and a video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.

  7. Development and calibration of acoustic video camera system for moving vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Lian, Xiaomin

    2011-05-01

    In this paper, a new acoustic video camera system is developed and its calibration method is established. The system is built on binocular vision and acoustical holography technology. With the binocular vision method, the spatial distance between the microphone array and the moving vehicle is obtained, and the sound reconstruction plane can be established close to the moving vehicle surface automatically. The sound video is then regenerated close to the moving vehicle accurately by the acoustic holography method. With this system, moving and stationary sound sources are treated differently and automatically, which makes the sound visualization of moving vehicles quicker, more intuitive, and more accurate. To verify the system, experiments with a stationary speaker and a non-stationary speaker were carried out. Further verification experiments with an outdoor moving vehicle were also conducted. Successful video visualization results not only confirm the validity of the system but also suggest that it can be a useful tool in vehicle noise identification, because it allows users to locate noise sources easily from the videos. We believe the newly developed system will be of great potential in the noise identification and control of moving vehicles.

  8. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images utilizing a novel MOEMS modulator having a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850nm) images are required to generate a single frame of depth image. Captured video images of a moving object frequently show motion drag between sequentially captured IR images, which results in the so-called `motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel `single shot' TOF 3D camera architecture generating a single depth image out of synchronized captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, hence the motion blur problem is canceled. The resulting performance is very useful in the application of 3D cameras to human-machine interaction devices, such as user interfaces of TVs, monitors, or hand-held devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the `single shot' frame rate.
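
    The conversion from the four phase-shifted IR images to depth can be sketched with the conventional 4-sample ToF formula; the modulation frequency below matches the 20 MHz shutter speed mentioned, but the correlation-sample model is a generic textbook one, not necessarily the authors'.

```python
import math

# Standard 4-phase ToF depth: correlation samples at 0/90/180/270 degrees
# give the phase via atan2, and the phase maps linearly to distance.

C = 299_792_458.0

def tof_depth(q0, q90, q180, q270, f_mod):
    """Depth from four correlation samples at 0/90/180/270 degrees."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# A target at exactly 1/8 of the unambiguous range (phase = pi/4), using
# idealized correlation samples q_k = cos(phase - k * pi/2):
f = 20e6  # 20 MHz modulation, matching the MOEMS shutter speed
q = [math.cos(math.pi / 4 - k * math.pi / 2) for k in range(4)]
d = tof_depth(q[0], q[1], q[2], q[3], f)
```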

  9. Gain, Level, And Exposure Control For A Television Camera

    NASA Technical Reports Server (NTRS)

    Major, Geoffrey J.; Hetherington, Rolfe W.

    1992-01-01

    An automatic-level-control/automatic-gain-control (ALC/AGC) system for a charge-coupled-device (CCD) color television camera prevents overloading in bright scenes by measuring the brightness of the scene from the red, green, and blue output signals and processing it into adjustments of the video amplifiers and of the iris on the camera lens. The system is faster, does not distort video brightness signals, and is built with smaller components.
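
    The measure-then-adjust loop can be sketched as follows; the luma weights and target level are standard/assumed values, not taken from the brief.

```python
# Toy ALC/AGC step: estimate scene brightness from R, G, B, then derive a
# multiplicative gain correction toward a target level.

def scene_brightness(r, g, b):
    # Rec. 601 luma weights as a stand-in for the camera's brightness measure
    return 0.299 * r + 0.587 * g + 0.114 * b

def gain_adjustment(brightness, target=0.5):
    """Multiplicative gain that would bring the scene to the target level."""
    return target / max(brightness, 1e-6)

# A bright scene (gray 0.8) is attenuated; a dim one (gray 0.2) is boosted.
bright_gain = gain_adjustment(scene_brightness(0.8, 0.8, 0.8))
dim_gain = gain_adjustment(scene_brightness(0.2, 0.2, 0.2))
```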

  10. ProgRes 3000: a digital color camera with a 2-D array CCD sensor and programmable resolution up to 2994 x 2320 picture elements

    NASA Astrophysics Data System (ADS)

    Lenz, Reimar K.; Lenz, Udo

    1990-11-01

    A newly developed imaging principle, two-dimensional microscanning with Piezo-controlled Aperture Displacement (PAD), allows for high image resolutions. The advantages of line scanners (high resolution) are combined with those of CCD area sensors (high light sensitivity, geometrical accuracy and stability, easy focussing, illumination control, and selection of field of view by means of TV real-time imaging). A custom-designed sensor, optimized for small sensor-element apertures and color fidelity, eliminates the need for color filter revolvers or mechanical shutters and guarantees good color convergence. By altering the computer-controlled microscan patterns, spatial and temporal resolution become interchangeable, their product being a constant. The highest temporal resolution is TV real-time (50 fields/sec); the highest spatial resolution is 2994 x 2320 picture elements (Pels) for each of the three color channels (28 MBytes of raw image data in 8 sec). Thus, for the first time, it becomes possible to take 35mm-slide-quality still color images of natural 3D scenes by purely electronic means. Nearly "square" Pels as well as hexagonal sampling schemes are possible. Excellent geometrical accuracy and low noise are guaranteed by sensor-element (Sel) synchronous analog-to-digital conversion within the camera head. The camera's principle of operation and the procedure to calibrate the two-dimensional piezo-mechanical motion with an accuracy of better than 0.2 µm RMSE in image space are explained. The remaining positioning inaccuracy may be further

  11. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  12. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  13. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
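
    The look-up-table control described can be sketched as a table keyed on the light-sensor reading; every value below is invented for illustration, and only the structure (conditions selecting shutter, gain, and pedestal) follows the abstract.

```python
# Hypothetical roadside look-up table: an ambient-light reading selects the
# camera parameters. All thresholds and settings are illustrative only; note
# that every shutter speed stays at 1/2000 sec or faster, as in the abstract.

# (lux threshold, shutter denominator, gain dB, pedestal)
LOOKUP_TABLE = [
    (50_000, 10_000, 0, 10),   # bright sun
    (5_000, 4_000, 6, 15),     # overcast / twilight
    (100, 2_000, 12, 20),      # night with plate illumination
]

def camera_settings(lux):
    for threshold, shutter, gain, pedestal in LOOKUP_TABLE:
        if lux >= threshold:
            return {"shutter": f"1/{shutter}", "gain_db": gain,
                    "pedestal": pedestal}
    # fall back to the darkest row
    _, shutter, gain, pedestal = LOOKUP_TABLE[-1]
    return {"shutter": f"1/{shutter}", "gain_db": gain, "pedestal": pedestal}

day = camera_settings(80_000)
night = camera_settings(120)
```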

  14. A miniature spectrometer using color CCD and frame calculus technique

    NASA Astrophysics Data System (ADS)

    Wan, Wei; Zhang, Guoping; Chen, Minghong; Liu, Minmin

    2005-01-01

    A design for a spectrometer is presented, which uses a holographic grating and a two-dimensional color CCD camera connected to a PC via a video port. In image post-processing, a real-time frame calculus technique and a non-linear filter are applied to provide higher image quality and better resistance to background noise. With an improved zoom mechanism, the device has a wide dynamic range of resolution and high frequency coverage, since it can gather more spectral information than a linear black-and-white CCD. Spectrum-analysis experiments for water-quality detection indicate that the device can meet various analysis requirements at low cost.
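    The combination of a frame operation and a non-linear filter described above can be sketched as frame subtraction followed by a median filter; this is one plausible reading of the abstract's "frame calculus", and the 3x3 kernel is an assumed choice:

```python
import numpy as np

def frame_calculus(frame, background, kernel=3):
    """Subtract a reference frame and apply a median (non-linear) filter
    to suppress background noise. Inputs are 2-D 8-bit grayscale arrays;
    the 3x3 median kernel is an assumption for illustration."""
    diff = frame.astype(np.int32) - background.astype(np.int32)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    # Simple 2-D median filter without SciPy: slide a kernel x kernel
    # window over an edge-padded copy of the difference image.
    pad = kernel // 2
    padded = np.pad(diff, pad, mode="edge")
    out = np.empty_like(diff)
    h, w = diff.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + kernel, j:j + kernel])
    return out
```

A single-pixel noise spike survives the subtraction but is removed by the median step, while uniform signal levels pass through unchanged.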

  15. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood and gyprock, and to transmit over long distances due to low atmospheric absorption, makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current millimeter-wave imaging systems either use single-detector or linear arrays that require scanning, or use two-dimensional arrays that are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact, lightweight camera, based on a 384 x 288 microbolometer-pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system, including the terahertz camera and different millimeter-wave sources, is provided.

  16. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that uses an HDD (Hard Disk Drive) as the recording medium. Semiconductor memories (DRAM, etc.) are the most common image-data recording media in existing high-speed digital video cameras. They are used extensively because of their high-speed writing and reading of picture data. The drawback is that their recording time is limited to only a few seconds because the data volume is very large. A recording time of several seconds is sufficient for many applications, but a much longer recording time is required in applications where the trigger timing is hard to predict. In recent years, the recording density of the HDD has improved dramatically, which has drawn more attention to its value as a long-recording-time medium. We conceived the idea that a compact system capable of long-time recording could be built if the HDD could serve as the memory unit for high-speed digital image recording. However, the data rate of such a system, recording 640 X 480 pixel pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of commonly used HDDs. So, we developed a dedicated image-compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
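    The data-rate arithmetic in the abstract can be checked directly; the sustained HDD write speed of 20 Mbyte/s used below is an illustrative figure for drives of that era, not a number from the paper:

```python
# Raw data rate of the uncompressed camera stream, as given in the abstract.
width, height, fps, bytes_per_pixel = 640, 480, 500, 1   # 8-bit grayscale
raw_rate = width * height * fps * bytes_per_pixel        # bytes per second
print(raw_rate / 1e6)    # 153.6 Mbyte/s, matching the abstract

# Assumed sustained HDD write speed (illustrative only).
hdd_write_rate = 20e6    # bytes per second
compression_needed = raw_rate / hdd_write_rate
print(round(compression_needed, 1))   # roughly 7.7:1 compression required
```

This shows why a dedicated compression stage, rather than a faster disk, was the practical route to matching the camera to the HDD.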

  17. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) gradual variations in radiance reveal steady flow-field extension and tube development; (b) discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video-camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these correlate with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events through simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
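    The spike criterion used above (radiance elevated more than two standard deviations above the mean) is simple to implement; a minimal sketch:

```python
import numpy as np

def radiance_spikes(radiance, k=2.0):
    """Flag time-series samples more than k standard deviations above
    the series mean, as used to pick out short-burst effusive events
    such as lava fountaining or lava-lake overflows."""
    r = np.asarray(radiance, dtype=float)
    threshold = r.mean() + k * r.std()
    return np.nonzero(r > threshold)[0]   # indices of flagged samples
```

An operational version would recompute the mean and standard deviation over a sliding window so that slow flow-field trends do not inflate the threshold.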

  18. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared with classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be treated as a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static-object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, the segmentation results are post-processed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
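    One common form of the adaptive background model described above is a running average with thresholded subtraction; the learning rate and threshold below are assumed values, and the paper's motion-gated update is simplified away:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background update. alpha is an assumed learning
    rate; the paper additionally gates the update on motion information."""
    return (1 - alpha) * background + alpha * frame.astype(float)

def foreground_mask(background, frame, threshold=25):
    """Binary foreground mask via thresholded background subtraction.
    A fixed threshold stands in for the paper's automatic thresholding."""
    return np.abs(frame.astype(float) - background) > threshold
```

Pixels flagged as foreground would then be grouped into blobs, cleaned of shadows in HSV space, and handed to the Kalman tracker.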

  19. Development of low-noise high-speed analog ASIC for X-ray CCD cameras and wide-band X-ray imaging sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Hiroshi; Hirose, Shin-nosuke; Imatani, Ritsuko; Nagino, Ryo; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Kitamura, Hisashi; Uchihori, Yukio

    2016-09-01

    We report on the development and performance evaluation of the mixed-signal Application Specific Integrated Circuit (ASIC) developed for the signal processing of onboard X-ray CCD cameras and various types of X-ray imaging sensors in astrophysics. Quick, low-noise readout is essential for pile-up-free imaging spectroscopy with a future X-ray telescope. Our goal is a readout noise of 5 e- r.m.s. at a pixel rate of 1 Mpix/s, about 10 times faster than those of currently operating detectors. We successfully developed a low-noise ASIC as the front-end electronics of the Soft X-ray Imager onboard Hitomi, which was launched on February 17, 2016. However, it has two analog-to-digital converters per chain due to its limited processing speed, and hence the gain difference must be corrected to obtain X-ray spectra. Furthermore, its input-equivalent noise is unsatisfactory (> 100 μV) at pixel rates above 500 kpix/s. We therefore upgraded the design of the ASIC with fourth-order ΔΣ modulators to enhance its inherent noise-shaping performance. Its performance was measured using pseudo-CCD signals at variable processing speeds. Although its input-equivalent noise is comparable to that of the conventional design, its integrated non-linearity (0.1%) improves to about half that of the conventional one. The radiation tolerance was also measured with regard to the total ionizing dose effect and single-event latch-up, using protons and xenon ions, respectively. The former experiment showed that none of the measured performance parameters changed after a dose corresponding to 590 years in a low Earth orbit. We also place an upper limit on the latch-up frequency of once per 48 years.
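    The noise-shaping idea behind the ΔΣ modulators can be illustrated with a first-order error-feedback loop; the ASIC uses fourth-order modulators, so this is a pedagogical sketch of the principle only, not the chip's architecture:

```python
def delta_sigma_1st_order(samples):
    """First-order delta-sigma modulator: the loop integrates the
    difference between the input and the fed-back 1-bit output (+/-1),
    so the quantization error is pushed to high frequencies where a
    later digital filter can remove it. For a constant input, the mean
    of the bitstream converges to the input value."""
    out, integrator = [], 0.0
    for x in samples:
        integrator += x - (out[-1] if out else 0.0)
        bit = 1.0 if integrator >= 0 else -1.0
        out.append(bit)
    return out
```

Averaging the 1-bit stream recovers the input: a DC input of 0.5 yields a bit pattern whose mean is 0.5, with the quantization error concentrated at high frequency.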

  20. Multiformat video and laser cameras: history, design considerations, acceptance testing, and quality control. Report of AAPM Diagnostic X-Ray Imaging Committee Task Group No. 1.

    PubMed

    Gray, J E; Anderson, W F; Shaw, C C; Shepard, S J; Zeremba, L A; Lin, P J

    1993-01-01

    Acceptance testing and quality control of video and laser cameras is relatively simple, especially with the use of the SMPTE test pattern. Photographic quality control is essential if one wishes to be able to maintain the quality of video and laser cameras. In addition, photographic quality control must be carried out with the film used clinically in the video and laser cameras, and with a sensitometer producing a light spectrum similar to that of the video or laser camera. Before the end of the warranty period a second acceptance test should be carried out. At this time the camera should produce the same results as noted during the initial acceptance test. With the appropriate acceptance and quality control the video and laser cameras should produce quality images throughout the life of the equipment.

  1. Identifying predators and fates of grassland passerine nests using miniature video cameras

    USGS Publications Warehouse

    Pietz, Pamela J.; Granfors, Diane A.

    2000-01-01

    Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests with cameras than at nests without them. Depredation events spanned more than 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of the fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in the types of sign left by different predator species, and variability of sign within species, suggest that evidence at nests is unreliable for identifying predators of grassland passerines.

  2. Optimizing Detection Rate and Characterization of Subtle Paroxysmal Neonatal Abnormal Facial Movements with Multi-Camera Video-Electroencephalogram Recordings.

    PubMed

    Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta

    2016-06-01

    Objectives We retrospectively analyzed the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods Since mid-June 2012, we have used multiple cameras, one of which points toward the newborns' faces. We evaluated vEEGs recorded in newborns in the study period between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracies obtained with the one-camera and multi-camera approaches. Results We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs that were positive with multiple cameras, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns.

  3. Television automatic video-line tester

    NASA Astrophysics Data System (ADS)

    Ge, Zhaoxiang; Tang, Dongsheng; Feng, Binghua

    1998-08-01

    The linearity of the telescope video-line is an important characteristic of geodetic instruments and micrometer telescopes. The 1-inch video-line tester, developed by the University of Shanghai for Science and Technology, has been adopted in the relevant instrument criteria and national metering regulations. However, its optical and film-based readout with visual alignment introduces subjective error and cannot provide detailed data. In this paper, the authors improve the video-line tester by using a CCD TV camera, displaying and processing the CCD signal through a computer, and automating the test, with the advantages of objectivity, reliability, speed and reduced focusing error.

  4. The design and realization of a three-dimensional video system by means of a CCD array

    NASA Astrophysics Data System (ADS)

    Boizard, J. L.

    1985-12-01

    Design features, principles and initial tests of a prototype three-dimensional robot vision system based on a laser source and a CCD detector array are described. The use of a laser as a coherent illumination source permits determination of relief with a single emitter, since the location of the source is known with low distortion. The CCD detector array provides an acceptable signal-to-noise ratio and, when wired to an appropriate signal-processing system, delivers real-time data on the return signals, i.e., the characteristic points of the object being scanned. Signal processing involves integration of 29 kB of data per 100 samples, with sampling at a rate of 5 MHz (the CCDs) yielding an image every 12 msec. Algorithms for filtering errors from the data stream are discussed.
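    Laser-plus-sensor range systems of this kind usually rely on the classic triangulation relation, in which a spot imaged at lateral offset d on the sensor, with lens focal length f and laser-to-lens baseline b, lies at range z = f·b/d. This is the textbook relation, not necessarily the exact geometry of Boizard's prototype:

```python
def depth_from_offset(f_mm, baseline_mm, offset_mm):
    """Classic laser-triangulation range estimate z = f * b / d.
    All quantities in consistent units (here millimetres); the returned
    range is in the same units. Assumes the simple textbook geometry."""
    if offset_mm == 0:
        raise ValueError("zero offset: spot at infinity")
    return f_mm * baseline_mm / offset_mm
```

Note the inverse relation: halving the measured offset doubles the estimated range, so range resolution degrades quadratically with distance.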

  5. Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera

    NASA Astrophysics Data System (ADS)

    Hahne, Christopher; Aggoun, Amar

    2014-03-01

    A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout synthesizes refocusing slices directly from micro images, omitting the commonly used sub-aperture extraction step. To this end, intellectual property cores containing switch-controlled Finite Impulse Response (FIR) filters are developed and deployed on the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. To make the hardware design economical, the FIR filters combine stored products with upsampling and interpolation techniques in order to achieve an ideal balance between image resolution, delay time, power consumption and logic-gate demand. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps, conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.
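    The underlying operation that such FIR hardware implements in fixed point is the classic shift-and-sum refocusing principle; the sketch below uses integer shifts over a 1-D row of sub-views as a simplifying assumption, not the paper's micro-image pipeline:

```python
import numpy as np

def refocus_shift_sum(views, shift_px):
    """Refocus by shifting each sub-view in proportion to its position
    along one axis of the aperture and averaging. `views` is a list of
    equally sized 2-D arrays; `shift_px` is the integer per-view shift
    selecting the synthetic focal plane. np.roll keeps the sketch
    simple, where real pipelines interpolate sub-pixel shifts."""
    stack = [np.roll(v, i * shift_px, axis=1) for i, v in enumerate(views)]
    return np.mean(stack, axis=0)
```

Objects whose parallax matches the chosen shift are realigned and sum coherently (sharp), while everything else is averaged across misaligned copies (blurred).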

  6. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    SciTech Connect

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs.

  7. CCD TV focal plane guider development and comparison to SIRTF applications

    NASA Technical Reports Server (NTRS)

    Rank, David M.

    1989-01-01

    It is expected that the SIRTF payload will use a CCD TV focal plane fine guidance sensor to provide acquisition of sources and tracking stability of the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory, designed to characterize present CCD autoguiding technology and relate it to SIRTF applications, are presented. Two different designs of CCD camera were constructed using virtual-phase and buried-channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon and Mt. Hamilton telescopes. A video image processing system was also constructed in order to characterize the performance of the autoguider and CCD cameras.

  8. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, the most important of which are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that feature-detection performance is above 92% and event-detection performance about 90%.

  9. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to start or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the

  10. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

    Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled- device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

  11. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    NASA Astrophysics Data System (ADS)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications; thus, the whole image is monitored in this approach. To control the camera settings, we defined a parameter function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested in a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. 
During the daytime of seven days, the algorithm was running alternatively together
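    The parameter function Ef described above depends linearly on the shutter time and electronic gain and inversely on the squared aperture diameter; a minimal sketch, with the proportionality constant and units as assumptions:

```python
def exposure_parameter(shutter_s, gain, aperture_mm, k=1.0):
    """Ef as characterized in the abstract: linear in shutter time and
    electronic gain, inversely proportional to the square of the lens
    aperture diameter. The constant k and the unit choices are
    assumptions for illustration, not values from the paper."""
    return k * shutter_s * gain / aperture_mm ** 2
```

The controller would solve this relation in reverse: given the target Ef implied by the histogram indicators, pick the shutter, gain and aperture combination that realizes it. Note that halving the aperture diameter quadruples Ef.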

  12. HiPERCAM: a high-speed quintuple-beam CCD camera for the study of rapid variability in the universe

    NASA Astrophysics Data System (ADS)

    Dhillon, Vikram S.; Marsh, Thomas R.; Bezawada, Naidu; Black, Martin; Dixon, Simon; Gamble, Trevor; Henry, David; Kerry, Paul; Littlefair, Stuart; Lunney, David W.; Morris, Timothy; Osborn, James; Wilson, Richard W.

    2016-08-01

    HiPERCAM is a high-speed camera for the study of rapid variability in the Universe. The project is funded by a €3.5M European Research Council Advanced Grant. HiPERCAM builds on the success of our previous instrument, ULTRACAM, with very significant improvements in performance thanks to the use of the latest technologies. HiPERCAM will use 4 dichroic beamsplitters to image simultaneously in 5 optical channels covering the u'g'r'i'z' bands. Frame rates of over 1000 per second will be achievable using an ESO CCD controller (NGC), with every frame GPS timestamped. The detectors are custom-made, frame-transfer CCDs from e2v, with 4 low-noise (2.5 e-) outputs, mounted in small thermoelectrically-cooled heads operated at 180 K, resulting in virtually no dark current. The two reddest CCDs will be deep-depletion devices with anti-etaloning, providing high quantum efficiencies across the red part of the spectrum with no fringing. The instrument will also incorporate scintillation-noise correction via the conjugate-plane photometry technique. The opto-mechanical chassis will make use of additive manufacturing techniques in metal to make a light-weight, rigid and temperature-invariant structure. First light is expected on the 4.2m William Herschel Telescope on La Palma in 2017 (on which the field of view will be 10' with a 0.3"/pixel scale), with subsequent use planned on the 10.4m Gran Telescopio Canarias on La Palma (on which the field of view will be 4' with a 0.11"/pixel scale) and the 3.5m New Technology Telescope in Chile.

  13. Design and Characterization of the CCD Detector Assemblies for ICON FUV

    NASA Astrophysics Data System (ADS)

    Champagne, J.; Syrstad, E. A.; Siegmund, O.; Darling, N.; Jelinsky, S. R.; Curtis, T.

    2015-12-01

    The Far Ultraviolet Imaging Spectrograph (FUV) on the upcoming Ionospheric Connection Explorer (ICON) mission uses dual image-intensified CCD camera systems, capable of detecting individual UV photons from both spectrometer channels (135.6 and 155 nm). Incident photons are converted to visible light using a sealed tube UV converter. The converter output is coupled to the CCD active area using a bonded fiber optic taper. The CCD (Teledyne DALSA FTT1010M) is a 1024x1024 frame transfer architecture. The camera readout electronics provide video imagery to the spacecraft over a 21 bit serialized LVDS interface, nominally at 10 frames per second and in 512x512 format (2x2 pixel binning). The CCD and primary electronics assembly reside in separate thermal zones, to minimize dark current without active cooling.Engineering and flight camera systems have been assembled, integrated, and tested under both ambient pressure and thermal vacuum environments. The CCD cameras have been fully characterized with both visible light (prior to integration with the UV converter) and UV photons (following system integration). Measured parameters include camera dark current, dark signal non-uniformity, read noise, linearity, gain, pulse height distribution, dynamic range, charge transfer efficiency, resolution, relative efficiency, quantum efficiency, and full well capacity. UV characterization of the camera systems over a range of microchannel plate (MCP) voltages during thermal vacuum testing demonstrates that camera performance will meet the critical on-orbit FUV dynamic range requirements. Flight camera integration with the FUV instrument and sensor calibration is planned for Fall 2015. Camera design and full performance data for the engineering and flight model cameras will be presented.

  14. Determination of visible coordinates of the low-orbit space objects and their photometry by the CCD camera with the analogue output. Initial image processing

    NASA Astrophysics Data System (ADS)

    Shakun, L. S.; Koshkin, N. I.

    2014-06-01

    The number of artificial space objects in low Earth orbit has been continuously increasing, raising the requirements for the accuracy of measurement of their coordinates and for the precision of the prediction of their motion. The accuracy of the prediction can be improved if the actual current orientation of the non-spherical satellite is taken into account; in so doing, it becomes possible to directly determine the atmospheric density along the orbit. The solution is to regularly conduct photometric observations of a large number of satellites and monitor the parameters of their rotation around the centre of mass. To do that, it is necessary to obtain and promptly process large video arrays containing images of a satellite against the background stars. In the present paper, the method for the simultaneous measurement of coordinates and brightness of low-Earth-orbit space objects against the background stars, when they are tracked by the telescope KT-50 with a mirror diameter of 50 cm and the video camera WAT-209H2, is considered. The problem of determining the moments of exposure of images is examined in detail. The accuracy of measuring both the apparent coordinates of stars and their photometry is estimated using observations of an open star cluster. In the presented observations, the standard deviation of a single position measurement is 1σ, the accuracy of determination of the moment of exposure of images is better than 0.0001 s, and the estimated standard deviation of a single brightness measurement is 0.1 mag. Some examples of the results of satellite observations are also presented in the paper.

  15. Nyquist sampling theorem: understanding the illusion of a spinning wheel captured with a video camera

    NASA Astrophysics Data System (ADS)

    Lévesque, Luc

    2014-11-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the sampling time is chosen judiciously, then it is possible to accurately determine the frequency of a signal varying periodically with time. This paper is of educational value as it presents the principles of sampling during data acquisition. The concept of the Nyquist sampling theorem is usually introduced very briefly in the literature, with few practical examples to convey its importance during data acquisition. Through a series of carefully chosen examples, we attempt to present data sampling from the elementary conceptual idea and try to lead the reader naturally to the Nyquist sampling theorem, so that we may more clearly understand why a signal can be interpreted incorrectly during a data acquisition procedure in the case of undersampling.
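    The spinning-wheel illusion follows directly from frequency folding: any rotation rate outside the camera's Nyquist band aliases back into it. A minimal sketch of this folding (the function name and scenarios are illustrative, not from the paper):

```python
import numpy as np

def apparent_rotation_hz(true_hz, fps):
    """Apparent rotation rate (Hz) of a wheel spinning at true_hz when
    sampled at fps frames per second. Undersampling folds the true rate
    into the Nyquist band [-fps/2, fps/2]; a negative result means the
    wheel appears to spin backwards."""
    return ((true_hz + fps / 2) % fps) - fps / 2

# A wheel at 28 Hz filmed at 24 fps appears to turn forwards at 4 Hz,
# while one at 20 Hz appears to turn backwards at 4 Hz.
print(apparent_rotation_hz(28, 24), apparent_rotation_hz(20, 24))
```

Rates below the Nyquist frequency (fps/2) pass through unchanged, which is the sampling condition the paper leads the reader towards.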

  16. High speed cooled CCD experiments

    SciTech Connect

    Pena, C.R.; Albright, K.L.; Yates, G.J.

    1998-12-31

    Experiments were conducted using cooled and intensified CCD cameras. Two different cameras were tested identically using various optical test stimuli. Camera gain and dynamic range were measured by varying microchannel plate (MCP) voltages and controlling light flux using neutral density (ND) filters to yield analog digitized units (ADU), which are digitized values of the CCD pixel's analog charge. A xenon strobe (5 µs FWHM, blue light, 430 nm) and a doubled Nd:YAG laser (10 ns FWHM, green light, 532 nm) were both used as pulsed illumination sources for the cameras. Images were captured on a desktop PC system using commercial software. Camera gain and integration time values were adjusted using camera software. Mean values of camera volts versus input flux were also obtained by performing line scans through regions of interest. Experiments and results will be discussed.

  17. Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment

    NASA Astrophysics Data System (ADS)

    Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

    2014-05-01

    Long-term soil erosion studies imply substantial effort, particularly when continuous measurements must be maintained. High costs are associated with maintaining field equipment and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, reducing their usefulness in many cases. In this work, a system of three video cameras, a recorder and a transmission modem (3G technology) has been set up at a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of a 6.4 ha olive orchard catchment. Rainfall is measured with an automatic rain gauge that records intensity at one-minute intervals. Discharge is measured by a critical-depth flume, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined level, the automatic sampler turns on and fills a bottle at intervals set by a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video camera system is to improve the quality of the dataset by i) visual analysis of the flow conditions in the flume; ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and the throat of the flume. To cross-check the values from the ultrasonic sensor, a third camera records the flow level against a measuring tape. The system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric intensity level; thus the video cameras record an event only when there is sufficient flow. This simplifies post-processing and reduces the cost of downloading recordings. The preliminary comparison analysis will be presented, as well as the main improvements to the sampling program.

  18. Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone.

    PubMed

    Kwon, Sungjun; Kim, Hyunseok; Park, Kwang Suk

    2012-01-01

    As smartphones become increasingly popular and their performance improves rapidly, they show potential as accurate, low-cost physiological measurement solutions usable beyond the clinical environment. Because the cardiac pulse causes subtle color changes in the skin, a pulsatile signal, describable as a photoplethysmographic (PPG) signal, can be measured by recording facial video with a digital camera. In this paper, we explore the potential for reliable remote heart rate measurement from facial video recorded with a smartphone camera. First, facial video was recorded using the front-facing camera of a smartphone. We detected the facial region in each frame using face detection, and extracted a raw trace signal from the green channel of the image. To extract a more accurate cardiac pulse signal, we applied independent component analysis (ICA) to the raw trace signal. The heart rate was extracted by frequency analysis of both the raw trace and the ICA-separated signal. The accuracy of the estimated heart rate was evaluated by comparison with the heart rate from a reference electrocardiogram (ECG) signal. Finally, based on this study, we developed FaceBEAT, an iPhone application for remote heart rate measurement.
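    The frequency-analysis step of such a pipeline is straightforward to sketch. The following is a minimal illustration of estimating heart rate from a mean green-channel trace (face detection and the ICA stage are omitted, and the band limits are assumptions, not values from the paper):

```python
import numpy as np

def estimate_heart_rate(green_trace, fps):
    """Estimate heart rate (beats/min) from a mean-green-channel trace
    by locating the dominant spectral peak in a physiological band."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                         # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    # Restrict the search to 0.75-4 Hz (45-240 bpm)
    band = (freqs >= 0.75) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 72-bpm (1.2 Hz) pulse sampled at 30 fps for 20 s:
fps = 30.0
t = np.arange(0, 20, 1 / fps)
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.1, t.size)
```

With a 20 s window the spectral resolution is 0.05 Hz, i.e. 3 bpm, which is why longer recordings give finer heart-rate estimates.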

  19. Robotic versus human camera holding in video-assisted thoracic sympathectomy: a single blind randomized trial of efficacy and safety.

    PubMed

    Martins Rua, Joaquim Fernando; Jatene, Fabio Biscegli; de Campos, José Ribas Milanez; Monteiro, Rosangela; Tedde, Miguel Lia; Samano, Marcos Naoyuki; Bernardo, Wanderley M; Das-Neves-Pereira, João Carlos

    2009-02-01

    Our objective is to compare surgical safety and efficacy between robotic and human camera control in video-assisted thoracic sympathectomy. A randomized controlled trial was performed. The surgical operation was VATS sympathectomy for hyperhidrosis. The trial compared a voice-controlled robot holding the endoscopic camera (robotic group, Ro) with a human camera-holding assistant (human group, Hu). Each group included 19 patients. Sympathectomy was achieved by electrodesiccation of the third ganglion. Operations were filmed and the images stored. Two observers quantified the number of involuntary and inappropriate movements and how many times the camera was cleaned. Safety criteria were surgical accidents, pain and aesthetic results; efficacy criteria were surgical and camera use duration, anhydrosis, length of hospitalization, compensatory hyperhidrosis and patient satisfaction. There was no difference between groups regarding surgical accidents, number of involuntary movements, pain, aesthetic results, general satisfaction, number of lens cleanings, anhydrosis, length of hospitalization, and compensatory hyperhidrosis. The number of contacts of the laparoscopic lens with mediastinal structures was lower in the Ro group (P<0.001), but total and surgical durations were longer in this group (P<0.001). Camera holding by a robotic arm in VATS sympathectomy for hyperhidrosis is as safe as, but less efficient than, a human camera-holding assistant.

  20. High-precision portable instrument to measure position angles of a video camera for bird flight research

    NASA Astrophysics Data System (ADS)

    Delinger, W. G.; Willis, W. R.

    1988-05-01

    A battery-powered portable instrument for research on the aerodynamics of bird flight has been built to automatically measure and record the horizontal and vertical angles at which a video camera is pointed as an operator videotapes a soaring bird. Each angle was measured to a precision of about 20 arc seconds or better. Two complete systems were constructed, and a triangulation method was used so the same bird in flight could be videotaped by two cameras at different locations to establish the radius vectors from an origin to the bird. The angle information was generated by rotary transducers attached to the camera mounts, and the angle values along with timing data were stored in the semiconductor memory of a single-board computer. The equipment has been successfully tested in the field and promises to have a wider application where a portable instrument is required to measure angles to high precision.
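    The two-camera triangulation the abstract describes reduces to finding where two viewing rays (each defined by a camera position plus azimuth/elevation angles) come closest. A generic sketch of that geometry, not the instrument's actual software:

```python
import numpy as np

def ray_direction(az_deg, el_deg):
    """Unit view vector from azimuth (from +x toward +y) and elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def triangulate(p1, az1, el1, p2, az2, el2):
    """Midpoint of the shortest segment between the two viewing rays,
    i.e. the best estimate of the bird's position when the rays do not
    intersect exactly due to angle-measurement error."""
    d1, d2 = ray_direction(az1, el1), ray_direction(az2, el2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom   # parameter along ray 1
    t2 = (a * e - b * d) / denom   # parameter along ray 2
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With the quoted 20-arcsecond angular precision, the position error grows roughly linearly with range, which is why the camera baseline matters in such a setup.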

  1. Addition of a video camera system improves the ease of Airtraq(®) tracheal intubation during chest compression.

    PubMed

    Kohama, Hanako; Komasawa, Nobuyasu; Ueki, Ryusuke; Itani, Motoi; Nishi, Shin-ichi; Kaminoh, Yoshiroh

    2012-04-01

    Recent resuscitation guidelines for cardiopulmonary resuscitation emphasize that rescuers should perform tracheal intubation with minimal interruption of chest compressions. We evaluated the use of video guidance to facilitate tracheal intubation with the Airtraq (ATQ) laryngoscope during chest compression. Eighteen novice physicians in our anesthesia department performed tracheal intubation on a manikin using the ATQ with a video camera system (ATQ-V) or with no video guidance (ATQ-N) during chest compression. All participants were able to intubate the manikin using the ATQ-N without chest compression, but five failed during chest compression (P < 0.05). In contrast, all participants successfully secured the airway with the ATQ-V, with or without chest compression. Concurrent chest compression increased the time required for intubation with the ATQ-N (without chest compression 14.8 ± 4.5 s; with chest compression, 28.2 ± 10.6 s; P < 0.05), but not with the ATQ-V (without chest compression, 15.9 ± 5.8 s; with chest compression, 17.3 ± 5.3 s; P > 0.05). The ATQ video camera system improves the ease of tracheal intubation during chest compressions.

  2. High-speed video capture by a single flutter shutter camera using three-dimensional hyperbolic wavelets

    NASA Astrophysics Data System (ADS)

    Huang, Kuihua; Zhang, Jun; Hou, Jinxin

    2014-09-01

    Given how readily a flutter shutter can be realised in modern sensors, this paper further exploits the possibility of recovering high-speed video (HSV) from a single flutter shutter camera. Taking into account the different degrees of smoothness along the spatial and temporal dimensions of HSV, this paper proposes a three-dimensional hyperbolic wavelet basis, built via the Kronecker product, to jointly model the spatial and temporal redundancy of HSV. In addition, we incorporate the total variation of temporal correlations in HSV as prior knowledge to further enhance reconstruction quality. We recover the underlying HSV frames from the observed low-speed coded video by solving a convex minimization problem. Experimental results on simulated and real-world videos both demonstrate the validity of the proposed method.

  3. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    SciTech Connect

    Fonseca, Gabriel P.; Rubo, Rodrigo A.; Sales, Camila P. de; Verhaegen, Frank

    2015-01-15

    Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses within 1.4% of commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for short interdwell distances. Dose variations due to the transit dose component are much lower than prescribed brachytherapy treatment doses, although the transit dose component should be evaluated individually for clinical cases.
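    A speed profile of the kind measured here follows from frame-by-frame source positions digitised off the video. A minimal sketch (frame rate and positions are hypothetical, not the study's data):

```python
import numpy as np

def speed_profile_cm_s(positions_cm, fps):
    """Per-frame-interval source speed (cm/s) from positions read off
    successive high-speed video frames."""
    p = np.asarray(positions_cm, dtype=float)
    return np.abs(np.diff(p)) * fps

def average_speed_cm_s(positions_cm, fps):
    """Average speed over the recorded traversal."""
    return speed_profile_cm_s(positions_cm, fps).mean()
```

Dwell positions and the nonprogrammed stops reported in the abstract would show up as runs of near-zero entries in the profile.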

  4. Lights... camera... action! a guide for creating a DVD/video.

    PubMed

    Fleming, Susan E; Reynolds, Jerry; Wallace, Barb

    2009-01-01

    The DVD/video format offers an educational program that is convenient, consistent, and interactive for the viewer. Faculty members are essential and instrumental in creating storyboards from a script, which is an initial step in the production of DVD/videos. The authors discuss how faculty can participate in the process of developing an educational DVD/video program.

  5. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, Bojan T.; Yates, George J.

    1992-01-01

    An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is being integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing radiation content, and to capture images of events of very short duration and occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites (36) and (37).

  6. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, B.T.; Yates, G.J.

    1992-06-09

    An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal read out and includes a first dump cycle period for transferring radiation generated charge into the horizontal register while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration and occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites. 3 figs.

  7. Method for eliminating artifacts in CCD imagers

    NASA Astrophysics Data System (ADS)

    Turko, B. T.; Yates, G. J.

    1990-06-01

    An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is presented. The method comprises the step of initializing the camera prior to normal readout. The method includes a first dump cycle period for transferring radiation generated charge into the horizontal register. This occurs while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, rapidly dumps unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration and occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites.

  8. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
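    The structure-recovery step mentioned in the Methods can be illustrated with the standard linear (DLT) two-view triangulation used once relative camera poses are known. This is a generic textbook formulation, not the authors' implementation:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its normalized image coordinates x1, x2
    in two views with known 3x4 projection matrices P1, P2. The point is
    the null vector of the stacked cross-product constraints."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # homogeneous solution of A X = 0
    return X[:3] / X[3]        # dehomogenize
```

In the monocular-endoscope setting, the second "view" comes from the camera's own motion between frames, so errors in the estimated camera path propagate directly into the triangulated structure, consistent with the accuracy issue the abstract reports.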

  9. Advanced Video Data-Acquisition System For Flight Research

    NASA Technical Reports Server (NTRS)

    Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

    1996-01-01

    Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

  10. Use of a CCD-based area detection system of a fibre diffractometer

    SciTech Connect

    Hanna, S.; Windle, A.H.

    1995-12-31

    We describe a new X-ray fibre diffractometer, consisting of a commercial X-ray sensitive video camera coupled to a conventional 3-circle goniometer in place of a more traditional single-point detector. The active element of the video camera is a charge-coupled device (CCD). Diffraction images, obtained at various goniometer settings, are transformed into reciprocal space, and combined to give a complete section through the origin and parallel to the symmetry axis of cylindrically averaged reciprocal space. A greater density of measurements is needed in the vicinity of the reciprocal fibre axis in order to avoid information loss due to the curvature of the Ewald sphere. The pros and cons of using CCDs as X-ray detectors are discussed and sample results from polymer fibres are shown. 17 refs., 5 figs.

  11. The Ortega Telescope Andor CCD

    NASA Astrophysics Data System (ADS)

    Tucker, M.; Batcheldor, D.

    2012-07-01

    We present a preliminary instrument report for an Andor iKon-L 936 charge-coupled device (CCD) being operated at Florida Tech's 0.8 m Ortega Telescope. This camera will replace the current Finger Lakes Instrumentation (FLI) Proline CCD. Details of the custom mount produced for this camera are presented, as is a quantitative and qualitative comparison of the new and old cameras. We find that the Andor camera has 50 times less noise than the FLI, has no significant dark current over 30 seconds, and has a smooth, regular flat field. The Andor camera will provide significantly better sensitivity for direct imaging programs and, once it has been satisfactorily tested on-sky, will become the standard imaging device on the Ortega Telescope.
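    A noise comparison like this is commonly made with the bias-pair method: differencing two bias frames cancels fixed-pattern structure, leaving sqrt(2) times the read noise. A minimal sketch with simulated frames (not tied to either camera's actual data):

```python
import numpy as np

def read_noise_adu(bias1, bias2):
    """Read noise in ADU from two bias frames: the standard deviation
    of their difference divided by sqrt(2). Fixed-pattern offsets common
    to both frames cancel in the subtraction."""
    diff = np.asarray(bias1, dtype=float) - np.asarray(bias2, dtype=float)
    return diff.std() / np.sqrt(2)
```

Applying the same estimator to both cameras' bias frames gives the ratio quoted in the abstract on a like-for-like basis.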

  12. CCD imaging systems for DEIMOS

    NASA Astrophysics Data System (ADS)

    Wright, Christopher A.; Kibrick, Robert I.; Alcott, Barry; Gilmore, David K.; Pfister, Terry; Cowley, David J.

    2003-03-01

    The DEep Imaging Multi-Object Spectrograph (DEIMOS) images with an 8K x 8K science mosaic composed of eight 2K x 4K MIT/Lincoln Lab (MIT/LL) CCDs. It also incorporates two 1200 x 600 Orbit Semiconductor CCDs for active, closed-loop flexure compensation. The science mosaic CCD controller system reads out all eight science CCDs in 40 seconds while maintaining the low noise floor of the MIT/Lincoln Lab CCDs. The flexure compensation (FC) CCD controller reads out the FC CCDs several times per minute during science mosaic exposures. The science mosaic CCD controller and the FC CCD controller are located on the electronics ring of DEIMOS. Both the MIT/Lincoln Lab CCDs and the Orbit flexure compensation CCDs, along with their associated cabling and printed circuit boards, are housed together in the same detector vessel, approximately 10 feet away from the electronics ring. Each CCD controller has a modular hardware design and is based on the San Diego State University (SDSU) Generation 2 (SDSU-2) CCD controller. Provisions have been made to the SDSU-2 video board to accommodate external CCD preamplifiers that are located at the detector vessel. Additional circuitry has been incorporated in the CCD controllers to allow the readback of all clocks and bias voltages for up to eight CCDs, to allow up to 10 temperature monitor and control points of the mosaic, and to allow full-time monitoring of power supplies and proper power supply sequencing. Software control features of the CCD controllers are: software selection between multiple mosaic readout modes, readout speeds, selectable gains, ramped parallel clocks to eliminate spurious charge on the CCDs, constant temperature monitoring and control of each CCD within the mosaic, proper sequencing of the bias voltages of the CCD output MOSFETs, and anti-blooming operation of the science mosaic. We cover both the hardware and software highlights of these two CCD controller systems as well as their respective performance.

  13. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Secondly, several key designs of the imaging system are introduced, such as the design of the focal plane module, video signal processing, the controller design of the imaging system, and synchronous photography of the forward, nadir and backward cameras and of the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography of the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  14. Hand-gesture extraction and recognition from the video sequence acquired by a dynamic camera using condensation algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Ohya, Jun

    2009-01-01

    To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from the video sequence acquired by a dynamic camera could be useful for human-to-robot interface systems. Most conventional hand gesture technologies deal only with images from a static camera. This paper proposes a very simple and stable method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC System), which is obtained from the located human face and both hands. Then, we apply the Condensation algorithm to the extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed method by conducting experiments on 35 kinds of sign-language-based hand gestures.
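    The Condensation algorithm is a particle filter: resample by weight, diffuse with a dynamics model, then reweight by the observation likelihood. A minimal one-dimensional sketch of a single iteration (the dynamics and likelihood are illustrative stand-ins, not the paper's gesture model):

```python
import numpy as np

def condensation_step(particles, weights, observe, motion_std, rng):
    """One Condensation iteration for a scalar state.

    particles  : array of state hypotheses
    weights    : normalized particle weights
    observe    : function mapping states to observation likelihoods
    motion_std : std of the Gaussian random-walk dynamics (assumed model)
    """
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)            # factored sampling
    particles = particles[idx] + rng.normal(0, motion_std, n)  # predict
    weights = observe(particles) + 1e-12              # measure
    return particles, weights / weights.sum()
```

Run repeatedly, the particle cloud concentrates around the state best supported by the observations, which is how the tracked hand trajectory is maintained through clutter.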

  15. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  16. Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments

    ERIC Educational Resources Information Center

    Schultz, Patrick L.; Quinn, Andrew S.

    2014-01-01

    In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

  17. In-situ measurements of alloy oxidation/corrosion/erosion using a video camera and proximity sensor with microcomputer control

    NASA Technical Reports Server (NTRS)

    Deadmore, D. L.

    1984-01-01

    Two noncontacting and nondestructive, remotely controlled methods of measuring the progress of oxidation/corrosion/erosion of metal alloys, exposed to flame test conditions, are described. The external diameter of a sample under test in a flame was measured by a video camera width measurement system. An eddy current proximity probe system, for measurements outside of the flame, was also developed and tested. The two techniques were applied to the measurement of the oxidation of 304 stainless steel at 910 C using a Mach 0.3 flame. The eddy current probe system yielded a recession rate of 0.41 mils diameter loss per hour and the video system gave 0.27.
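    Extracting a recession rate such as the 0.41 mils/hour quoted above from either probe's time series amounts to a straight-line fit. A minimal sketch with hypothetical measurements (not the study's data):

```python
import numpy as np

def recession_rate_mils_per_hr(hours, diameter_mils):
    """Diameter-loss rate (mils/hour) from a least-squares line fit of
    measured sample diameter against exposure time. The fitted slope is
    negated so that material loss comes out positive."""
    slope, _intercept = np.polyfit(hours, diameter_mils, 1)
    return -slope
```

Fitting all measurements rather than differencing endpoints suppresses the scatter of individual width readings.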

  18. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    PubMed

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology we studied the decision making of 6 experienced orienteers who carried a head-mounted light-weight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed which were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  19. Activity profiles and hook-tool use of New Caledonian crows recorded by bird-borne video cameras.

    PubMed

    Troscianko, Jolyon; Rutz, Christian

    2015-12-01

    New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, largely owing to the difficulty of observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided the first footage of crows manufacturing, and using, one of their most complex tool types, hooked stick tools, under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an 'expanded' foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging.

  20. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; however, the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation and a test of this method.
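
The adjustment described above can be sketched numerically. The following is a minimal Python sketch of a two-flat/two-dark variant of the mean-variance (photon-transfer) gain estimate, in which the dark-frame difference variance is subtracted to remove read noise and dark-current shot noise; the frame sizes, noise figures, and the `estimate_gain` helper are illustrative assumptions, not the article's exact equation:

```python
import numpy as np

def estimate_gain(flat_a, flat_b, dark_a, dark_b):
    """Estimate camera gain (e-/ADU) from two flat-field and two dark frames.

    Differencing two matched frames cancels fixed-pattern noise; subtracting
    the dark-frame difference variance removes read noise and dark-current
    shot noise, leaving only photon shot noise in the variance term.
    """
    signal = (flat_a.mean() + flat_b.mean()) / 2 - (dark_a.mean() + dark_b.mean()) / 2
    shot_var = (np.var(flat_a - flat_b) - np.var(dark_a - dark_b)) / 2
    return signal / shot_var  # Poisson statistics: mean/variance in ADU -> e-/ADU

# Synthetic room-temperature sensor: gain 2.0 e-/ADU, 10,000 e- flat signal,
# 500 e- dark current per exposure, 5 ADU read noise (all illustrative).
rng = np.random.default_rng(0)
shape = (512, 512)

def frame(photons):
    electrons = rng.poisson(photons + 500.0, shape)     # signal + dark current
    return electrons / 2.0 + rng.normal(0, 5.0, shape)  # to ADU, add read noise

flat_a, flat_b = frame(10000.0), frame(10000.0)
dark_a, dark_b = frame(0.0), frame(0.0)
print(estimate_gain(flat_a, flat_b, dark_a, dark_b))  # ~2.0 e-/ADU
```

On this synthetic data the estimator recovers the true gain to within about a percent; without the dark-frame terms it would be biased by the dark-current shot noise.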

  1. Lights, Camera: Learning! Findings from studies of video in formal and informal science education

    NASA Astrophysics Data System (ADS)

    Borland, J.

    2013-12-01

    As part of the panel, media researcher, Jennifer Borland, will highlight findings from a variety of studies of videos across the spectrum of formal to informal learning, including schools, museums, and in viewers' homes. In her presentation, Borland will assert that the viewing context matters a great deal, but there are some general take-aways that can be extrapolated to the use of educational video in a variety of settings. Borland has served as an evaluator on several video-related projects funded by NASA and the National Science Foundation including: Data Visualization videos and Space Shows developed by the American Museum of Natural History, DragonflyTV, Earth the Operators Manual, The Music Instinct and Time Team America.

  2. Spatial and temporal scales of shoreline morphodynamics derived from video camera observations for the island of Sylt, German Wadden Sea

    NASA Astrophysics Data System (ADS)

    Blossier, Brice; Bryan, Karin R.; Daly, Christopher J.; Winter, Christian

    2016-08-01

    Spatial and temporal scales of beach morphodynamics were assessed for the island of Sylt, German Wadden Sea, based on continuous video camera monitoring data from 2011 to 2014 along a 1.3 km stretch of sandy beach. They served to quantify, at this location, the amount of shoreline variability covered by beach monitoring schemes, depending on the time interval and alongshore resolution of the surveys. Correlation methods, used to quantify the alongshore spatial scales of shoreline undulations, were combined with semi-empirical modelling and spectral analyses of shoreline temporal fluctuations. The data demonstrate that an alongshore resolution of 150 m and a monthly survey time interval capture 70% of the kilometre-scale shoreline variability over the 2011-2014 study period. An alongshore spacing of 10 m and a survey time interval of 5 days would be required to monitor 95% variance of the shoreline temporal fluctuations with steps of 5% changes in variance over space. Although monitoring strategies such as land or airborne surveying are reliable methods of data collection, video camera deployment remains the cheapest technique providing the high spatiotemporal resolution required to monitor subkilometre-scale morphodynamic processes involving, for example, small- to middle-sized beach nourishment.

  4. Lights, camera, action…critique? Submit videos to AGU communications workshop

    NASA Astrophysics Data System (ADS)

    Viñas, Maria-José

    2011-08-01

    What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

  5. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor Active Pixel Sensor (CMOS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  6. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    SciTech Connect

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
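
The coded-aperture scheme modulates each sub-frame with a different known binary mask and integrates the results into one detector readout; the statistical CS inversion then separates the sub-frames using sparsity priors. A minimal numpy sketch of just the forward coding step (mask and frame sizes and the moving-square scene are illustrative assumptions; the inversion itself is beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
T, H, W = 8, 64, 64                      # 8 sub-frames coded into one readout

# A moving bright square stands in for the dynamic process being imaged.
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 20:30, 4 + 6 * t : 14 + 6 * t] = 1.0

# One independent random binary mask per sub-frame (the coded aperture).
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Forward model: the detector integrates all masked sub-frames into a
# single camera frame during one exposure.
snapshot = (masks * frames).sum(axis=0)

# The effective frame-rate gain is T: one readout now encodes T time steps,
# to be unmixed offline by the CS reconstruction.
print(snapshot.shape)  # (64, 64)
```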

  7. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  8. A new method to calculate the camera focusing area and player position on playfield in soccer video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Huang, Qingming; Ye, Qixiang; Gao, Wen

    2005-07-01

    Sports video enrichment is attracting many researchers. People want to appreciate some highlight segments as cartoons. In order to automatically generate these cartoon videos, we have to estimate the players' and ball's 3D positions. In this paper, we propose an algorithm to cope with the former problem, i.e. to compute players' positions on the court. For images with sufficient corresponding points, the algorithm uses these points to calibrate the mapping between the image and the playfield plane (called a homography). For images without enough corresponding points, we use global motion estimation (GME) and an already calibrated image to compute the images' homographies. Thus, the problem boils down to estimating global motion. To enhance the performance of global motion estimation, two strategies are exploited. The first is removing the moving objects based on adaptive GMM playfield detection, which eliminates the influence of non-still objects; the second is using LKT feature-point tracking to determine horizontal and vertical translation, which keeps the optimization process for GME from being trapped in a local minimum. Thus, if some images of a sequence can be calibrated directly from the intersection points of court lines, all images of the sequence can be calibrated through GME. Once we know the homographies between image and playfield, we can compute the camera focusing area and the players' positions in the real world. We have tested our algorithm on real video and the result is encouraging.
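
The image-to-playfield homography above is a 3x3 projective map that four non-collinear point correspondences determine up to scale, e.g. via the direct linear transform (DLT). A minimal numpy sketch, with made-up court coordinates and pixel positions (the correspondences and the `fit_homography`/`project` helpers are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of the homography H mapping src (image) to dst (field)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 3)     # null-space vector = flattened H (up to scale)

def project(H, pt):
    """Apply the homography to a 2-D point (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical correspondences: image corners of the penalty box (pixels)
# against their known field-plane coordinates (metres).
image_pts = [(120.0, 400.0), (560.0, 410.0), (500.0, 250.0), (180.0, 245.0)]
field_pts = [(0.0, 0.0), (40.3, 0.0), (40.3, 16.5), (0.0, 16.5)]

H = fit_homography(image_pts, field_pts)
player_field = project(H, (340.0, 330.0))   # a player's feet in the image
print(player_field)                         # that player's court position (m)
```

With more than four correspondences the same least-squares SVD solve applies, which is how court-line intersections over-determine the calibration in practice.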

  9. Visual surveys can reveal rather different 'pictures' of fish densities: Comparison of trawl and video camera surveys in the Rockall Bank, NE Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    McIntyre, F. D.; Neat, F.; Collie, N.; Stewart, M.; Fernandes, P. G.

    2015-01-01

    Visual surveys allow non-invasive sampling of organisms in the marine environment which is of particular importance in deep-sea habitats that are vulnerable to damage caused by destructive sampling devices such as bottom trawls. To enable visual surveying at depths greater than 200 m we used a deep towed video camera system, to survey large areas around the Rockall Bank in the North East Atlantic. The area of seabed sampled was similar to that sampled by a bottom trawl, enabling samples from the towed video camera system to be compared with trawl sampling to quantitatively assess the numerical density of deep-water fish populations. The two survey methods provided different results for certain fish taxa and comparable results for others. Fish that exhibited a detectable avoidance behaviour to the towed video camera system, such as the Chimaeridae, resulted in mean density estimates that were significantly lower (121 fish/km²) than those determined by trawl sampling (839 fish/km²). On the other hand, skates and rays showed no reaction to the lights in the towed body of the camera system, and mean density estimates of these were an order of magnitude higher (64 fish/km²) than the trawl (5 fish/km²). This is probably because these fish can pass under the footrope of the trawl due to their flat body shape lying close to the seabed but are easily detected by the benign towed video camera system. For other species, such as Molva sp, estimates of mean density were comparable between the two survey methods (towed camera, 62 fish/km²; trawl, 73 fish/km²). The towed video camera system presented here can be used as an alternative benign method for providing indices of abundance for species such as ling in areas closed to trawling, or for those fish that are poorly monitored by trawl surveying in any area, such as the skates and rays.

  10. Evaluation of a 0.9- to 2.2-microns sensitive video camera with a mid-infrared filter (1.45- to 2.0-microns)

    NASA Astrophysics Data System (ADS)

    Everitt, J. H.; Escobar, D. E.; Nixon, P. R.; Blazquez, C. H.; Hussey, M. A.

    The application of 0.9- to 2.2-microns sensitive black and white IR video cameras to remote sensing is examined. Field and laboratory recordings of the upper and lower surface of peperomia leaves, succulent prickly pear, and buffelgrass are evaluated; the reflectance, phytomass, green weight, and water content for the samples were measured. The data reveal that 0.9- to 2.2-microns video cameras are effective tools for laboratory and field research; however, the resolution and image quality of the data are poor compared to visible and near-IR images.

  11. Action, Interaction, and Reaction: The Video Camera and the FL Classroom.

    ERIC Educational Resources Information Center

    Armstrong, Kimberly M.; Yetter-Vassot, Cindy

    Uses of pre-recorded and student-generated videotape recordings in the foreign language (FL) classroom are described and discussed from the perspective of their utility in helping students achieve target language communicative competence. It is suggested that viewing authentic video materials provides an opportunity to observe extralinguistic…

  12. Lights! Camera! Action! Producing Library Instruction Video Tutorials Using Camtasia Studio

    ERIC Educational Resources Information Center

    Charnigo, Laurie

    2009-01-01

    From Web guides to online tutorials, academic librarians are increasingly experimenting with many different technologies in order to meet the needs of today's growing distance education populations. In this article, the author discusses one librarian's experience using Camtasia Studio to create subject specific video tutorials. Benefits, as well…

  13. "Lights, Camera, Reflection": Using Peer Video to Promote Reflective Dialogue among Student Teachers

    ERIC Educational Resources Information Center

    Harford, Judith; MacRuairc, Gerry; McCartan, Dermot

    2010-01-01

    This paper examines the use of peer-videoing in the classroom as a means of promoting reflection among student teachers. Ten pre-service teachers participating in a teacher education programme in a university in the Republic of Ireland and ten pre-service teachers participating in a teacher education programme in a university in the North of…

  14. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  15. A pulsed THz imaging system with a line focus and a balanced 1-D detection scheme with two industrial CCD line-scan cameras.

    PubMed

    Wiegand, Christian; Herrmann, Michael; Bachtler, Sebastian; Klier, Jens; Molter, Daniel; Jonuscheit, Joachim; Beigang, René

    2010-03-15

    We present a pulsed THz imaging system with a line focus intended to speed up measurements. A balanced 1-D detection scheme working with two industrial line-scan cameras is used. The instrument is implemented without the need for an amplified laser system, increasing the industrial applicability. The instrumental characteristics are determined.

  16. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. I: Nucleation and growth model

    PubMed Central

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Tachibana, Masatoshi; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. has been investigated by image processing using a high-speed video camera. In order to express the temporal change in the stalk length resulting from the contraction, a damped spring model and a nucleation and growth model are applied. A double exponential is deduced from a conventional damped spring model, while a stretched exponential is newly proposed from a nucleation and growth model. The stretched exponential function is more suitable for the curve fitting and suggests a more particular contraction mechanism in which the contraction of the stalk begins near the cell body and spreads downwards along the stalk. The index value of the stretched exponential is evaluated in the range from 1 to 2 in accordance with the model in which the contraction proceeds through nucleation and growth in a one-dimensional space. PMID:27857602

  17. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. I: Nucleation and growth model.

    PubMed

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Tachibana, Masatoshi; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. has been investigated by image processing using a high-speed video camera. In order to express the temporal change in the stalk length resulting from the contraction, a damped spring model and a nucleation and growth model are applied. A double exponential is deduced from a conventional damped spring model, while a stretched exponential is newly proposed from a nucleation and growth model. The stretched exponential function is more suitable for the curve fitting and suggests a more particular contraction mechanism in which the contraction of the stalk begins near the cell body and spreads downwards along the stalk. The index value of the stretched exponential is evaluated in the range from 1 to 2 in accordance with the model in which the contraction proceeds through nucleation and growth in a one-dimensional space.
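
The stretched-exponential relaxation described above, L(t) = L_inf + (L0 - L_inf) * exp(-(t/tau)^beta) with beta between 1 and 2, can be linearized twice so the index beta falls out of a straight-line fit. A minimal numpy sketch on clean synthetic data (the parameter values and the double-log fitting route are illustrative assumptions, not the paper's actual analysis):

```python
import numpy as np

# Synthetic contraction trace: stalk length relaxes from L0 to L_inf.
L0, L_inf, tau, beta = 100.0, 40.0, 0.02, 1.5   # illustrative values
t = np.linspace(1e-4, 0.1, 200)                 # seconds
L = L_inf + (L0 - L_inf) * np.exp(-(t / tau) ** beta)

# Linearize: log(-log((L - L_inf)/(L0 - L_inf))) = beta*log(t) - beta*log(tau),
# so a straight-line fit in double-log coordinates yields the index beta.
norm = (L - L_inf) / (L0 - L_inf)
mask = (norm > 1e-12) & (norm < 1 - 1e-12)      # avoid log(0) at the extremes
slope, intercept = np.polyfit(np.log(t[mask]), np.log(-np.log(norm[mask])), 1)

print(round(slope, 3))                  # recovers beta = 1.5
tau_est = np.exp(-intercept / slope)
print(round(tau_est, 4))                # recovers tau = 0.02
```

On noisy experimental data a nonlinear least-squares fit of the original form is more robust, but the linearization makes the role of the index beta explicit.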

  18. Design of a space-qualified zoom lens for the space station mobile servicing system video camera

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Arsenault, Roger H.; Hulan, Dave G.; Morgan, William F.

    1995-10-01

    OCA, under contract to Spar Aerospace, has developed a space-qualified zoom color video camera. The optics are a 9.3:1 f/2 zoom lens under digital servo control, using only two moving groups to accomplish zoom, compensation, and focus over an object distance range from 355 mm to infinity. Accomplishing three functions with two moving groups both improves reliability and allows better aberration correction than conventional zoom lenses using front-element motion to focus for range. The detector is a single chip array with integral color filter array. Important lens features include excellent image quality; performance in a near-earth orbit for 10 years without maintenance; and the development of an algorithm allowing accurate photogrammetric ranging from 355 mm to 10 meters.

  19. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
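
Radial (fish-eye) distortion of the kind corrected in the pre-processor is commonly modelled in normalized image coordinates as r_d = r_u * (1 + k1 * r_u^2); undistortion inverts this per pixel, which a short fixed-point iteration does without a closed-form inverse. A numpy sketch (the single-coefficient model and the value of k1 are illustrative assumptions; real ADAS pipelines calibrate the coefficients and usually include higher-order terms):

```python
import numpy as np

def undistort_radius(r_d, k1, iters=50):
    """Invert r_d = r_u * (1 + k1 * r_u**2) by fixed-point iteration."""
    r_u = r_d.copy()
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u ** 2)   # contraction for moderate k1*r^2
    return r_u

k1 = 0.2                                    # illustrative distortion coefficient
r_true = np.linspace(0.0, 0.8, 9)           # undistorted radii (normalized)
r_dist = r_true * (1.0 + k1 * r_true ** 2)  # what the fish-eye sensor records

r_recovered = undistort_radius(r_dist, k1)
print(np.max(np.abs(r_recovered - r_true)))  # ~0: the round trip closes
```

In an FPGA implementation this inverse is typically precomputed once into a remap lookup table rather than iterated per frame, which is consistent with the real-time constraint described above.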

  20. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, for they are designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of visual information about the monitored targets and events, and risk entropy is introduced to model the requirements that police surveillance tasks place on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.

  1. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob features. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better recognition performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
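
The PCA step projects flattened motion trajectories onto their leading principal components, after which new trajectories can be classified by proximity in that subspace. A self-contained sketch with synthetic two-class trajectories (the gesture shapes, dimensions, and the nearest-neighbour decision rule are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)

def make_gesture(kind, n=30):
    """Synthetic 2-D hand trajectory: 'wave' = sine sweep, 'circle' = loop."""
    t = np.linspace(0, 2 * np.pi, n)
    if kind == "wave":
        xy = np.column_stack([t / (2 * np.pi), np.sin(t)])
    else:
        xy = np.column_stack([np.cos(t), np.sin(t)])
    return (xy + rng.normal(0, 0.05, xy.shape)).ravel()  # flatten to a vector

# Training set: 20 noisy examples per class.
X = np.array([make_gesture(k) for k in ["wave"] * 20 + ["circle"] * 20])
labels = ["wave"] * 20 + ["circle"] * 20

# PCA via SVD of the mean-centred data; keep the top 5 components.
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
components = vt[:5]
Z = (X - mean) @ components.T              # training data in PCA space

def classify(traj):
    z = (traj - mean) @ components.T       # project the query trajectory
    return labels[int(np.argmin(np.linalg.norm(Z - z, axis=1)))]

print(classify(make_gesture("wave")))      # wave
print(classify(make_gesture("circle")))    # circle
```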

  2. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms.

    PubMed

    Huang, Fang; Hartwich, Tobias M P; Rivera-Molina, Felix E; Lin, Yu; Duim, Whitney C; Long, Jane J; Uchil, Pradeep D; Myers, Jordan R; Baird, Michelle A; Mothes, Walther; Davidson, Michael W; Toomre, Derek; Bewersdorf, Joerg

    2013-07-01

    Newly developed scientific complementary metal-oxide semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition, enlarge the field of view and increase the effective quantum efficiency in single-molecule switching nanoscopy. However, sCMOS-intrinsic pixel-dependent readout noise substantially lowers the localization precision and introduces localization artifacts. We present algorithms that overcome these limitations and that provide unbiased, precise localization of single molecules at the theoretical limit. Using these in combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at rates of up to 32 reconstructed images per second in fixed and living cells.

  3. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

    EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks for the CCD are provided by the driving board; the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover, the ADC and the data transfer circuit are also on the control board and are controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design for the video signal processing channel, clock drivers, FPGA and Camera Link interfaces, and the temperature metering and control system. Some test results are presented.

  4. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

    Recent studies demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation which is correlated with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced allowing contactless registration of PTT and PPG using a high-speed camera, resulting in corresponding image-based PTT (iPTT) and image-based PPG (iPPG) generation. The iPTT value can be potentially utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimation of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers in three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high-speed camera at 420 frames per second. iPTT was calculated as a time lag between pulse waves obtained as two iPPGs registered from simultaneous recording of head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p < 0.05). The average inter-person coefficient of correlation between SBP and iPTT was -0.80 ± 0.12. The range of correlations between systolic BP and iPTT was from 0.632 to 0.960 with p < 0.05 for most of the participants. Preliminary data indicated that a high speed camera can be potentially utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to
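
On sampled signals, the transit-time lag between two pulse waveforms like the head and palm iPPGs can be estimated as the argmax of their cross-correlation, converted to milliseconds via the 420 fps frame rate. A numpy sketch with synthetic waveforms (the pulse shape, delays, and the `ippg` helper are illustrative assumptions, not the study's signal processing):

```python
import numpy as np

FPS = 420                                    # camera frame rate from the study
t = np.arange(0, 5, 1 / FPS)                 # 5 s of "video" samples
heart_rate = 1.2                             # 72 bpm, illustrative

def ippg(delay_s):
    """Synthetic image-based PPG: pulse wave arriving delay_s after the heart."""
    return np.cos(2 * np.pi * heart_rate * (t - delay_s)) ** 21  # peaky pulses

head = ippg(0.050)                           # pulse reaches the head first...
palm = ippg(0.150)                           # ...and the palm 100 ms later

# Cross-correlate the zero-mean signals and take the lag at the peak.
corr = np.correlate(palm - palm.mean(), head - head.mean(), mode="full")
lag_frames = np.argmax(corr) - (len(head) - 1)
iptt_ms = 1000 * lag_frames / FPS
print(round(iptt_ms, 1))                     # ~100 ms transit-time difference
```

The frame rate bounds the lag resolution at 1/420 s (about 2.4 ms), which is one practical reason the study needed a high-speed rather than a standard video camera.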

  5. Using a thermistor flowmeter with attached video camera for monitoring sponge excurrent speed and oscular behaviour

    PubMed Central

    Jorgensen, Damien; Webster, Nicole S.; Pineda, Mari-Carmen; Duckworth, Alan

    2016-01-01

    A digital, four-channel thermistor flowmeter integrated with time-lapse cameras was developed as an experimental tool for measuring pumping rates in marine sponges, particularly those with small excurrent openings (oscula). Combining flowmeters with time-lapse imagery yielded valuable insights into the contractile behaviour of oscula in Cliona orientalis. Osculum cross-sectional area (OSA) was positively correlated to measured excurrent speeds (ES), indicating that sponge pumping and osculum contraction are coordinated behaviours. Both OSA and ES were positively correlated to pumping rate (Q). Diel trends in pumping activity and osculum contraction were also observed, with sponges increasing their pumping activity to peak at midday and decreasing pumping and contracting oscula at night. Short-term elevation of the suspended sediment concentration (SSC) within the seawater initially decreased pumping rates by up to 90%, ultimately resulting in closure of the oscula and cessation of pumping. PMID:27994973

  6. A simple, inexpensive video camera setup for the study of avian nest activity

    USGS Publications Warehouse

    Sabine, J.B.; Meyers, J.M.; Schweitzer, Sara H.

    2005-01-01

    Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

  7. Analysis of Small-Scale Convective Dynamics in a Crown Fire Using Infrared Video Camera Imagery.

    NASA Astrophysics Data System (ADS)

    Clark, Terry L.; Radke, Larry; Coen, Janice; Middleton, Don

    1999-10-01

    vortex tilting but in the sense that the tilted vortices come together to form the hairpin shape. As the vortices rise and come closer together their combined motion results in the vortex tilting forward at a relatively sharp angle, giving a hairpin shape. The development of these hairpin vortices over a range of scales may represent an important mechanism through which convection contributes to the fire spread.A major problem with the IR data analysis is understanding fully what it is that the camera is sampling, in order physically to interpret the data. The results indicate that because of the large amount of after-burning incandescent soot associated with the crown fire, the camera was viewing only a shallow depth into the flame front, and variabilities in the distribution of hot soot particles provide the structures necessary to derive image flow fields. The coherency of the derived horizontal velocities supports this view, because if the IR camera were seeing deep into or through the flame front, then the effect of the ubiquitous vertical rotations almost certainly would result in random and incoherent estimates for the horizontal flow fields. Animations of the analyzed imagery showed a remarkable level of consistency in both horizontal and vertical velocity flow structures from frame to frame in support of this interpretation. The fact that the 2D image represents a distorted surface also must be taken into account when interpreting the data.Suggestions for further field experimentation, software development, and testing are discussed in the conclusions. These suggestions may further understanding of this topic and increase the utility of this type of analysis to wildfire research.

  8. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    PubMed

    Heaslip, Susan G; Iverson, Sara J; Bowen, W Don; James, Michael C

    2012-01-01

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ d⁻¹ but were as high as 167,797 kJ d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass d⁻¹ (up to 840 kg d⁻¹) or approximately 261 (up to 664) jellyfish d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass d⁻¹, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to
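    The reported figures can be cross-checked with simple arithmetic; all inputs below are taken directly from the abstract, and the energy density is merely the ratio they imply.

```python
# Back-of-envelope check of the reported figures (values from the abstract).
energy_per_day_kj = 66_018      # mean energy intake, kJ/d
mass_per_day_kg = 330           # mean wet mass consumed, kg/d
body_mass_kg = 455              # assumed mean body mass, kg

energy_density = energy_per_day_kj / mass_per_day_kg   # implied kJ per kg wet mass
fraction_body_mass = mass_per_day_kg / body_mass_kg

print(f"implied energy density: {energy_density:.0f} kJ/kg")          # ≈ 200 kJ/kg
print(f"intake as fraction of body mass: {fraction_body_mass:.0%}")   # ≈ 73%
```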

  9. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address the problem of high-precision attitude measurement of flying targets over a large space and wide field of view, and after comparing existing measurement methods, we propose a system in which two area-array CCDs assist in identification for a three-linear-CCD, multi-cooperative-target attitude measurement system. This avoids the nonlinear system errors, the large number of calibration parameters, and the overly complicated constraints among camera positions of the existing nine-linear-CCD spectroscopic test system. Mathematical models of the binocular-vision and three-linear-CCD test systems are established. Three red LED point lights form a triangle whose vertex coordinates are measured in advance with a coordinate measuring machine; three blue LED points are added on the sides of the triangle as auxiliaries so that the area-array CCDs can identify the three red LED points more easily, while the linear CCD cameras are fitted with red filters to block the blue LED points and reduce stray light. The area-array CCDs measure and identify the spots and compute the spatial coordinates of the red LED points, while the linear CCDs measure the three red spots to solve the linear-CCD test system, from which 27 solutions can be drawn. Using the coordinates measured by the area-array CCDs to assist the linear CCDs achieves spot identification and solves the difficult problem of multi-target identification for linear CCDs. Exploiting the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed, so that the position of the spot's energy center changes little over the depth-of-convergence range in the direction perpendicular to the optical axis, ensuring high-precision image quality. The complete test system improves both the speed and the precision of spatial-object attitude measurement.

  10. Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries

    SciTech Connect

    Dierberg, F.E.; Zaitzeff, J.

    1997-08-01

    After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R²) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R² = 0.94, 0.82 and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.
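    A bivariate linear regression of the kind described (upwelling energy on Chl a and true color) can be sketched with ordinary least squares. All data below are synthetic and purely illustrative; the real coefficients and R² would come from the ground-truth measurements.

```python
import numpy as np

# Synthetic stand-ins for the field data (units and coefficients hypothetical)
rng = np.random.default_rng(1)
chl_a = rng.uniform(2, 30, 40)           # chlorophyll a, ug/L
color = rng.uniform(10, 80, 40)          # true color, Pt-Co units
energy = 5.0 + 0.8 * chl_a - 0.2 * color + rng.normal(0, 0.5, 40)

# Ordinary least squares: energy ~ intercept + b1*chl_a + b2*color
X = np.column_stack([np.ones_like(chl_a), chl_a, color])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((energy - pred) ** 2) / np.sum((energy - energy.mean()) ** 2)
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.3f}")
```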

  11. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

    We have developed an imaging system for free and quantitative observation of human locomotion in the time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras that film human locomotion from all angles simultaneously. Images are transferred to the main graphics workstation and arranged into a 2D image matrix. The subject can be observed from arbitrary directions by selecting the viewpoint from the optimal image sequence in this matrix. The system can also reconstruct 4D models of the subject's moving body using the 60 images taken from all directions at one particular time, and it can visualize inner structures, such as the subject's skeletal or muscular systems, by compositing computer graphics reconstructed from an MRI data set. We are planning to apply this imaging system to clinical observation in orthopedics, rehabilitation, and sports science.

  12. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Technical Reports Server (NTRS)

    Bolas, M. T.; Fisher, S. S.

    1990-01-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer-controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported, with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  13. Video photographic considerations for measuring the proximity of a probe aircraft with a smoke seeded trailing vortex

    NASA Technical Reports Server (NTRS)

    Childers, Brooks A.; Snow, Walter L.

    1990-01-01

    Considerations for acquiring and analyzing 30 Hz video frames from charge coupled device (CCD) cameras mounted in the wing tips of a Beech T-34 aircraft are described. Particular attention is given to the characterization and correction of optical distortions inherent in the data.
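    Optical distortion of the kind mentioned is commonly characterized and corrected with a polynomial radial model. The sketch below assumes a two-term Brown-style radial model; the actual calibration used for the wing-tip cameras is not specified in the abstract.

```python
def undistort_radial(x_d, y_d, k1, k2, cx=0.0, cy=0.0):
    """Map a distorted image point to its corrected position with a two-term
    radial polynomial: x_u = cx + (x_d - cx) * (1 + k1*r^2 + k2*r^4)."""
    xd, yd = x_d - cx, y_d - cy
    r2 = xd ** 2 + yd ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2
    return cx + xd * scale, cy + yd * scale

# a point at radius 0.5 (normalized coords) under mild barrel distortion (k1 < 0)
print(undistort_radial(0.3, 0.4, k1=-0.1, k2=0.01))
```

    The coefficients k1 and k2 would be estimated from images of a known target grid during camera characterization.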

  14. Improvement in the light sensitivity of the ultrahigh-speed high-sensitivity CCD with a microlens array

    NASA Astrophysics Data System (ADS)

    Hayashida, T.,; Yonai, J.; Kitamura, K.; Arai, T.; Kurita, T.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Kitagawa, S.; Hatade, K.; Yamaguchi, T.; Takeuchi, H.; Iida, K.

    2008-02-01

    We are advancing the development of ultrahigh-speed, high-sensitivity CCDs for broadcast use that are capable of capturing smooth slow-motion videos in vivid colors even where lighting is limited, such as at professional baseball games played at night. We have already developed a 300,000 pixel, ultrahigh-speed CCD, and a single CCD color camera that has been used for sports broadcasts and science programs using this CCD. However, there are cases where even higher sensitivity is required, such as when using a telephoto lens during a baseball broadcast or a high-magnification microscope during science programs. This paper provides a summary of our experimental development aimed at further increasing the sensitivity of CCDs using the light-collecting effects of a microlens array.

  15. Cryostat and CCD for MEGARA at GTC

    NASA Astrophysics Data System (ADS)

    Castillo-Domínguez, E.; Ferrusca, D.; Tulloch, S.; Velázquez, M.; Carrasco, E.; Gallego, J.; Gil de Paz, A.; Sánchez, F. M.; Vílchez Medina, J. M.

    2012-09-01

    MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is the new integral field unit (IFU) and multi-object spectrograph (MOS) instrument for the GTC. The spectrograph subsystems include the pseudo-slit, the shutter, the collimator with a focusing mechanism, pupil elements on a volume phase holographic grating (VPH) wheel and the camera joined to the cryostat through the last lens, with a CCD detector inside. In this paper we describe the full preliminary design of the cryostat which will harbor the CCD detector for the spectrograph. The selected cryogenic device is an LN2 open-cycle cryostat which has been designed by the "Astronomical Instrumentation Lab for Millimeter Wavelengths" at INAOE. A complete description of the cryostat main body and CCD head is presented as well as all the vacuum and temperature sub-systems to operate it. The CCD is surrounded by a radiation shield to improve its performance and is placed in a custom made mechanical mounting which will allow physical adjustments for alignment with the spectrograph camera. The 4k x 4k pixel CCD231 is our selection for the cryogenically cooled detector of MEGARA. The characteristics of this CCD, the internal cryostat cabling and CCD controller hardware are discussed. Finally, static structural finite element modeling and thermal analysis results are shown to validate the cryostat model.

  16. Evaluating the Effects of Camera Perspective in Video Modeling for Children with Autism: Point of View versus Scene Modeling

    ERIC Educational Resources Information Center

    Cotter, Courtney

    2010-01-01

    Video modeling has been used effectively to teach a variety of skills to children with autism. This body of literature is characterized by a variety of procedural variations including the characteristics of the video model (e.g., self vs. other, adult vs. peer). Traditionally, most video models have been filmed using third person perspective…

  17. Low-light-level EMCCD color camera

    NASA Astrophysics Data System (ADS)

    Heim, Gerald B.; Burkepile, Jon; Frame, Wayne W.

    2006-05-01

    Video cameras have increased in usefulness in military applications over the past four decades. This is a result of many advances in technology and because no one portion of the spectrum reigns supreme under all environmental and operating conditions. The visible portion of the spectrum has the clear advantage of ease of information interpretation, requiring little or no training. This advantage extends into the Near IR (NIR) spectral region to silicon cutoff with little difficulty. Inclusion of the NIR region is of particular importance due to the rich photon content of natural night illumination. The addition of color capability offers another dimension to target/situation discrimination and hence is highly desirable. A military camera must be small, lightweight and low power. Limiting resolution and sensitivity cannot be sacrificed to achieve color capability. Newly developed electron-multiplication CCD sensors (EMCCDs) open the door to a practical low-light/all-light color camera without an image intensifier. Ball Aerospace & Technologies Corp (BATC) has developed a unique color camera that allows the addition of color with a very small impact on low light level performance and negligible impact on limiting resolution. The approach, which includes the NIR portion of the spectrum along with the visible, requires no moving parts and is based on the addition of a sparse sampling color filter to the surface of an EMCCD. It renders the correct hue in a real time, video rate image with negligible latency. Furthermore, camera size and power impact is slight.

  18. The Dark Energy Survey CCD imager design

    SciTech Connect

    Cease, H.; DePoy, D.; Diehl, H.T.; Estrada, J.; Flaugher, B.; Guarino, V.; Kuk, K.; Kuhlmann, S.; Schultz, K.; Schmitt, R.L.; Stefanik, A.; /Fermilab /Ohio State U. /Argonne

    2008-06-01

    The Dark Energy Survey is planning to use a 3 sq. deg. camera that houses a ~0.5 m diameter focal plane of 62 2k x 4k CCDs. The camera vessel, including the optical window cell, focal plate, focal plate mounts, cooling system, and thermal controls, is described. As part of the development of the mechanical and cooling design, a full-scale prototype camera vessel has been constructed and is now being used for multi-CCD readout tests. Results from this prototype camera are described.

  19. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information-interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data-stream de-interleaving and de-interlacing, color-space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with a minimal board size.
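    The color-space conversion module in such a pipeline typically follows ITU-R BT.601, the standard family for CVBS-derived SDTV video. A floating-point reference of the full-range RGB-to-YCbCr transform (an FPGA would implement this in fixed point) might look like:

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion (floating-point reference)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr_bt601(255, 255, 255))  # → (255, 128, 128): white is full luma, neutral chroma
```

    In hardware the multiplications are usually realized with shifted integer coefficients and a pipeline register per stage to meet timing.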

  20. Testing fully depleted CCD

    NASA Astrophysics Data System (ADS)

    Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan

    2014-08-01

    The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted, and back-illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s; this is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were the photon transfer curve (PTC), charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics, and quantum efficiency. The X-ray images were also used to analyze charge diffusion for different substrate voltages (VSUB). Regarding cosmetics, in addition to white and dark pixels, some patterns were also found. The first, which appears in all devices, is the presence of half circles at the external edges; the origin of this pattern may be related to the assembly process. A second appears in the dark images and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, in just two devices, there is a spot with wavelength dependence whose origin could be a defective coating process.
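    The photon transfer curve test mentioned above yields the system gain from the shot-noise relation gain = mean signal / variance (in ADU). A minimal sketch on synthetic flat fields follows, using the common pair-difference variant that cancels fixed-pattern noise; the actual PAU calibration procedure may differ in detail.

```python
import numpy as np

# Synthetic sensor: Poisson photoelectrons converted to ADU at a known gain
rng = np.random.default_rng(2)
gain_true = 2.0                                # e-/ADU
electrons = rng.poisson(8000, size=(2, 512, 512))
flat1, flat2 = electrons / gain_true           # two flat-field frames in ADU

# Difference of two flats removes fixed-pattern noise; per-frame shot
# variance is half the variance of the difference.
diff_var = np.var(flat1 - flat2) / 2
mean_sig = (flat1.mean() + flat2.mean()) / 2
gain_est = mean_sig / diff_var
print(f"estimated gain: {gain_est:.2f} e-/ADU")   # ≈ 2.0
```

    Repeating this over a range of exposure levels traces out the full PTC, from which full-well and read noise are also extracted.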

  1. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  2. Optical signal processing of video surveillance for recognizing and measurement location railway infrastructure elements

    NASA Astrophysics Data System (ADS)

    Diyazitdinov, Rinat R.; Vasin, Nikolay N.

    2016-03-01

    Processing of the optical signals received from the CCD sensors of video cameras makes it possible to extend the functionality of video surveillance systems. Traditional video surveillance systems are used for storing, transmitting, and preprocessing video content from controlled objects. Processing the video signal with analytics systems provides more information about an object's location and movement and the flow of technological processes, and allows other parameters to be measured. For example, signal processing in video surveillance systems installed on carriage-laboratories is used to obtain information about certain parameters of the railways. This article describes two video processing algorithms that recognize pedestrian crossings of the railways and measure the location of the so-called "Anchor Marks" used to control the mechanical stresses of continuous welded rail track. The algorithms are based on the principle of determining a region of interest (ROI) and then analyzing the fragments inside this ROI.

  3. CCD Photometer Installed on the Telescope - 600 OF the Shamakhy Astrophysical Observatory II. The Technique of Observation and Data Processing of CCD Photometry

    NASA Astrophysics Data System (ADS)

    Abdullayev, B. I.; Gulmaliyev, N. I.; Majidova, S. O.; Mikayilov, Kh. M.; Rustamov, B. N.

    2009-12-01

    Basic technical characteristics of the CCD matrix U-47 made by Apogee Alta Instruments Inc. are provided. A short description of the various noise sources introduced by the optical system and the CCD camera is presented. The technique of acquiring calibration frames (bias, dark, and flat field) and the main stages of processing CCD photometry results are described.

  4. EL Sistema CCD de Tonantzintla. Pruebas Y Planes Futuros

    NASA Astrophysics Data System (ADS)

    Cardona, O.; Chavira, E.; Furenlid, L.; Iriarte, B.

    1987-05-01

    We present results of the laboratory tests of the CCD camera system recently acquired by INAOE, as well as the theoretical and observational performance of the instrument on the one-meter telescope of UNAM. The system has a TI 4849 CCD with 390 × 584 pixels. We also present future plans for its use on the new 2.1 m telescope at Cananea, Sonora.

  5. Development of low-noise CCD drive electronics for the world space observatory ultraviolet spectrograph subsystem

    NASA Astrophysics Data System (ADS)

    Salter, Mike; Clapp, Matthew; King, James; Morse, Tom; Mihalcea, Ionut; Waltham, Nick; Hayes-Thakore, Chris

    2016-07-01

    World Space Observatory Ultraviolet (WSO-UV) is a major Russian-led international collaboration to develop a large space-borne 1.7 m Ritchey-Chrétien telescope and instrumentation to study the universe at ultraviolet wavelengths between 115 nm and 320 nm, exceeding the current capabilities of ground-based instruments. The WSO Ultraviolet Spectrograph subsystem (WUVS) is led by the Institute of Astronomy of the Russian Academy of Sciences and consists of two high resolution spectrographs covering the Far-UV range of 115-176 nm and the Near-UV range of 174-310 nm, and a long-slit spectrograph covering the wavelength range of 115-305 nm. The custom-designed CCD sensors and cryostat assemblies are being provided by e2v technologies (UK). STFC RAL Space is providing the Camera Electronics Boxes (CEBs) which house the CCD drive electronics for each of the three WUVS channels. This paper presents the results of the detailed characterisation of the WUVS CCD drive electronics. The electronics include a novel high-performance video channel design that utilises Digital Correlated Double Sampling (DCDS) to enable low-noise readout of the CCD at a range of pixel frequencies, including a baseline requirement of less than 3 electrons rms readout noise for the combined CCD and electronics system at a readout rate of 50 kpixels/s. These results illustrate the performance of this new video architecture as part of a wider electronics sub-system that is designed for use in the space environment. In addition to the DCDS video channels, the CEB provides all the bias voltages and clocking waveforms required to operate the CCD and the system is fully programmable via a primary and redundant SpaceWire interface. The development of the CEB electronics design has undergone critical design review and the results presented were obtained using the engineering-grade electronics box. A variety of parameters and tests are included ranging from general system metrics, such as the power and mass

  6. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  7. CCD photometry using a wide-field Newtonian telescope.

    NASA Astrophysics Data System (ADS)

    Menako, C. R.; Henson, G. D.; Castelaz, M. A.; Powell, H. D.

    1996-01-01

    The paper demonstrates the utility of a CCD electronic-imaging camera at the focus of a wide-field Newtonian telescope as an efficient system for astronomical photometry. The CCD camera coupled to the wide-field telescope images one square degree of the sky, allowing simultaneous light-flux measurement of multiple stars without repositioning the instrument. Photometric data acquired from the variable star W UMa using this system are compared to published values.
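    Capturing many stars per frame, as described, enables differential photometry: the target's brightness is expressed relative to a comparison star on the same CCD frame through the standard magnitude relation Δm = -2.5 log₁₀(F_target / F_comparison).

```python
import math

def delta_magnitude(flux_target, flux_comparison):
    """Differential magnitude between a target and a comparison star
    measured on the same CCD frame (standard photometry relation)."""
    return -2.5 * math.log10(flux_target / flux_comparison)

# a target with half the comparison star's flux is ~0.753 mag fainter
print(round(delta_magnitude(5000.0, 10000.0), 3))  # → 0.753
```

    Because both stars are measured through the same atmosphere at the same instant, first-order extinction and transparency variations cancel, which is what makes the wide field valuable.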

  8. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential for evaluating the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. This confirmed that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  9. The Pulkovo CCD Spectroheliograph - Magnetograph

    NASA Astrophysics Data System (ADS)

    Pafinenko, L. D.

    The CCD spectroheliograph-magnetograph is a focal-plane ancillary instrument for the Pulkovo horizontal solar telescope ACU-5. The instrument is placed at an exit port of an isothermal high-resolution diffraction-grating spectrograph. A modified Leighton optical scheme is used for recording sunspot magnetic fields. The instrument produces digital FITS maps of radial velocities, magnetic fields, and spectroheliograms in any line within the spectral region 3900 Å - 11000 Å. The time to obtain one map of size 91″ × 154″ is 10.24 s. The angular resolution of the instrument is 0.8″; the spectral resolution is 0.01-0.03 Å. Remote real-time access to the solar telescope is provided via Internet technologies.

  10. Fast measurement of temporal noise of digital camera's photosensors

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    Currently, photo- and videocameras are widespread parts of both scientific experimental setups and consumer applications. They are used in optics, radiophysics, astrophotography, chemistry, and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random component, while spatial noise includes the pattern component. The spatial part is usually several times lower in magnitude than the temporal part, so to a first approximation spatial noise can be neglected. Earlier, we proposed a modification of the automatic segmentation of non-uniform targets (ASNT) method for measuring the temporal noise of photo- and videocameras. Only two frames are sufficient for noise measurement with the modified method; as a result, the proposed ASNT modification allows fast and accurate measurement of temporal noise. In this paper, we estimated the light and dark temporal noise of four cameras of different types using the modified ASNT method with only several frames: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PLB781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. We also measured the time required to process the shots used for temporal noise estimation. The results demonstrate that the dependence of a camera's full temporal noise on signal value can be obtained quickly with the proposed ASNT modification.
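    The two-frame idea can be illustrated with the basic difference estimator that underlies such methods: subtracting two frames of the same scene cancels the fixed spatial pattern, and the temporal variance is half the variance of the difference. This sketch is a simplification; the actual ASNT modification additionally segments the target by signal level.

```python
import numpy as np

def temporal_noise_two_frames(frame_a, frame_b):
    """Estimate per-pixel temporal noise (std, in ADU) from two frames of the
    same scene: differencing cancels the spatial (pattern) component, and the
    variance of the difference is twice the temporal variance."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    return np.sqrt(np.var(diff) / 2)

# synthetic sensor: fixed pattern plus Gaussian temporal noise of std 3 ADU
rng = np.random.default_rng(3)
pattern = rng.uniform(100, 110, size=(480, 640))
f1 = pattern + rng.normal(0, 3, pattern.shape)
f2 = pattern + rng.normal(0, 3, pattern.shape)
print(f"{temporal_noise_two_frames(f1, f2):.2f}")  # ≈ 3 ADU
```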

  11. Car speed estimation based on cross-ratio using video data of car-mounted camera (black box).

    PubMed

    Han, Inhwan

    2016-12-01

    This paper proposes several methods for using footage from a car-mounted camera (car black box) to estimate the speed of the car carrying the camera, or the speed of other cars. This enables estimating car velocities directly from recorded footage without needing the specific physical locations of the cars shown in the recorded material. To achieve this, the study collected 96 black-box recordings and classified them for analysis based on factors such as travel circumstances and directions. With these data, several case studies were conducted on estimating the speed of the camera-mounted car, and of other cars in the recorded footage while the camera-mounted car was stationary or moving. Additionally, a rough method for estimating the speed of other cars moving through a curvilinear path is described, along with its analysis results, for practical use. Speed estimations made using the cross-ratio were compared with the results of the traditional footage-analysis method and with GPS calculations for camera-mounted cars, demonstrating the method's applicability.
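    The cross-ratio underlying the method is the projective invariant of four collinear points: it survives perspective projection, which is what lets distances measured in the image be tied to known spacings on the road. A minimal numeric sketch:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates
    along the line: (AC * BD) / (AD * BC)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# invariance sketch: a projective (Moebius) map x -> (2x + 1) / (x + 3)
# leaves the cross-ratio of four points unchanged
pts = [0.0, 1.0, 2.0, 4.0]
proj = [(2 * x + 1) / (x + 3) for x in pts]
print(round(cross_ratio(*pts), 6), round(cross_ratio(*proj), 6))  # → 1.5 1.5
```

    In the speed-estimation setting, three of the points come from known road features (e.g. lane-marking intervals) and the fourth is the tracked car, so its real-world position, and hence speed between frames, can be solved from the invariance.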

  12. Identification of Prey Captures in Australian Fur Seals (Arctocephalus pusillus doriferus) Using Head-Mounted Accelerometers: Field Validation with Animal-Borne Video Cameras

    PubMed Central

    Volpov, Beth L.; Hoskins, Andrew J.; Battaile, Brian C.; Viviant, Morgane; Wheatley, Kathryn E.; Marshall, Greg; Abernathy, Kyler; Arnould, John P. Y.

    2015-01-01

    This study investigated prey captures in free-ranging adult female Australian fur seals (Arctocephalus pusillus doriferus) using head-mounted 3-axis accelerometers and animal-borne video cameras. Acceleration data were used to identify individual attempted prey captures (APC), and video data were used to independently verify APC and prey types. Results demonstrated that head-mounted accelerometers could detect individual APC but were unable to distinguish among prey types (fish, cephalopod, stingray) or between successful captures and unsuccessful capture attempts. The mean detection rate (true positive rate) on individual animals in the testing subset ranged from 67-100%, and mean detection on the testing subset averaged across 4 animals ranged from 82-97%. The mean false positive (FP) rate ranged from 15-67% individually in the testing subset, and 26-59% averaged across 4 animals. Surge and sway had significantly greater detection rates, but conversely also greater FP rates, compared to heave. Video data also indicated that some head movements recorded by the accelerometers were unrelated to APC and that a peak in acceleration variance did not always equate to an individual prey item. The results of the present study indicate that head-mounted accelerometers provide a complementary tool for investigating foraging behaviour in pinnipeds, but that detection and FP correction factors need to be applied for reliable field application. PMID:26107647
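    Detection and FP rates of this kind reduce to arithmetic on event counts scored against the video ground truth. In the sketch below, the counts are invented and the FP rate is computed as the share of detections that were spurious, which is one plausible reading of the paper's metric (the exact definition is an assumption here):

```python
def detection_rates(true_pos, false_neg, false_pos):
    """Detection (true positive) rate and false positive rate from event
    counts scored against ground truth. FP rate here = FP / (TP + FP),
    i.e. the fraction of detections that were spurious (assumed definition)."""
    tpr = true_pos / (true_pos + false_neg)
    fpr = false_pos / (true_pos + false_pos)
    return tpr, fpr

# hypothetical counts for one animal: 82 hits, 18 misses, 30 spurious detections
print(detection_rates(true_pos=82, false_neg=18, false_pos=30))  # ≈ (0.82, 0.27)
```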

  13. Identification of Prey Captures in Australian Fur Seals (Arctocephalus pusillus doriferus) Using Head-Mounted Accelerometers: Field Validation with Animal-Borne Video Cameras.

    PubMed

    Volpov, Beth L; Hoskins, Andrew J; Battaile, Brian C; Viviant, Morgane; Wheatley, Kathryn E; Marshall, Greg; Abernathy, Kyler; Arnould, John P Y

    2015-01-01

    This study investigated prey captures in free-ranging adult female Australian fur seals (Arctocephalus pusillus doriferus) using head-mounted 3-axis accelerometers and animal-borne video cameras. Acceleration data were used to identify individual attempted prey captures (APC), and video data were used to independently verify APC and prey types. Results demonstrated that head-mounted accelerometers could detect individual APC but were unable to distinguish among prey types (fish, cephalopod, stingray) or between successful captures and unsuccessful capture attempts. The mean detection rate (true-positive rate) on individual animals in the testing subset ranged from 67% to 100%, and mean detection on the testing subset averaged across 4 animals ranged from 82% to 97%. The mean false-positive (FP) rate ranged from 15% to 67% for individual animals in the testing subset, and from 26% to 59% averaged across 4 animals. Surge and sway had significantly greater detection rates, but also greater FP rates, compared to heave. Video data also indicated that some head movements recorded by the accelerometers were unrelated to APC and that a peak in acceleration variance did not always equate to an individual prey item. The results of the present study indicate that head-mounted accelerometers provide a complementary tool for investigating foraging behaviour in pinnipeds, but detection and FP correction factors need to be applied for reliable field application.

  14. CCD star trackers

    NASA Technical Reports Server (NTRS)

    Goss, W. C.

    1975-01-01

    The application of CCDs to star trackers and star mappers is considered. Advantages and disadvantages of silicon CCD star trackers are compared with those of image dissector star trackers. It is concluded that the CCD has adequate sensitivity for most single-star tracking tasks and is distinctly superior in multiple-star tracking or mapping applications. The signal and noise figures of several current CCD configurations are discussed. The basic structure of the required signal processing is described, and it is shown that resolution in excess of the number of CCD elements may be obtained by interpolation.
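The sub-element interpolation mentioned at the end of the abstract is typically an intensity-weighted centroid across the star image; a minimal 1-D sketch (illustrative, not the paper's specific signal processing):

```python
def subpixel_centroid(pixels):
    """Intensity-weighted centroid of a 1-D slice through a star image.
    Because the star image spreads over several CCD elements, the
    weighted mean locates the star far more finely than the element
    pitch itself."""
    total = sum(pixels)
    return sum(i * p for i, p in enumerate(pixels)) / total
```

In practice a background estimate is subtracted first and a 2-D centroid is taken over a small window around the brightest element.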

  15. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. L.

    1992-01-01

    A small camera system is described for remote viewing applications that employs fisheye optics and electronic processing to provide pan, tilt, zoom, and rotational movements. The fisheye lens is designed to give a complete hemispherical FOV with significant peripheral distortion that is corrected with high-speed electronic circuitry. Flexible control of the viewing requirements is provided by a programmable transformation processor so that pan/tilt/rotation/zoom functions can be accomplished without mechanical movements. Images are presented that were taken with a prototype system using a CCD camera, and 5 frames/sec can be acquired from a 180-deg FOV. The image-transformation device can provide multiple images with different magnifications and pan/tilt/rotation sequences at frame rates compatible with conventional video devices. The system is of interest for object tracking, surveillance, and viewing in constrained environments that would otherwise require the use of several cameras.
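The electronic pan/tilt transform can be sketched as a per-pixel lookup from a virtual perspective view into an equidistant fisheye image (r = f * theta). The actual system uses a dedicated programmable transformation processor; the rotation order and projection model here are assumptions for illustration.

```python
import math

def fisheye_lookup(pan, tilt, u, v, f_virtual, f_fisheye):
    """Map pixel (u, v) of a virtual pan/tilt perspective view to the
    (x, y) position in an equidistant fisheye image. A minimal sketch
    of the dewarping geometry, not the hardware implementation."""
    # Ray through the virtual-camera pixel.
    x, y, z = u, v, f_virtual
    # Rotate by tilt about the x-axis, then by pan about the y-axis.
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    # Equidistant fisheye projection of the rotated ray: r = f * theta.
    norm = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(max(-1.0, min(1.0, z / norm)))
    phi = math.atan2(y, x)
    r = f_fisheye * theta
    return r * math.cos(phi), r * math.sin(phi)
```

Running this lookup for every pixel of the output view (with bilinear resampling of the fisheye image) yields the dewarped pan/tilt/zoom image without any mechanical movement.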

  16. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera, comparing student physical therapists with an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89), while the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
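The ICC values reported above are computed from a table of ratings. The abstract does not state which ICC form was used, so the one-way random-effects ICC(1,1) below is only an illustrative choice.

```python
def icc_one_way(ratings):
    """ICC(1,1): one-way random-effects intra-class correlation for a
    table of n subjects (rows) rated by k raters (columns).
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC of 1.0 indicates perfect agreement; values near 0 (or negative) indicate that rater disagreement swamps the differences between subjects.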

  17. The Effect of Smartphone Video Camera as a Tool to Create Digital Stories for English Learning Purposes

    ERIC Educational Resources Information Center

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature are recognised as suitable learning tools. This paper reports on a case study conducted…

  18. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  19. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    ERIC Educational Resources Information Center

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  20. Determining the frequency of open windows in motor vehicles: a pilot study using a video camera in Houston, Texas during high temperature conditions.

    PubMed

    Long, Tom; Johnson, Ted; Ollison, Will

    2002-05-01

    Researchers have developed a variety of computer-based models to estimate population exposure to air pollution. These models typically estimate exposures by simulating the movement of specific population groups through defined microenvironments. Exposures in the motor vehicle microenvironment are significantly affected by air exchange rate, which in turn is affected by vehicle speed, window position, vent status, and air conditioning use. A pilot study was conducted in Houston, Texas, during September 2000 for a specific set of weather, vehicle speed, and road type conditions to determine whether useful information on the position of windows, sunroofs, and convertible tops could be obtained through the use of video cameras. Monitoring was conducted at three sites (two arterial roads and one interstate highway) on the perimeter of Harris County located in or near areas not subject to mandated Inspection and Maintenance programs. Each site permitted an elevated view of vehicles as they proceeded through a turn, thereby exposing all windows to the stationary video camera. Five videotaping sessions were conducted over a two-day period in which the Heat Index (HI), a function of temperature and humidity, varied from 80 to 101 degrees F and vehicle speed varied from 30 to 74 mph. The resulting videotapes were processed to create a master database listing vehicle-specific data for site location, date, time, vehicle type (e.g., minivan), color, window configuration (e.g., four windows and sunroof), number of windows in each of three position categories (fully open, partially open, and closed), HI, and speed. Of the 758 vehicles included in the database, 140 (18.5 percent) were labeled as "open," indicating a window, sunroof, or convertible top was fully or partially open. 
The results of a series of stepwise linear regression analyses indicated that the probability of a vehicle in the master database being "open" was weakly affected by time of day, vehicle type, vehicle color

  1. Overview of a hybrid underwater camera system

    NASA Astrophysics Data System (ADS)

    Church, Philip; Hou, Weilin; Fournier, Georges; Dalgleish, Fraser; Butler, Derek; Pari, Sergio; Jamieson, Michael; Pike, David

    2014-05-01

    The paper provides an overview of a Hybrid Underwater Camera (HUC) system combining sonar with a range-gated laser camera system. The sonar is the BlueView P900-45, operating at 900 kHz with a field of view of 45 degrees and a ranging capability of 60 m. The range-gated laser camera system is based on the third-generation LUCIE (Laser Underwater Camera Image Enhancer) sensor originally developed by Defence Research and Development Canada. LUCIE uses an eye-safe laser generating 1 ns pulses at a wavelength of 532 nm and at a rate of 25 kHz. An intensified CCD camera operates with a gating mechanism synchronized with the laser pulse. The gate opens to let the camera capture photons from a given range of interest and can be set from a minimum delay of 5 ns with increments of 200 ps. The output of the sensor is a 30 Hz video signal. Automatic ranging is achieved using a sonar altimeter. The BlueView sonar and LUCIE sensors are integrated with an underwater computer that controls the sensor parameters and displays the real-time data for the sonar and the laser camera. As an initial step toward data integration, graphics overlays representing the laser camera field of view along with the gate position and width are overlaid on the sonar display. The HUC system can be manually handled by a diver and can also be controlled from a surface vessel through an umbilical cord. Recent test data obtained from the HUC system operated in a controlled underwater environment will be presented along with measured performance characteristics.
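The relationship between gate delay and range of interest follows from the round-trip travel time of the laser pulse in water. A small sketch; the refractive index and function names are assumptions for illustration, not figures from the paper.

```python
C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_WATER = 1.33       # approximate refractive index of sea water

def gate_delay_ns(target_range_m):
    """Round-trip time for a laser pulse to a target at the given range,
    i.e. the delay at which the intensified CCD gate should open."""
    return 2.0 * target_range_m * N_WATER / C_VACUUM * 1e9

def gate_range_m(delay_ns):
    """Inverse: the range of interest selected by a given gate delay."""
    return delay_ns * 1e-9 * C_VACUUM / (2.0 * N_WATER)
```

With these constants, the sensor's 5 ns minimum delay corresponds to a closest gated range of roughly 0.56 m, and each 200 ps increment steps the gate by about 2.3 cm.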

  2. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives.

    PubMed

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in the various macromolecular solutions is fitted well by a stretched exponential function based on the nucleation and growth model. The maximum speed of the contractile process monotonically decreases with an increase in the external viscosity, in accordance with power-law behavior. The index values are approximately 0.5, which suggests that the viscous energy dissipated by the contraction of Vorticella sp. is constant in a macromolecular environment.
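The two model forms named in the abstract (a stretched exponential for contraction length versus time, and a power law for peak speed versus viscosity) can be written down directly. Parameter values in this sketch are illustrative, not the paper's fits.

```python
import math

def contraction_length(t, l0, tau, beta):
    """Stretched-exponential contraction curve based on a nucleation
    and growth model: L(t) = L0 * (1 - exp(-(t / tau)**beta))."""
    return l0 * (1.0 - math.exp(-((t / tau) ** beta)))

def max_speed(eta, v_ref, eta_ref=1.0, index=0.5):
    """Power-law dependence of peak contraction speed on external
    viscosity, v ~ eta**(-index), with index ~ 0.5 as reported."""
    return v_ref * (eta / eta_ref) ** (-index)
```

The index of 0.5 means quadrupling the viscosity halves the peak speed, which is what keeps the dissipated viscous energy (proportional to viscosity times speed squared) constant.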

  3. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives

    PubMed Central

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in the various macromolecular solutions is fitted well by a stretched exponential function based on the nucleation and growth model. The maximum speed of the contractile process monotonically decreases with an increase in the external viscosity, in accordance with power-law behavior. The index values are approximately 0.5, which suggests that the viscous energy dissipated by the contraction of Vorticella sp. is constant in a macromolecular environment. PMID:27857603

  4. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  6. Real-time 3D video utilizing a compressed sensing time-of-flight single-pixel camera

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew P.; Sun, Ming-Jie; Gibson, Graham M.; Spalding, Gabriel C.; Phillips, David B.; Padgett, Miles J.

    2016-09-01

    Time-of-flight 3D imaging is an important tool for applications such as remote sensing, machine vision and autonomous navigation. Conventional time-of-flight three-dimensional imaging systems, which utilize a raster-scanned laser to measure the range of each pixel in the scene sequentially, inherently have acquisition times that scale directly with the resolution. Here we show a modified time-of-flight 3D camera employing structured illumination, which uses a visible camera to enable a novel compressed sensing technique, minimising the acquisition time as well as providing a high-resolution reflectivity map for image overlay. Furthermore, a quantitative assessment of the 3D imaging performance is provided.
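The single-pixel measurement principle behind such a camera can be sketched with structured (Hadamard) illumination: each projected pattern yields one detector value, and the scene is recovered by inverting the pattern basis. True compressed sensing uses only a subset of patterns together with a sparse solver; this full-basis version is a simplified illustration, not the authors' method.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def measure(scene, patterns):
    """Single-pixel detector: one total-intensity value per projected pattern."""
    return [sum(p * s for p, s in zip(pat, scene)) for pat in patterns]

def reconstruct(measurements, patterns):
    """Recover the scene from a complete Hadamard basis, using
    H^-1 = H^T / n (rows and columns of H are orthogonal)."""
    n = len(patterns)
    return [sum(patterns[k][i] * measurements[k] for k in range(n)) / n
            for i in range(n)]
```

The compressed-sensing gain comes from dropping most of the patterns: a sparse scene can be recovered from far fewer measurements than pixels, which is what decouples acquisition time from resolution.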

  7. A CCD offset guider for the KAO

    NASA Technical Reports Server (NTRS)

    Colgan, Sean W. J.; Erickson, Edwin F.; Haynes, Fredric B.; Rank, David M.

    1995-01-01

    We describe a focal plane guider for the Kuiper Airborne Observatory which consists of a CCD camera interfaced to an AMIGA personal computer. The camera is made by Photometrics Ltd. and utilizes a Thomson 576 x 384 pixel CCD chip operated in Frame Transfer mode. Custom optics produce a scale of 2.4 arc-sec/pixel, yielding an approx. 12 ft. diameter field of view. Chopped images of stars with HST Guide Star Catalog magnitudes fainter than 14 have been used for guiding at readout rates greater than or equal to 0.5 Hz. The software includes automatic map generation, subframing and zooming, and correction for field rotation when two stars are in the field of view.

  8. The future scientific CCD

    NASA Technical Reports Server (NTRS)

    Janesick, J. R.; Elliott, T.; Collins, S.; Marsh, H.; Blouke, M. M.

    1984-01-01

    Since the first introduction of charge-coupled devices (CCDs) in 1970, CCDs have been considered for applications related to memories, logic circuits, and the detection of visible radiation. It is pointed out, however, that the mass-market orientation of CCD development has left largely untapped the enormous potential of these devices for advanced scientific instrumentation. The objective of the present paper is therefore to introduce CCD characteristics to the scientific community, taking into account prospects for further improvement. Attention is given to evaluation criteria, a summary of current CCDs, CCD performance characteristics, absolute calibration tools, quantum efficiency, aspects of charge collection, charge transfer efficiency, read noise, and predictions regarding the characteristics of the next generation of silicon scientific CCD imagers.

  9. Dual-Sampler Processor Digitizes CCD Output

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1986-01-01

    A circuit for processing the output of a charge-coupled-device (CCD) imager provides increased time for analog-to-digital conversion, thereby reducing the bandwidth required for video processing. Instead of the single sample-and-hold circuit of a conventional processor, the improved processor includes two sample-and-hold circuits that are alternated with each other. The dual-sampler processor operates with lower bandwidth and with timing requirements less stringent than those of a single-sampler processor.
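The ping-pong arrangement can be sketched as a schedule: successive pixels alternate between the two sample-and-hold circuits, so each held value persists for two pixel periods, roughly doubling the time available for conversion. A toy model of the scheduling idea, not the actual circuit timing.

```python
def pingpong_schedule(n_pixels):
    """Assign successive CCD output pixels alternately to two
    sample-and-hold circuits, 'A' and 'B'."""
    return ['A' if i % 2 == 0 else 'B' for i in range(n_pixels)]

def hold_time_periods(schedule, sampler):
    """Pixel periods between successive samples on one circuit,
    i.e. the window available to digitize each held value."""
    idx = [i for i, s in enumerate(schedule) if s == sampler]
    return [b - a for a, b in zip(idx, idx[1:])]
```

Each circuit sees every other pixel, so its hold window is two pixel periods instead of one; the ADC and video chain can therefore run at half the analog bandwidth.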

  10. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. 
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable
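The frame-rate benefit of per-ROI readout can be illustrated with a back-of-envelope timing model; the pixel rate and overhead figures are hypothetical, not the camera's specifications.

```python
def readout_time_ms(rois, pixel_rate_hz, line_overhead_us=0.0):
    """Approximate CCD readout time for a set of rectangular ROIs
    (width, height in pixels), assuming time scales with the number
    of pixels read plus a per-line transfer overhead."""
    total_us = 0.0
    for width, height in rois:
        total_us += height * (width * 1e6 / pixel_rate_hz + line_overhead_us)
    return total_us / 1000.0

def max_frame_rate_hz(rois, pixel_rate_hz, line_overhead_us=0.0):
    """Frame rate attainable when only the listed ROIs are read out."""
    return 1000.0 / readout_time_ms(rois, pixel_rate_hz, line_overhead_us)
```

At an assumed 10 Mpixel/s rate, a full 640 x 480 frame takes about 31 ms (roughly 33 frames/s), while two 64 x 64 ROIs take under 1 ms, which is why restricting readout to small tracking windows raises the attainable frame rate by orders of magnitude.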

  11. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    The position of the aerial camera focal plane is critical to imaging quality. In order to measure the focal-plane displacement introduced during maintenance, a new micro-displacement measuring system for the aerial camera focal plane, based on a Michelson interferometer, has been designed in this paper. It rests on the phase-modulation principle and uses interference to measure the micro-displacement of the focal plane. The system takes a He-Ne laser as the light source and uses the Michelson interference arrangement to produce interference fringes; the fringes change periodically as the focal plane moves, and recording the periodic changes of the fringes yields the focal-plane displacement. A linear CCD and its driving system pick up the interference fringes, and, relying on the frequency-conversion and differentiating system, the system determines the direction in which the focal plane moves. After collection, filtering, amplification, threshold comparison, and counting, the CCD video signals of the interference fringes are sent to a computer, processed automatically, and the focal-plane micro-displacement is output. As a result, the focal-plane micro-displacement can be measured automatically by this system. Using a linear CCD to pick up the fringes greatly improves the counting accuracy, nearly eliminating manual counting error and improving the measurement accuracy of the system. The experiments demonstrate a focal-plane displacement measurement accuracy of 0.2 nm, while laboratory tests and flights show that the focal-plane positioning is accurate and satisfies the imaging requirements of the aerial camera.
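The fringe-counting principle reduces to displacement d = N * lambda / 2 for a Michelson interferometer: the measurement mirror (here, the focal plane) moves half a wavelength per fringe period. A minimal sketch of the conversion, assuming the standard He-Ne line; the function names are illustrative.

```python
HE_NE_WAVELENGTH_NM = 632.8   # standard He-Ne laser line

def displacement_nm(fringe_count, wavelength_nm=HE_NE_WAVELENGTH_NM):
    """Displacement corresponding to a (possibly fractional) number of
    counted fringe periods: d = N * lambda / 2."""
    return fringe_count * wavelength_nm / 2.0

def fringes_for(target_nm, wavelength_nm=HE_NE_WAVELENGTH_NM):
    """Inverse: fringe periods produced by a given displacement."""
    return 2.0 * target_nm / wavelength_nm
```

One full fringe therefore corresponds to 316.4 nm of travel, so resolving displacements at the nanometre level requires subdividing each fringe period electronically, which is the role of the differentiating and counting chain described above.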

  12. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  13. Scientific CCD controller for the extreme environment at Antarctic

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-fei; Wang, Jian-min; Feng, Yi; Lin, Sheng-zhao; Chen, Jie; Wang, Jian

    2016-07-01

    A prototype scientific CCD detector system has been designed, implemented and tested for the extreme environment in the Antarctic, including the clock and bias drivers for the CCD chip, a video pre-amplifier, a video sampling circuit and an ultra-low-noise power supply. The influence of low temperature is fully considered in the electronics design. A low-noise readout system with a CCD47-20 was tested, and the readout noise is as low as 5 e- at a readout speed of 100 kpix/s. We simulated the extreme low-temperature environment of the Antarctic to test the system, and verified that the system is capable of long-term operation in an environment as cold as -80°C.

  14. CCD gate definition process

    NASA Astrophysics Data System (ADS)

    Bluzer

    1986-02-01

    The present invention utilizes a double masking step in a CCD gate definition process to eliminate re-entrant oxide, using a thin film layer other than photoresist to define the polysilicon gates: the thin film layer is defined with a double masking process before any of the polysilicon gate layer is etched. It is one object of the present invention, therefore, to provide an improved process for CCD gate definition. It is another object of the invention to provide an improved CCD gate definition process wherein a profiled oxide layer is produced over a polysilicon layer without re-entrant oxide regions. It is another object of the invention to provide an improved CCD gate definition process wherein a thin film layer is utilized to define the polysilicon gate layers. It is another object of the invention to provide an improved CCD gate definition process wherein the thin film layer is defined by a double masking process before any polysilicon layer is etched.

  15. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. This effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations raises several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. But, of course, fireball association is unequivocal only in those cases when two or more stations record the fireball, and when consequently the geocentric radiant is accurately determined. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y

  16. Calculating video meteor positions in a narrow-angle field with AIP4Win software - Comparison with the positions obtained by SPOSH cameras in a wide-angle field

    NASA Astrophysics Data System (ADS)

    Tsamis, Vagelis; Margonis, Anastasios; Christou, Apostolos

    2013-01-01

    We present an alternative way to calculate the positions of meteors captured in a narrow video field with a Watec camera and a 28 mm aspherical lens (FOV 11 degrees) using Astronomical Image Processing for Windows, V2, a classic astrometry and photometry software package. We calculated positions for two Perseid meteors in Lyra which were recorded in August 2010 at Mt. Parnon, Greece. We then compare our astrometric position results with those obtained by SPOSH cameras (FOV 120 degrees) for the same meteors.

  17. Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1993-01-01

    A description is presented of two enhancements for a two-camera video imaging system that increase the accuracy and efficiency of the system when applied to determining the three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem, i.e., the problem of determining, unambiguously, corresponding points in stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image, accomplished by means of an adaptive template-matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst-point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
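The epipolar-geometry enhancement rests on the constraint x'^T F x = 0: a point x in one image maps to a line l' = F x in the other, so the search for its correspondence is restricted to that line. A sketch with an illustrative fundamental matrix for a rectified (pure horizontal translation) stereo pair, not the wind-tunnel system's calibration.

```python
def epipolar_line(F, x):
    """Epipolar line l' = F @ x in the second image for a point x
    (homogeneous 2-D coordinates) in the first image."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def on_line(line, x_prime, tol=1e-9):
    """Check the epipolar constraint x'^T F x = x'^T l' = 0: a true
    correspondence of x must lie on its epipolar line."""
    return abs(sum(l * c for l, c in zip(line, x_prime))) < tol
```

For a continuous feature like a vortex core, intersecting the feature's trace in the second image with the epipolar line of each point in the first image resolves the correspondence without visible reference marks.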

  18. Effect of camera resolution and bandwidth on facial affect recognition.

    PubMed

    Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S

    2004-01-01

    This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) a one-chip charge-coupled device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at a bandwidth of 128 kilobits per second (kbps); (2) VHS and 768 kbps; (3) a three-chip CCD camera, commonly used for Betacam (Beta) grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam and 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and an actress via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test with the critical difference (CD) at the 0.05 alpha level = 1.71, revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras at either 768 kbps or 128 kbps bandwidth for facial affect recognition, rather than Betacam cameras.
The authors argue that the results of this study are a consequence of the

  19. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
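    The intra-camera geometry estimate (tilt angle, focal length, camera height) supports the pixels-to-meters conversion via a flat-ground pinhole model. A minimal sketch, assuming a level ground plane; the function name and parameter values are hypothetical, not from the paper.

```python
import math

def ground_distance(v, v0, f_px, tilt_deg, cam_height_m):
    """Horizontal distance from the camera base to the ground point that
    projects to image row v, for a pinhole camera tilted downward.
    v0: principal-point row; f_px: focal length in pixels;
    tilt_deg: downward tilt of the optical axis from the horizontal."""
    # Angle below the horizontal of the viewing ray through row v
    # (rows below the principal point look further downward).
    ray = math.radians(tilt_deg) + math.atan((v - v0) / f_px)
    if ray <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height_m / math.tan(ray)
```

    With a 45-degree tilt and the camera mounted 4 m up, a point imaged at the principal-point row lies 4 m from the camera base; rows lower in the image map to nearer ground points. Pedestrian detections of roughly known height constrain exactly these parameters, which is what makes the autocalibration possible.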

  20. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have for decades been a specialty market ruled by a few companies. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities, equivalent to ISO 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their smaller size and their higher speed.

  1. Distance measurement based on pixel variation of CCD images.

    PubMed

    Hsu, Chen-Chien; Lu, Ming-Chih; Wang, Wei-Yen; Lu, Yin-Yu

    2009-10-01

    This paper presents a distance measurement method based on pixel-number variation in CCD images, referenced to two arbitrarily designated points in the image frames. By establishing a relationship between the displacement of the camera along the photographing direction and the difference in pixel count between the reference points in the images, the distance to an object can be calculated with the proposed method. To integrate the measuring functions into digital cameras, a circuit design is proposed that implements the measuring system: selecting reference points, measuring distance, and displaying measurement results on the CCD panel of the digital camera. In comparison to pattern recognition or image analysis methods, the proposed approach is simple and straightforward to implement in digital cameras. To validate its performance, measurement results using the proposed method and ultrasonic rangefinders are also presented.
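    The abstract does not reproduce the paper's exact formula, but under a simple pinhole model the pixel separation s between two reference points on the object scales inversely with object distance, which yields a relation of the kind described. A sketch under that assumption, with a hypothetical function name:

```python
def object_distance(s_far_px, s_near_px, displacement_m):
    """Estimate the initial distance to an object from the change in
    pixel separation s between two reference points on it, after moving
    the camera a known displacement toward the object along the optical
    axis.  Pinhole model: s * d is constant, so
    s_far * d = s_near * (d - displacement).  Solving for d:
    d = s_near * displacement / (s_near - s_far)."""
    if s_near_px <= s_far_px:
        raise ValueError("separation must grow as the camera approaches")
    return s_near_px * displacement_m / (s_near_px - s_far_px)
```

    For example, if the separation grows from 100 to 125 pixels after the camera advances 1 m, the starting distance is 125 × 1 / (125 − 100) = 5 m.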

  2. The use of video for air pollution source monitoring

    SciTech Connect

    Ferreira, F.; Camara, A.

    1999-07-01

    The evaluation of air pollution impacts from single industrial emission sources is a complex environmental engineering problem. Recent developments in the multimedia technologies used by personal computers have improved the digitizing and processing of digital video sequences. This paper proposes a methodology in which statistical analysis of both meteorological and air quality data, combined with digital video images, is used for monitoring air pollution sources. One of the objectives of this paper is to present the use of image processing algorithms in air pollution source monitoring. Amateur CCD video cameras capture images that are further processed by computer. The use of video as a remote sensing system was implemented with the goal of determining particular parameters, either meteorological or related to air quality monitoring and modeling of point sources. These parameters include the remote calculation of wind direction, wind speed, stack gas outlet velocity, and the stack's effective emission height. The characteristics and behavior of a visible pollutant plume are also studied. Different sequences of relatively simple image processing operations are applied to the images gathered by the different cameras to segment the plume; the algorithms are selected depending on the atmospheric and lighting conditions. The developed system was applied to a 1,000 MW fuel power plant located at Setubal, Portugal. The methodology presented shows that digital video can be an inexpensive way to obtain useful air-pollution-related data for monitoring and modeling purposes.
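    The abstract mentions chains of relatively simple image-processing operations for plume segmentation; a background-difference-and-threshold step of that flavor might look like the sketch below. The function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def segment_plume(gray, background, threshold=25):
    """Crude plume segmentation: difference the current frame against a
    clear-sky background frame and threshold the absolute difference.
    gray, background: 2-D uint8 arrays of the same shape.
    Returns a boolean plume mask."""
    diff = np.abs(gray.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def plume_top_row(mask):
    """Highest image row reached by the plume mask (useful as input to a
    plume-rise or effective-emission-height estimate); None if empty."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.min()) if rows.size else None
```

    In practice the threshold and the choice of operations would change with the atmospheric and lighting conditions, as the paper notes.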

  3. A project plans to develop two ASICs for CCD controller

    NASA Astrophysics Data System (ADS)

    Song, Qian; Wei, Mingzhi; Sun, Quan; Zhang, Yuheng

    2016-07-01

    Astronomical instrumentation, in many cases, and especially large field-of-view applications where a huge mosaic CCD or CMOS camera is needed, requires the camera electronics to be much more compact and much smaller than controllers used to be. Making the major parts of the CCD driving circuits into an ASIC or ASICs can greatly reduce the controller's volume, weight and power consumption, and makes it easier to control the crosstalk caused by the long cables that connect the CCD output ports to the signal processing electronics; it is therefore the most desirable approach to building a large mosaic CCD camera. A project that endeavors to make two ASICs, one to perform CCD signal processing and another to provide the clock drives and bias voltages, is introduced. The first round of design of the two ASICs has been completed and the devices have just been manufactured. So far, testing of one of the two, the signal processing ASIC, has been partially completed, and its linearity meets the design requirement.

  4. CCD ACS Postflash Calibration

    NASA Astrophysics Data System (ADS)

    Chiaberge, Marco

    2011-10-01

    This activity provides a set of CCD FLASH exposure reference images for each current level/shutter-side combination, for the FLASH LED on the instrument side currently in use (one LED per instrument side). It also tests linearity by exploring a wide range of flash "on" times and current settings. Short-term repeatability is also tested at the shortest FLASH exposure times that are expected to be used (2.0 sec, LOW LED current setting).

  5. Cone penetrometer deployed in situ video microscope for characterizing sub-surface soil properties

    SciTech Connect

    Lieberman, S.H.; Knowles, D.S.; Kertesz, J.

    1997-12-31

    In this paper we report on the development and field testing of an in situ video microscope that has been integrated with a cone penetrometer probe in order to provide a real-time method for characterizing subsurface soil properties. The video microscope system consists of a miniature CCD color camera coupled with appropriate magnification and focusing optics to provide a field of view with a coverage of approximately 20 mm. The camera/optics system is mounted in a cone penetrometer probe so that the camera views the soil that is in contact with a sapphire window mounted on the side of the probe. The soil outside the window is illuminated by diffuse light provided through the window by an optical fiber illumination system connected to a white light source at the surface. The video signal from the camera is returned to the surface, where it can be displayed in real time on a video monitor, recorded on a video cassette recorder (VCR), and/or captured digitally with a frame grabber installed in a microcomputer system. In its highest-resolution configuration, the in situ camera system has demonstrated a capability to resolve particle sizes as small as 10 μm. By using other lens systems to increase the magnification factor, smaller particles could be resolved; however, the field of view would be reduced. Initial field tests have demonstrated the ability of the camera system to provide real-time qualitative characterization of soil particle sizes. In situ video images also reveal information on the porosity of the soil matrix and the presence of water in the saturated zone. Current efforts are focused on the development of automated image processing techniques as a means of extracting quantitative information on soil particle size distributions. Data will be presented comparing results derived from digital images with conventional sieve/hydrometer analyses.

  6. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H.; McCurnin, T.W.; Sanchez, P.G.

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm², with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5 × 10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via Ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5 × 10⁵ electron charges.
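    The double-correlated sampling performed by the external signal processor amounts to subtracting, per pixel, the sampled reset (reference) level from the sampled video level, so that reset switching noise common to both samples cancels. A minimal numerical sketch; the function and variable names are illustrative.

```python
import numpy as np

def correlated_double_sample(reset_levels, signal_levels):
    """Double-correlated sampling: for each pixel, subtract the sampled
    reset level from the sampled video level.  Any noise or offset
    common to the two samples (e.g. reset/kTC switching noise) cancels,
    leaving only the photo-generated signal."""
    return np.asarray(signal_levels, float) - np.asarray(reset_levels, float)
```

    Adding the same offset to both the reset and video samples leaves the output unchanged, which is exactly the common-mode rejection the technique provides.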

  7. Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI

    NASA Astrophysics Data System (ADS)

    Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

    2003-12-01

    In recent years, considerable effort has been made to improve the visual recording capabilities of Alvin and ROV Jason. This has culminated in the routine use of digital cameras, both internal and external, on these vehicles, which has greatly expanded the scientific recording capabilities of the NDSF. The UNOLS National Deep Submergence Facility (NDSF) archives maintained at Woods Hole Oceanographic Institution (WHOI) are the repository for the diverse suite of photographic still images (both 35mm and recently digital), video imagery, vehicle data and navigation, and near-bottom side-looking sonar data obtained by the facility vehicles. These data comprise a unique set of information from a wide range of seafloor environments over the more than 25 years of NDSF operations in support of science. Included in the holdings are Alvin data plus data from the tethered vehicles: ROV Jason, Argo II, and the DSL-120 side scan sonar. This information conservatively represents an outlay in facilities and science costs well in excess of $100 million. Several archive-related improvement issues have become evident over the past few years. The most critical are: 1. migration and better access to the 35mm Alvin and Jason still images through digitization and proper cataloging with relevant meta-data; 2. assessing Alvin data logger data, migrating data on older media no longer in common use, and properly labeling and evaluating vehicle attitude and navigation data; 3. migrating older Alvin and Jason video data, especially data recorded on Hi-8 tape that is very susceptible to degradation on each replay, to newer digital format media such as DVD; 4. improving the capabilities of the NDSF archives to better serve the increasingly complex needs of the oceanographic community, including researchers involved in focused programs like Ridge2000 and MARGINS, where viable distributed databases in various disciplinary topics will form an important component of the data management structure.

  8. Measurements of 42 Wide CPM Pairs with a CCD

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2015-11-01

    This paper addresses the use of a Skyris 618C color CCD camera as a means of obtaining data for analysis in the measurement of wide common proper motion stars. The equipment setup is described and data collection procedure outlined. Results of the measures of 42 CPM stars are presented, showing the Skyris is a reliable device for the measurement of double stars.
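    The measurements such CCD observations yield are the position angle theta and separation rho of each pair. Assuming a north-up, east-left frame orientation and a known plate scale (both hypothetical here; in practice calibration pairs establish them), the reduction from pixel coordinates is:

```python
import math

def theta_rho(primary, secondary, scale_arcsec_per_px):
    """Position angle (degrees, north through east) and separation
    (arcsec) of a double star from (x, y) pixel coordinates, assuming
    the frame is north-up with east to the left and y increasing
    downward, and a known plate scale."""
    dx = secondary[0] - primary[0]   # +x points west (east is left)
    dy = secondary[1] - primary[1]   # +y points south (y grows downward)
    d_east = -dx
    d_north = -dy
    rho = math.hypot(dx, dy) * scale_arcsec_per_px
    theta = math.degrees(math.atan2(d_east, d_north)) % 360.0
    return theta, rho
```

    A companion one pixel straight "up" in the frame then measures theta = 0 degrees; one pixel to the left measures theta = 90 degrees.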

  9. Dynamic MTF improvement scheme and its validation for CCD operating in TDI mode for Earth imaging applications

    NASA Astrophysics Data System (ADS)

    Dubey, Neeraj; Banerjee, Arup

    2016-05-01

    The paper presents a scheme for improving image contrast in remote sensing images and highlights the novelty of the hardware and software design of the test system developed for measuring the image contrast function. Modulation transfer function (MTF) is the most critical quality element of high-resolution imaging payloads for earth observation based on TDI-CCDs (Time Delayed Integration Charge-Coupled Devices). From the mathematical model, a smear MTF of 65% (35% degradation) is observed. An operating method for the TDI-CCD was then developed with which a motion-smear MTF of 96% is achieved during the imaging operation. As a major part of the validation, a test system for measuring the dynamic MTF of TDI sensors was indigenously designed and developed, consisting of an optical scanning system, TDI-CCD camera drive and video processing electronics, a thermal control system, and a telecentric uniform illumination system. The experimental results confirm that the image quality improvement can be achieved by this method, which is now implemented in the flight-model hardware of the remote sensing payload.
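    The abstract does not give the paper's mathematical model, but the standard model for linear image smear, sketched below, is consistent with the 65% figure quoted: one full pixel of smear during the integration time gives a smear MTF of about 64% at the Nyquist frequency.

```python
import math

def smear_mtf(smear_px, freq_cyc_per_px):
    """MTF degradation from linear image smear of extent d pixels during
    the integration time: MTF(f) = sinc(d*f) = sin(pi*d*f) / (pi*d*f),
    with f the spatial frequency in cycles per pixel."""
    x = math.pi * smear_px * freq_cyc_per_px
    return 1.0 if x == 0.0 else math.sin(x) / x

# One pixel of smear, evaluated at Nyquist (0.5 cycles/pixel):
print(round(smear_mtf(1.0, 0.5), 3))   # → 0.637
```

    Better synchronizing the TDI charge transfer with the image motion shrinks the effective smear extent, which is how an operating method can push the smear MTF toward the 96% the paper reports.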

  10. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital Single-Lens Reflex cameras (DSLRs) have appeared. As a consequence we can buy cameras of higher and higher pixel number, and mass production has resulted in a great reduction of prices. The CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allows images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary work. Nowadays we can find a digital camera in almost every household, and most of these cameras are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.

  11. Characterization of the Series 1000 Camera System

    SciTech Connect

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.

  12. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage over monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the ALTA E47 CCD shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
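    One of the quoted characteristics, the electronic gain in e⁻/ADU, is commonly estimated with the photon-transfer method from a pair of flat fields. The sketch below assumes bias-subtracted flats of equal exposure and negligible read noise; the abstract does not state the paper's actual procedure, and the names are illustrative.

```python
import numpy as np

def gain_from_flat_pair(flat1, flat2):
    """Photon-transfer gain estimate (e-/ADU) from two bias-subtracted
    flat fields of equal exposure.  Shot noise obeys
    variance = mean / gain (in ADU), and differencing the two frames
    cancels the fixed pattern: var(flat1 - flat2) = 2 * mean / gain."""
    mean_signal = 0.5 * (flat1.mean() + flat2.mean())
    return 2.0 * mean_signal / np.var(flat1 - flat2)

# Quick self-check on simulated Poisson flats with a true gain of 2 e-/ADU:
rng = np.random.default_rng(0)
f1 = rng.poisson(10000, (256, 256)) / 2.0
f2 = rng.poisson(10000, (256, 256)) / 2.0
print(gain_from_flat_pair(f1, f2))   # ~2.0
```

    For a colour CMOS sensor the same estimate would be made per Bayer channel, since each colour plane has its own effective gain.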

  13. CCD imaging sensors

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

    1989-01-01

    A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO₂ layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

  14. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  15. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  16. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  17. Colorized linear CCD data acquisition system with automatic exposure control

    NASA Astrophysics Data System (ADS)

    Li, Xiaofan; Sui, Xiubao

    2014-11-01

    Colorized linear cameras deliver superb color fidelity at the fastest line rates in industrial inspection. Their RGB trilinear sensor eliminates image artifacts by placing a separate row of pixels for each color on a single sensor, and the design minimizes the distance between rows to minimize artifacts due to synchronization. In this paper, a high-speed colorized linear CCD data acquisition system was designed to take advantage of the linear CCD sensor μpd3728. The hardware and software design of the FPGA-based system is introduced and the design of the functional modules is described. The whole system is composed of a CCD driver module, a data buffering module, a data processing module and a computer interface module. The image data are transferred to the computer over a Camera Link interface. The system, which automatically adjusts the exposure time of the linear CCD, is realized with a new method: the integration time of the CCD is controlled by a program which, under FPGA control, automatically adjusts the integration time for different illumination intensities and responds quickly to brightness changes. The data acquisition system also offers programmable gains and offsets for each color. The quality of the image can be improved after calibration in the FPGA. The design has high expansibility and application value and can be used in many application situations.
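    The abstract does not detail the exposure-control law. A common proportional scheme for a line-scan sensor, adjusting the integration time toward a target mean pixel level with damping to avoid oscillation, could look like the following sketch; all names, limits and constants are illustrative, not from the paper.

```python
def adjust_integration_time(t_line_us, mean_level, target=128.0,
                            t_min_us=5.0, t_max_us=5000.0, gain=0.5):
    """One step of a simple closed-loop exposure control for a line-scan
    CCD: scale the integration time toward the value that would bring
    the mean pixel level (0-255) to the target, damped by `gain` so the
    loop responds quickly without oscillating, and clamped to the
    sensor's valid integration-time range (microseconds)."""
    if mean_level <= 0:
        return t_max_us                  # no light seen: open fully
    desired = t_line_us * (target / mean_level)
    t_new = t_line_us + gain * (desired - t_line_us)
    return min(max(t_new, t_min_us), t_max_us)
```

    In the paper's system this kind of update would run in FPGA logic line by line; per-color programmable gains and offsets would then be applied downstream of the exposure loop.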

  18. Backside charging of the CCD

    NASA Technical Reports Server (NTRS)

    Janesick, J.; Elliott, T.; Daud, T.; Mccarthy, J.; Blouke, M.

    1985-01-01

    Until recently, the usefulness of the charge coupled device (CCD) as an imaging sensor was thought to be restricted to within rather narrow boundaries of the visible and near IR spectrum. However, since the discovery of backside charging the full potential of CCD performance is now realized. Indeed, the technique of backside charging not only allows the CCD to be used directly in the UV, EUV, and soft X-ray regimes, it has opened up new opportunities in optimizing charge collection processes as well. The technique of backside charging is discussed, and its properties, use, and potential in the future as it applies to the CCD are described.

  19. Video-based beam position monitoring at CHESS

    NASA Astrophysics Data System (ADS)

    Revesz, Peter; Pauling, Alan; Krawczyk, Thomas; Kelly, Kevin J.

    2012-10-01

    CHESS has pioneered the development of X-ray Video Beam Position Monitors (VBPMs). Unlike traditional photoelectron beam position monitors that rely on photoelectrons generated by the fringe edges of the X-ray beam, with VBPMs we collect information from the whole cross-section of the X-ray beam. VBPMs can also give real-time shape/size information. We have developed three types of VBPMs: (1) VBPMs based on helium luminescence from the intense white X-ray beam, in which case the CCD camera views the luminescence from the side. (2) VBPMs based on the luminescence of a thin (~50 micron) CVD diamond sheet as the white beam passes through it; the CCD camera is placed outside the beam-line vacuum and views the diamond fluorescence through a viewport. (3) Scatter-based VBPMs, in which the white X-ray beam passes through a thin graphite filter or Be window and the scattered X-rays create an image of the beam's footprint on an X-ray-sensitive fluorescent screen using a slit placed outside the beam-line vacuum. For all VBPMs we use relatively inexpensive 1.3-megapixel CCD cameras connected via USB to a Windows host for image acquisition and analysis. The VBPM host computers are networked and provide live images of the beam and streams of data about the beam position, profile and intensity to CHESS's signal logging system and to the CHESS operator. The operational use of VBPMs has shown a great advantage over the traditional BPMs by providing direct visual input for the CHESS operator. The VBPM precision in most cases is on the order of ~0.1 micron. On the down side, the data acquisition period (50-1000 ms) is inferior to that of the photoelectron-based BPMs. In the future, with the use of more expensive fast cameras, we will be able to create VBPMs working at the few-hundred-hertz scale.
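    The quantity a VBPM ultimately reports is the beam position, typically the intensity-weighted centroid of the (background-subtracted) beam image; sub-micron precision comes from the weighting over many pixels. A minimal sketch with assumed names; the analysis actually run at CHESS is not described at this level in the abstract.

```python
import numpy as np

def beam_position(image, pixel_size_um, background=None):
    """Intensity-weighted centroid of a beam image.  Returns (x, y) in
    microns relative to the top-left pixel.  An optional background
    frame is subtracted (clipped at zero) before weighting."""
    img = image.astype(float)
    if background is not None:
        img = np.clip(img - background, 0.0, None)
    total = img.sum()
    if total == 0:
        raise ValueError("empty image: no beam signal")
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total
    cy = (ys * img).sum() / total
    return cx * pixel_size_um, cy * pixel_size_um
```

    The same weighted moments extended to second order give the beam size and shape information the abstract mentions.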

  20. An advanced CCD emulator with 32MB image memory

    NASA Astrophysics Data System (ADS)

    O'Connor, P.; Fried, J.; Kotov, I.

    2012-07-01

    As part of the LSST sensor development program we have developed an advanced CCD emulator for testing new multichannel readout electronics. The emulator, based on an Altera Stratix II FPGA for timing and control, produces 4 channels of simulated video waveforms in response to an appropriate sequence of horizontal and vertical clocks. It features 40MHz, 16-bit DACs for reset and video generation, 32MB of image memory for storage of arbitrary grayscale bitmaps, and provision to simulate reset and clock feedthrough ("glitches") on the video channels. Clock inputs are qualified for proper sequences and levels before video output is generated. Binning, region of interest, and reverse clock sequences are correctly recognized and appropriate video output will be produced. Clock transitions are timestamped and can be played back to a control PC. A simplified user interface is provided via a daughter card having an ARM M3 Cortex microprocessor and miniature color LCD display and joystick. The user can select video modes from stored bitmap images, or flat, gradient, bar, chirp, or checkerboard test patterns; set clock thresholds and video output levels; and set row/column formats for image outputs. Multiple emulators can be operated in parallel to simulate complex CCDs or CCD arrays.

  1. STIS-01 CCD Functional

    NASA Astrophysics Data System (ADS)

    Valenti, Jeff

    2001-07-01

    This activity measures the baseline performance and commandability of the CCD subsystem. Only primary amplifier D is used. Bias, Dark, and Flat Field exposures are taken in order to measure read noise, dark current, CTE, and gain. Numerous bias frames are taken to permit construction of "superbias" frames in which the effects of read noise have been rendered negligible. Dark exposures are made outside the SAA. Full frame and binned observations are made, with binning factors of 1x1 and 2x2. Finally, tungsten lamp exposures are taken through narrow slits to confirm the slit positions in the current database. All exposures are internals. This is a reincarnation of SM3A proposal 8502 with some unnecessary tests removed from the program.

  2. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing of a charge injection device (CID) camera using a 244x248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  3. First Carlsberg Meridian Telescope (CMT) CCD Catalogue.

    NASA Astrophysics Data System (ADS)

    Bélizon, F.; Muiños, J. L.; Vallejo, M.; Evans, D. W.; Irwin, M.; Helmer, L.

    2003-11-01

    The Carlsberg Meridian Telescope (CMT) is a telescope owned by Copenhagen University Observatory (CUO). It was installed in the Spanish observatory of El Roque de los Muchachos on the island of La Palma (Canary Islands) in 1984. It is operated jointly by the CUO, the Institute of Astronomy, Cambridge (IoA), and the Real Instituto y Observatorio de la Armada of Spain (ROA) in the framework of an international agreement. From 1984 to 1998 the instrument was equipped with a moving-slit micrometer, and from its observations a series of 11 catalogues was published, `Carlsberg Meridian Catalogue La Palma (CMC No 1-11)'. Since 1997, the telescope has been controlled remotely via the Internet; the three institutions share this remote control in periods of approximately three months. In 1998, the CMT was upgraded by installing a commercial Spectrasource CCD camera as its sensor, to test the feasibility of observing meridian transits in drift-scan mode. Once this was shown to be possible, a second CCD camera, built in the CUO workshop with better performance, was installed in 1999. The Spectrasource camera was loaned to ROA by CUO and is now installed on the San Fernando Automatic Meridian Circle in San Juan (CMASF). In 1999, observations began of a sky survey from -3deg to +30deg in declination. In July 2002, a first release of the survey was published, with the positions of the observed stars in the band between -3deg and +3deg in declination. This oral communication presents this first release of the survey.

  4. Fully depleted back illuminated CCD

    DOEpatents

    Holland, Stephen Edward

    2001-01-01

    A backside illuminated charge coupled device (CCD) is formed of a relatively thick high resistivity photon sensitive silicon substrate, with frontside electronic circuitry, and an optically transparent backside ohmic contact for applying a backside voltage which is at least sufficient to substantially fully deplete the substrate. A greater bias voltage which overdepletes the substrate may also be applied. One way of applying the bias voltage to the substrate is by physically connecting the voltage source to the ohmic contact. An alternate way of applying the bias voltage to the substrate is to physically connect the voltage source to the frontside of the substrate, at a point outside the depletion region. Thus both frontside and backside contacts can be used for backside biasing to fully deplete the substrate. Also, high resistivity gaps around the CCD channels and electrically floating channel stop regions can be provided in the CCD array around the CCD channels. The CCD array forms an imaging sensor useful in astronomy.

  5. Practical performance evaluation of a 10k × 10k CCD for electron cryo-microscopy

    PubMed Central

    Bammes, Benjamin E.; Rochat, Ryan H.; Jakana, Joanita; Chiu, Wah

    2011-01-01

    Electron cryo-microscopy (cryo-EM) images are commonly collected using either charge-coupled devices (CCD) or photographic film. Both film and the current generation of 16 megapixel (4k × 4k) CCD cameras have yielded high-resolution structures. Yet, despite the many advantages of CCD cameras, more than two times as many structures of biological macromolecules have been published in recent years using photographic film. The continued preference for film, especially for subnanometer-resolution structures, may be partially influenced by the finer sampling and larger effective specimen imaging area offered by film. Large-format digital cameras may finally allow CCDs to overtake film as the preferred detector for cryo-EM. We have evaluated a 111-megapixel (10k × 10k) CCD camera with a 9 μm pixel size. The spectral signal-to-noise ratios of low-dose images of carbon film indicate that this detector provides usable signal up to at least 2/5 of the Nyquist frequency, potentially retrievable for 3-D reconstructions of biological specimens, resulting in more than double the effective specimen imaging area of existing 4k × 4k CCD cameras. We verified our estimates using frozen-hydrated ε15 bacteriophage as a biological test specimen with a previously determined structure, yielding a ~7 Å resolution single particle reconstruction from only 80 CCD frames. Finally, we explored the limits of current CCD technology by comparing the performance of this detector to various CCD cameras used for recording data yielding subnanometer-resolution cryo-EM structures submitted to the Electron Microscopy Data Bank (http://www.emdatabank.org/). PMID:21619932
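A common first step in the kind of spectral signal-to-noise analysis described above is a radially averaged power spectrum of a low-dose image. The NumPy sketch below shows that computation in its simplest form; it is a generic illustration, not the authors' exact pipeline, and the bin count is an arbitrary choice:

```python
import numpy as np

def radial_power_spectrum(img, nbins=32):
    """Radially averaged power spectrum of a 2-D image: FFT power binned
    by fractional spatial frequency (1.0 = Nyquist along an image axis)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(y - h / 2, x - w / 2)
    r_frac = r / (min(h, w) / 2)               # 1.0 = Nyquist along an axis
    bins = np.minimum((r_frac * nbins).astype(int), nbins - 1)
    spectrum = np.bincount(bins.ravel(), weights=power.ravel(), minlength=nbins)
    counts = np.bincount(bins.ravel(), minlength=nbins)
    return spectrum / np.maximum(counts, 1)    # mean power per frequency shell
```

Comparing such spectra of carbon-film images against the noise floor is what supports statements like "usable signal up to 2/5 of Nyquist".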

  6. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  7. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  8. System for control of cooled CCD and image data processing for plasma spectroscopy

    SciTech Connect

    Mimura, M.; Kakeda, T.; Inoko, A.

    1995-12-31

    A spectroscopic measurement system with spatial resolution is important for plasma study. This is especially true for measurement of a plasma without axial symmetry, like the LHD plasma. Several years ago, we developed an imaging spectroscopy system using a CCD camera and an image-memory board of a personal computer. It was very powerful for studying plasma-gas interaction phenomena. In that system, however, an ordinary CCD was used, so the dark-current noise of the CCD prevented measurement of faint spectral lines. Recently, cooled CCD systems have become available for high-sensitivity measurement, but such systems are still very expensive. The cooled CCD itself, as an element, can be purchased cheaply, because amateur astronomers have begun to use it to take pictures of heavenly bodies. So we developed an imaging spectroscopy system using such a cheap cooled CCD for plasma experiments.

  9. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy

    SciTech Connect

    Viles, C.L.; Sieracki, M.E.

    1992-02-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 {mu}m) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 {mu}m) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured.
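To illustrate why a global threshold fails for dim picoplankton cells while an edge-strength operator does not, here is a deliberately simplified NumPy sketch: a fixed global threshold counts only the bright cell, while the gradient-magnitude edge strength responds to the boundaries of both the bright and the dim cell. The operator combination in the paper is more sophisticated; the image values here are synthetic:

```python
import numpy as np

def edge_strength(img):
    """Gradient-magnitude edge strength: responds to object boundaries
    regardless of their absolute intensity."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def size_by_global_threshold(img, thresh):
    """Naive global threshold: counts pixels brighter than a fixed level."""
    return int((img > thresh).sum())

# Synthetic field: background 10, one bright 3x3 cell (100), one dim 3x3 cell (20).
img = np.full((20, 20), 10.0)
img[2:5, 2:5] = 100.0      # bright cell
img[10:13, 10:13] = 20.0   # dim cell

n_bright_only = size_by_global_threshold(img, 50)  # counts only the bright cell
es = edge_strength(img)                            # nonzero at BOTH cell boundaries
```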

  10. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  11. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  12. CCD imager with photodetector bias introduced via the CCD register

    NASA Technical Reports Server (NTRS)

    Kosonocky, Walter F. (Inventor)

    1986-01-01

    An infrared charge-coupled-device (IR-CCD) imager uses an array of Schottky-barrier diodes (SBD's) as photosensing elements and uses a charge-coupled-device (CCD) for arranging charge samples supplied in parallel from the array of SBD's into a succession of serially supplied output signal samples. Its sensitivity to infrared (IR) is improved by placing bias charges on the Schottky barrier diodes. Bias charges are transported to the Schottky barrier diodes by a CCD also used for charge sample read-out.

  13. Video Golf

    NASA Technical Reports Server (NTRS)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  14. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results of the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, in contrast, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
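The depth computation of a continuous-wave ToF camera like the one described is compact enough to state directly: the modulated light travels a round trip of 2d, so the measured phase shift maps to depth as d = c·Δφ/(4π·f_mod). A Python sketch of that relation (the underwater speed of light and the 20 MHz modulation frequency are illustrative assumptions, not parameters from the paper):

```python
import math

C_WATER = 2.25e8  # approximate speed of light in water, m/s (assumed value)

def tof_depth(phase_shift_rad, f_mod_hz, c=C_WATER):
    """Depth from the phase shift of modulated light: the round trip
    covers 2*d, so d = c * dphi / (4 * pi * f_mod)."""
    return c * phase_shift_rad / (4 * math.pi * f_mod_hz)

# Example: a pi/2 phase shift at an assumed 20 MHz modulation frequency.
d = tof_depth(math.pi / 2, 20e6)  # = c / (8 * f_mod)
```

The unambiguous range is the depth at Δφ = 2π, which is why the modulation frequency can be chosen freely once the light source and camera are decoupled.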

  15. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). ESO PR Photo 22a/09 The CCD220 detector ESO PR Photo 22b/09 The OCam camera ESO PR Video 22a/09 OCam images "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  16. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up in the EAST tokamak for the study of edge MHD instabilities such as ELMs. To determine 3-D information from CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM study. Using tiles of the passive stabilizers in the tokamak device as the calibration pattern, transformation parameters from a 3-D world coordinate system to a 2-D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length, and the lens distortion. The calibration errors were estimated and the results indicate the reliability of the method used for the camera imaging system. Through the calibration, information about ELM filaments, such as positions and velocities, was obtained from images of H-mode CCD videos. Supported by National Natural Science Foundation of China (No. 11275047), the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000)

  17. CCD evaluation for estimating measurement precision in lateral shearing interferometry

    NASA Astrophysics Data System (ADS)

    Liu, Bingcai; Li, Bing; Tian, Ailing; Li, Baopeng

    2013-06-01

    Because of its larger measurement range for wave-front deviation and the absence of a reference flat, lateral shearing interferometry based on four-step phase shifting has been widely used for wave-front measurement. After installation, shearing interferograms are captured by a CCD camera, and the actual phase data of the wave-front can be calculated by the four-step phase shift algorithm and phase unwrapping. In this processing, the pixel resolution and gray scale of the CCD camera are vital factors for measurement precision. In this paper, based on the structure of a phase-shifting lateral shearing interferometer, the effect of pixel resolution on measurement precision is discussed, and the effect of 8-bit, 12-bit, and 16-bit gray scales on measurement precision is illustrated by simulation.
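The four-step phase-shift algorithm the abstract refers to is standard: four interferograms shifted by π/2 give I_k = A + B·cos(φ + kπ/2), from which the wrapped phase follows as φ = atan2(I₄ − I₂, I₁ − I₃). A minimal per-pixel Python sketch:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four frames shifted by pi/2:
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)

# Check against a known phase: background A, modulation B, phase phi.
A, B, phi = 100.0, 50.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

Both the background A and the modulation B cancel in the ratio, which is what makes the algorithm insensitive to uniform illumination changes; quantization of the frames to 8, 12, or 16 bits is exactly where the gray-scale study above enters.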

  18. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  19. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  20. Enhanced performance CCD output amplifier

    DOEpatents

    Dunham, Mark E.; Morley, David W.

    1996-01-01

    A low-noise FET amplifier is connected to amplify output charge from a charge-coupled device (CCD). The FET has its gate connected to the CCD in common-source configuration for receiving the output charge signal from the CCD and outputs an intermediate signal at the drain of the FET. An intermediate amplifier is connected to the drain of the FET for receiving the intermediate signal and outputting a low-noise signal functionally related to the output charge signal from the CCD. The amplifier is preferably connected as a virtual ground to the FET drain. The inherent shunt capacitance of the FET is selected to be at least equal to the sum of the remaining capacitances.
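The charge-to-voltage relation underlying any CCD output stage is V = q/C, so the total capacitance seen at the sense node directly sets the conversion gain, which is why the capacitance matching described above matters. A small Python sketch of that arithmetic (the 50 fF node capacitance is an illustrative value, not a figure from the patent):

```python
E_CHARGE = 1.602e-19  # electron charge, coulombs

def conversion_gain_uV_per_e(node_capacitance_farads):
    """Sense-node conversion gain in microvolts per electron: V = q / C."""
    return E_CHARGE / node_capacitance_farads * 1e6

# Illustrative 50 fF total node capacitance (assumed, not from the patent).
g = conversion_gain_uV_per_e(50e-15)  # microvolts per electron
```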

  1. Automated characterization of CCD detectors for DECam

    NASA Astrophysics Data System (ADS)

    Kubik, D.; Alvarez, R.; Abbott, T.; Annis, J.; Bonati, M.; Buckley-Geer, E.; Campa, J.; Cease, H.; Chappa, S.; DePoy, D.; Derylo, G.; Diehl, H. T.; Estrada, J.; Flaugher, B.; Hao, J.; Holland, S.; Huffman, D.; Karliner, I.; Kuhlmann, S.; Kuk, K.; Lin, H.; Montes, J.; Roe, N.; Scarpine, V.; Schmidt, R.; Schultz, K.; Shaw, T.; Simaitis, V.; Spinka, H.; Stuermer, W.; Tucker, D.; Walker, A.; Wester, W.

    2010-07-01

    The Dark Energy Survey Camera (DECam) will comprise a mosaic of 74 charge-coupled devices (CCDs). The Dark Energy Survey (DES) science goals set stringent technical requirements for the CCDs. The CCDs are provided by LBNL with valuable cold probe data at 233 K, providing an indication of which CCDs are more likely to pass. After comprehensive testing at 173 K, about half of these qualify as science grade. Testing this large number of CCDs to determine which best meet the DES requirements is a very time-consuming task. We have developed a multistage testing program to automatically collect and analyze CCD test data. The test results are reviewed to select those CCDs that best meet the technical specifications for charge transfer efficiency, linearity, full well capacity, quantum efficiency, noise, dark current, cross talk, diffusion, and cosmetics.
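One of the listed measurements, gain, is classically obtained by the photon-transfer method: for a shot-noise-limited flat field, the gain K (e⁻/ADU) equals mean signal divided by variance, with the variance estimated from a flat-field pair difference so that fixed-pattern noise cancels. A Python sketch with synthetic data (the numbers are illustrative; this is the generic method, not DECam's actual test code):

```python
import random
import statistics

def ptc_gain(flat_a, flat_b):
    """Photon-transfer estimate of CCD gain K (e-/ADU) from a flat pair:
    K = mean signal / shot-noise variance, with the variance taken from
    the pair difference so fixed-pattern noise cancels."""
    mean_signal = (statistics.fmean(flat_a) + statistics.fmean(flat_b)) / 2
    diff = [a - b for a, b in zip(flat_a, flat_b)]
    variance = statistics.pvariance(diff) / 2  # Var(a-b) = 2 * Var(shot)
    return mean_signal / variance

# Synthetic check: ~10000 e- mean signal, ~100 e- shot noise, true gain 4 e-/ADU.
random.seed(0)
flat_a = [random.gauss(10000, 100) / 4 for _ in range(20000)]
flat_b = [random.gauss(10000, 100) / 4 for _ in range(20000)]
k_est = ptc_gain(flat_a, flat_b)  # should be close to 4
```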

  2. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  3. CCD/CMOS hybrid FPA for low light level imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xinqiao; Fowler, Boyd A.; Onishi, Steve K.; Vu, Paul; Wen, David D.; Do, Hung; Horn, Stuart

    2005-08-01

    We present a CCD/CMOS hybrid focal plane array (FPA) for low light level imaging applications. The hybrid approach combines the best of CCD imaging characteristics (e.g. high quantum efficiency, low dark current, excellent uniformity, and low pixel cross talk) with the high speed, low power and ultra-low read noise of CMOS readout technology. The FPA is comprised of two CMOS readout integrated circuits (ROIC) that are bump bonded to a CCD imaging substrate. Each ROIC is an array of Capacitive Transimpedence Amplifiers (CTIA) that connect to the CCD columns via indium bumps. The proposed column parallel readout architecture eliminates the slow speed, high noise, and high power limitations of a conventional CCD. This results in a compact, low power, ultra-sensitive solid-state FPA that can be used in low light level applications such as live-cell microscopy and security cameras at room temperature operation. The prototype FPA has a 1280×1024 format with 12-μm square pixels. Measured dark current is less than 5.8 pA/cm² at room temperature and the overall read noise is as low as 2.9 e⁻ at 30 frames/s.

  4. Scientific CCD characterisation at Universidad Complutense LICA Laboratory

    NASA Astrophysics Data System (ADS)

    Tulloch, S.; Gil de Paz, A.; Gallego, J.; Zamorano, J.; Tapia, Carlos

    2012-07-01

    A CCD test-bench has been built at the Universidad Complutense's LICA laboratory. It is initially intended for commissioning of the MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) instrument but can be considered a general-purpose scientific CCD test-bench. The test-bench uses an incandescent broad-band light source in combination with a monochromator and two filter wheels to provide programmable narrow-band illumination across the visible band. Light from the monochromator can be directed to an integrating sphere for flat-field measurements or sent via a small aperture directly onto the CCD under test for high-accuracy diode-mode quantum efficiency measurements. Point spread function measurements can also be performed by interposing additional optics between the sphere and the CCD under test. The whole system is under LabView control via a clickable GUI. Automated measurement scans of quantum efficiency can be performed, requiring only that the user replace the CCD under test with a calibrated photodiode after each measurement run. A 20 cm diameter cryostat with a 10 cm window and a Brooks Polycold PCC closed-cycle cooler also form part of the test-bench. This cryostat is large enough to accommodate almost all scientific CCD formats and has initially been used to house an E2V CCD230 in order to fully prove the test-bench functionality. This device is read out using an Astronomical Research Cameras controller connected to the UKATC's UCAM data acquisition system.

  5. THE DARK ENERGY CAMERA

    SciTech Connect

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J.; Honscheid, K.; Abbott, T. M. C.; Bonati, M.; Antonik, M.; Brooks, D.; Ballester, O.; Cardiel-Sas, L.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Boprie, D.; Campa, J.; Castander, F. J.; Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
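As a quick consistency check of the quoted numbers, the pixel pitch and plate scale together imply the effective focal length at the corrected prime focus: f = pixel pitch / plate scale (in radians). A small Python sketch, using only values quoted in the abstract:

```python
import math

ARCSEC_PER_RAD = 3600 * 180 / math.pi  # ~206265 arcsec per radian

def focal_length_m(pixel_pitch_m, plate_scale_arcsec_per_px):
    """Effective focal length implied by pixel pitch and plate scale."""
    return pixel_pitch_m * ARCSEC_PER_RAD / plate_scale_arcsec_per_px

# 15 um pixels at 0.263 arcsec/pixel, as quoted in the abstract.
f = focal_length_m(15e-6, 0.263)  # effective focal length in meters
```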

  6. The Dark Energy Camera

    SciTech Connect

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  7. The Dark Energy Camera

    DOE PAGES

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  8. Neutron counting with cameras

    SciTech Connect

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transitions smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, and increased frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
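The per-frame discrimination idea, thresholding each frame well above the noise and treating each connected bright region as one neutron event, can be sketched in a few lines of Python. This is a generic connected-component counter under an assumed threshold, not the authors' actual processing chain:

```python
def count_events(frame, threshold):
    """Count isolated above-threshold blobs in one frame, treating each
    4-connected region as a single neutron event."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    events = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                events += 1
                stack = [(r, c)]  # flood-fill the whole blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and frame[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return events
```

At low rates each blob is one event; as blobs begin to overlap at higher rates, summed intensity takes over, which is the smooth transition to integrating imaging described above.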

  9. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  10. CCD Photometer Installed on the Telescope ZEISS-600 of the Shamakhy Astrophysical Observatory: I. Adjustment of the CCD Photometer with the ZEISS-600 Optics

    NASA Astrophysics Data System (ADS)

    Lyuty, V. M.; Abdullayev, B. I.; Alekberov, I. A.; Gulmaliyev, N. I.; Mikayilov, Kh. M.; Rustamov, B. N.

    2009-12-01

    A short description of the optical and electrical layout of the CCD photometer with camera U-47, installed at the Cassegrain focus of the ZEISS-600 telescope of the ShAO NAS Azerbaijan, is provided. A focal reducer with a reduction factor of 1.7 is applied. The equivalent focal lengths of the telescope with the focal reducer are calculated. General calculations of the optimum distance from the focal plane and of the sizes of the photometer's optical filters are presented.
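The equivalent focal length with the reducer is a one-line calculation. In this sketch the native Cassegrain focal length is an assumed round value, not the documented ZEISS-600 figure:

```python
# Equivalent focal length with a focal reducer (sketch). The native
# focal length is an assumed round value (f/12 on a 600 mm aperture),
# not the documented ZEISS-600 figure.
def reduced_focal_length(native_f_mm, reduction_factor):
    """Focal reducer shortens the effective focal length by its factor."""
    return native_f_mm / reduction_factor

native = 7200.0  # mm, assumed
print(round(reduced_focal_length(native, 1.7), 1))  # -> 4235.3
```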

  11. Streak camera dynamic range optimization

    SciTech Connect

    Wiedwald, J.D.; Lerche, R.A.

    1987-09-01

    The LLNL optical streak camera is used by the Laser Fusion Program in a wide range of applications. Many of these applications require a large recorded dynamic range. Recent work has focused on maximizing the dynamic range of the streak camera recording system. For our streak cameras, image intensifier saturation limits the upper end of the dynamic range. We have developed procedures to set the image intensifier gain such that the system dynamic range is maximized. Specifically, the gain is set such that a single streak tube photoelectron is recorded with an exposure of about five times the recording system noise. This ensures detection of single photoelectrons, while not consuming intensifier or recording system dynamic range through excessive intensifier gain. The optimum intensifier gain has been determined for two types of film and for a lens-coupled CCD camera. We have determined that by recording the streak camera image with a CCD camera, the system is shot-noise limited up to the onset of image intensifier nonlinearity. When recording on film, the film determines the noise at high exposure levels. There is discussion of the effects of slit width and image intensifier saturation on dynamic range. 8 refs.

  12. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. 
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable
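The area-correlation step mentioned above can be sketched as a normalized cross-correlation search for the shift between overlapping frames. This is a generic illustration, not the system's actual implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def best_offset(ref, img, patch=16, search=4):
    """Find the (dy, dx) shift of `img` relative to `ref` that maximizes
    NCC of a central patch -- the area-correlation step of mosaicking."""
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    template = ref[cy:cy + patch, cx:cx + patch]
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[cy + dy:cy + dy + patch, cx + dx:cx + dx + patch]
            score = ncc(template, cand)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
base = rng.random((64, 64))
shifted = np.roll(base, shift=(2, -3), axis=(0, 1))  # simulate camera motion
print(best_offset(base, shifted))  # -> (2, -3)
```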

  13. LSST camera readout chip ASPIC: test tools

    NASA Astrophysics Data System (ADS)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

    The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The ''Correlated Double Sampling'' technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called ''Dual Slope Integrator'' method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization. Other requirements are that each chip should process the output of half a CCD, that is, 8 channels, and should operate at 173 K. A specific Back End board has been designed especially for lab test purposes. It manages the clock signals, digitizes the analog differential outputs of the ASPIC and stores data into a memory. It contains 8 ADCs (18 bits), 512 kwords of memory and a USB interface. An FPGA manages all signals from/to all components on board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers define the various test parameters of the ASPIC. A LabVIEW GUI allows loading or updating these registers and checking proper operation. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperature. At present, the ASPIC, Back-End board and CCD detectors are being integrated to perform a characterization of the whole readout chain.
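The dual-slope integrator realizes correlated double sampling in the analog domain; its effect can be illustrated digitally by averaging the reset and video levels and taking the difference, which cancels the slowly varying pedestal. A sketch with illustrative numbers:

```python
import numpy as np

def dual_slope_cds(samples_reset, samples_signal):
    """Correlated double sampling (digital analogue of the dual-slope
    integrator): average the reset level and the video level and take
    the difference, cancelling the CCD output's slowly varying offset."""
    return np.mean(samples_signal) - np.mean(samples_reset)

rng = np.random.default_rng(2)
offset = 500.0        # reset pedestal in ADU (illustrative)
true_signal = 120.0   # charge-induced step in ADU (illustrative)
reset = offset + rng.normal(0, 1, 100)
video = offset + true_signal + rng.normal(0, 1, 100)
print(round(dual_slope_cds(reset, video)))  # -> ~120
```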

  14. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    The CCD plane of the space tridimensional mapping camera can deviate from the focal plane (including deviation caused by a change in camera focal length) under space-environment conditions and the vibration and shock of launch, and image resolution then degrades because of defocus. For a tridimensional mapping camera, variations in the principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is to calibrate the position of the CCD plane against the code of a photoelectric encoder, under vacuum and over the focusing range; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a stepper motor compensates the defocus of the CCD plane. However, if the camera focal length itself changes under the vibration and shock of launch, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism is put forward: a quadrant detector measures the astigmatism caused by the deviation of the CCD plane, and, referring to the calibrated relation between the CCD plane position and the astigmatism, the deviation vector of the CCD plane is obtained. This method accounts for all factors causing deviation of the CCD plane. Experimental results show that the focusing resolution of the mapping camera's focusing mechanism based on the astigmatic method can reach 0.25 μm.
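A common way to turn quadrant-detector readings into a focus error signal is the normalized diagonal difference; this is a generic astigmatic-method sketch, not the paper's exact calibration:

```python
def focus_error_signal(a, b, c, d):
    """Astigmatic focus error from a quadrant detector (generic sketch).
    a..d are the four quadrant intensities; the normalized diagonal
    combination changes sign as the astigmatic spot ellipse rotates
    through best focus."""
    total = a + b + c + d
    return ((a + c) - (b + d)) / total if total else 0.0

# Spot elongated along one diagonal -> defocus in one direction
print(round(focus_error_signal(0.4, 0.1, 0.4, 0.1), 3))  # -> 0.6
# At best focus the spot is round and the signal is zero
print(focus_error_signal(0.25, 0.25, 0.25, 0.25))        # -> 0.0
```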

  15. Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.

    1981-01-01

    The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

  16. Head-free, remote eye-gaze detection system based on pupil-corneal reflection method with easy calibration using two stereo-calibrated video cameras.

    PubMed

    Ebisawa, Yoshinobu; Fukumoto, Kiyotaka

    2013-10-01

    We have developed a pupil-corneal reflection method-based gaze detection system, which allows large head movements and achieves easy gaze calibration. This system contains two optical systems consisting of components such as a camera and a near-infrared light source attached to the camera. The light source has two concentric LED rings with different wavelengths. The inner and outer rings generate bright and dark pupil images, respectively. The pupils are detected from a difference image created by subtracting the bright and dark pupil images. The light source also generates the corneal reflection. The 3-D coordinates of the pupils are determined by the stereo matching method using the two optical systems. The vector from the corneal reflection center to the pupil center in the camera image is denoted r. The angle between the line of sight and the line passing through the pupil center and the camera (light source) is denoted θ. The relationship θ = k|r| is assumed, where k is a constant. The theory implies that head movement of the user is allowed and facilitates the gaze calibration procedure. In the automatic calibration method, k is automatically determined while the user looks around on the PC screen without fixating on any specific calibration target. In the one-point calibration method, the user is asked to fixate on one calibration target on the PC screen in order to correct the difference between the optical and visual axes. In the two-point calibration method, in order to correct the nonlinear relationship between θ and |r|, the user is asked to fixate on two targets. The experimental results show that the three proposed calibration methods improve the precision of gaze detection step by step. In addition, the average gaze error in the visual angle is less than 1° for the seven head positions of the user.
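The calibration model θ = k|r| makes the arithmetic almost trivial: one fixation yields k, after which any measured |r| maps to a gaze angle. The numbers in this sketch are hypothetical:

```python
def calibrate_k(theta_deg, r_px):
    """One-point calibration of the model theta = k * |r| (sketch)."""
    return theta_deg / r_px

def gaze_angle(k, r_px):
    """Gaze angle predicted from a measured reflection-to-pupil vector."""
    return k * r_px

# Hypothetical calibration: a target at 10 deg produced |r| = 40 px
k = calibrate_k(10.0, 40.0)
print(gaze_angle(k, 25.0))  # -> 6.25 (degrees)
```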

  17. Use of a high-resolution profiling sonar and a towed video camera to map a Zostera marina bed, Solent, UK

    NASA Astrophysics Data System (ADS)

    Lefebvre, A.; Thompson, C. E. L.; Collins, K. J.; Amos, C. L.

    2009-04-01

    Seagrasses are flowering plants that develop into extensive underwater meadows and play a key role in the coastal ecosystem. In the last few years, several techniques have been developed to map and monitor seagrass beds in order to protect them. Here, we present the results of a survey using a profiling sonar, the Sediment Imager Sonar (SIS), and a towed video sledge to study a Zostera marina bed in the Solent, southern UK. The survey aimed to test the instruments for seagrass detection and to describe the area for the first time. On the acoustic data, the bed produced the strongest backscatter along a beam. A high backscatter above the bottom indicated the presence of seagrass. The results of an algorithm developed to detect seagrass from the sonar data were tested against video data. Four parameters were calculated from the SIS data: water depth, a Seagrass Index (average backscatter 10-15 cm above the bed), canopy height (height above the bed where the backscatter crosses a threshold limit) and patchiness (percentage of beams in a sweep where the backscatter 10-15 cm above the bed is greater than a threshold limit). From the video, Zostera density was estimated together with macroalgae abundance and bottom type. Patchiness calculated from the SIS data was strongly correlated to seagrass density evaluated from the video, indicating that this parameter could be used for seagrass detection. The survey area has been classified based upon seagrass density, macroalgae abundance and bottom type. Only a small area was occupied by a dense canopy whereas most of the survey area was characterised by patchy seagrass. Results indicated that Zostera marina developed only on sandy bottoms and was not found in regions of gravel. Furthermore, it was limited to a depth shallower than 1.5 m below the level of Lowest Astronomical Tide and present in small patches across the intertidal zone. The average canopy height was 15 cm and the highest density was 150 shoots m⁻².
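The patchiness parameter defined above — the percentage of beams whose backscatter 10-15 cm above the bed exceeds a threshold — can be sketched as follows (array shapes and the threshold are illustrative):

```python
import numpy as np

def patchiness(sweep, heights_cm, threshold):
    """Percentage of beams whose mean backscatter 10-15 cm above the bed
    exceeds a threshold (sketch of the paper's patchiness parameter).
    sweep: (n_beams, n_bins) backscatter; heights_cm: bin heights above bed."""
    band = (heights_cm >= 10) & (heights_cm <= 15)
    band_mean = sweep[:, band].mean(axis=1)
    return 100.0 * np.count_nonzero(band_mean > threshold) / len(sweep)

heights = np.arange(0, 30, 1.0)      # 1 cm bins above the bed
sweep = np.ones((10, 30)) * 5.0      # quiet background (illustrative units)
sweep[:4, 10:16] = 50.0              # strong canopy echo on 4 of 10 beams
print(patchiness(sweep, heights, threshold=20))  # -> 40.0
```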

  18. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Howard, Richard T. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
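The range calculation described is classic camera-laser triangulation. A sketch under assumed numbers (the baseline, focal length in pixels, and spot offset are all hypothetical):

```python
def range_from_spot(baseline_m, focal_px, spot_offset_px):
    """Parallel-axis laser triangulation (generic sketch): a laser mounted
    a known baseline from the camera projects a spot; the spot's pixel
    offset from the optical axis gives range = b / tan(alpha), with
    tan(alpha) = offset / focal length (both in pixels)."""
    return baseline_m * focal_px / spot_offset_px

# Hypothetical numbers: 10 cm baseline, 800 px focal length, 16 px offset
print(range_from_spot(0.10, 800.0, 16.0))  # -> 5.0 (meters)
```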

  19. On the Development of a Digital Video Motion Detection Test Set

    SciTech Connect

    Pritchard, Daniel A.; Vigil, Jose T.

    1999-06-07

    This paper describes the current effort to develop a standardized data set, or suite of digital video sequences, that can be used for test and evaluation of digital video motion detectors (VMDs) for exterior applications. We have drawn from an extensive video database of typical application scenarios to assemble a comprehensive data set. These data, some existing for many years on analog videotape, have been converted to a reproducible digital format and edited to generate test sequences several minutes long for many scenarios. Sequences include non-alarm video, intrusions and nuisance alarm sources, taken with a variety of imaging sensors including monochrome CCD cameras and infrared (thermal) imaging cameras, under a variety of daytime and nighttime conditions. The paper presents an analysis of the variables and estimates the complexity of a thorough data set. Some of this video test data has been digitized for CD-ROM storage and playback. We are considering developing a DVD disk for possible use in screening and testing VMDs prior to government testing and deployment. In addition, this digital video data may be used by VMD developers for further refinement or customization of their product to meet specific requirements. These application scenarios may also be used to define the testing parameters for future procurement qualification. A personal computer may be used to play back either the CD-ROM or the DVD video data. A consumer electronics-style DVD player may be used to replay the DVD disk. This paper also discusses various aspects of digital video storage, including formats, resolution, CD-ROM and DVD storage capacity, editing and playback.

  20. Observations with the Real Instituto y Observatorio de la Armada CCD transit circle in Argentina

    NASA Astrophysics Data System (ADS)

    Muiños, J. I.; Belizón, F.; Vallejo, M.; Mallamaci, C.; Pérez, J. A.

    The Real Instituto y Observatorio de la Armada (ROA) meridian circle was moved to the Estación de Altura Carlos Ulrrico Cesco in the República Argentina in 1996. Until November 1999 the observations were carried out with a moving-slit micrometer. In spring 2001 the results of these observations were published, forming the first Hispano-Argentinian Meridian Catalogue (HAMC). In December 1999 a SpectraSource CCD camera of 1552×1024 pixels of 9 μm was installed. The CCD camera observes in drift-scan mode. A survey of the southern hemisphere is being observed from +3° to -60° of declination. This contribution presents a description of the telescope and the automatic control system, the results of observations carried out with the slit micrometer, the observational and preliminary reduction techniques with the CCD camera, the present state of the southern hemisphere survey, and the future possibilities.

  1. STS-134 Launch Composite Video Comparison

    NASA Video Gallery

    A side-by-side comparison video shows a one-camera view of the STS-134 launch (left) with the six-camera composited view (right). Imaging experts funded by the Space Shuttle Program and located at ...

  2. PAU camera: detectors characterization

    NASA Astrophysics Data System (ADS)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels, each with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), of which 40 are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity vs. light stimulus, the full-well capacity and the cosmetic defects; and measurements of the read-out noise, the dark current, the stability vs. temperature and the light remanence.
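The photon transfer curve method mentioned above extracts the electronic gain from the mean and variance of flat-field pairs. A simulated sketch with illustrative numbers:

```python
import numpy as np

def ptc_gain(flat1, flat2, bias_level=0.0):
    """Electronic gain (e-/ADU) from a pair of identical flat fields
    (photon-transfer method, sketch): gain = signal / variance, with the
    variance taken from the difference image so fixed pattern cancels."""
    signal = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
    var = np.var(flat1 - flat2) / 2.0  # difference doubles shot variance
    return signal / var

rng = np.random.default_rng(3)
gain_true = 2.0                     # e-/ADU (illustrative)
shape = (256, 256)
f1 = rng.poisson(20000, shape) / gain_true + rng.normal(0, 2, shape)
f2 = rng.poisson(20000, shape) / gain_true + rng.normal(0, 2, shape)
print(round(ptc_gain(f1, f2), 1))   # -> ~2.0
```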

  3. A CCD image transducer and processor suitable for space flight. [satellite borne solar telescope instrumentation

    NASA Technical Reports Server (NTRS)

    Michels, D. J.

    1975-01-01

    A satellite-borne extreme ultraviolet solar telescope makes use of CCD area arrays for both image readout and onboard data processing. The instrument is designed to view the inner solar corona in the wavelength band 170-630 Å, and the output video stream may be selected by ground command to present the coronal scene, or the time-rate-of-change of the scene. Details of the CCD application to onboard image processing are described, and a discussion of the processor's potential for telemetry bandwidth compression is included. Optical coupling methods, data storage requirements, spatial and temporal resolution, and nonsymmetry of resolution (pitch) in the CCD are discussed.

  4. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. 
A low-volume array of such penetrator cameras could be deployed from an

  5. High-strain-rate fracture behavior of steel: the new application of a high-speed video camera to the fracture initiation experiments of steel

    NASA Astrophysics Data System (ADS)

    Suzuki, Goro; Ichinose, Kensuke; Gomi, Kenji; Kaneda, Teruo

    1997-12-01

    High-speed event capturing was conducted to determine the fracture initiation load of a hot-rolled steel under rapid loading conditions. The loading tests were carried out on compact specimens which were a single edge-notched and fatigue cracked plate loaded in tension. The impact velocities in the tests were 0.1 - 5.0 m/s. The influences of the impact velocity on the fracture initiation load were confirmed. The new application of a high-speed camera to the fracture initiation experiments has been confirmed.

  6. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.

  7. Pinhole Camera For Viewing Electron Beam Materials Processing

    NASA Astrophysics Data System (ADS)

    Rushford, M. C.; Kuzmenko, P. J.

    1986-10-01

    A very rugged, compact (4x4x10 inches), gas-purged "PINHOLE CAMERA" has been developed for viewing electron beam materials processing (e.g. melting or vaporizing metal). The video image is computer processed, providing dimensional and temperature measurements of objects within the field of view, using an IBM PC. The "pinhole camera" concept is similar to a TRW optics system for viewing into a coal combustor through a 2 mm hole. Gas is purged through the hole to repel particulates from optical surfaces. In our system light from the molten metal passes through the 2 mm hole ("PINHOLE"), reflects off an aluminum-coated glass substrate and passes through a window into a vacuum-tight container holding the camera and optics at atmospheric pressure. The mirror filters out X rays, which pass through the Al layer and are absorbed in the glass mirror substrate. Since metallic coatings are usually reflective, the image quality is not severely degraded by small amounts of vapor that overcome the gas purge to reach the mirror. Coating thicknesses of up to 2 microns can be tolerated. The mirror is the only element needing occasional servicing. We used a telescope eyepiece as a convenient optical design, but with the traditional optical path reversed. The eyepiece images a scene through a small entrance aperture onto an image plane where a CCD camera is placed. Since the iris of the eyepiece is fixed and the scene intensity varies, it was necessary to employ a variable neutral-density filter for brightness control. Devices used for this purpose include a PLZT light valve from Motorola, mechanically rotated linear polarizer sheets, and nematic liquid crystal light valves. These were placed after the mirror and entrance aperture but before the lens to operate as a voltage-variable neutral-density filter. The molten metal surface temperature being viewed varies from 4000 to 1200 K. The resultant intensity change (at 488 nm with 10 nm bandwidth) is seven orders of magnitude. This
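The seven-orders-of-magnitude figure follows directly from the Planck radiance at 488 nm evaluated at the two temperature extremes:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_radiance(wavelength_m, temp_k):
    """Planck spectral radiance, W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / (math.exp(x) - 1)

wl = 488e-9
ratio = planck_radiance(wl, 4000) / planck_radiance(wl, 1200)
print(f"{ratio:.1e}")  # roughly 3e7, i.e. about seven orders of magnitude
```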

  8. CCD readout electronics for the Subaru Prime Focus Spectrograph

    NASA Astrophysics Data System (ADS)

    Hope, Stephen C.; Gunn, James E.; Loomis, Craig P.; Fitzgerald, Roger E.; Peacock, Grant O.

    2014-07-01

    The following paper details the design for the CCD readout electronics for the Subaru Telescope Prime Focus Spectrograph (PFS). PFS is designed to gather spectra from 2394 objects simultaneously, covering wavelengths that extend from 380 nm to 1260 nm. The spectrograph is comprised of four identical spectrograph modules, each collecting roughly 600 spectra. The spectrograph modules provide simultaneous wavelength coverage over the entire band through the use of three separate optical channels: blue, red, and near infrared (NIR). A camera in each channel images the multi-object spectra onto a 4k × 4k, 15 μm pixel, detector format. The two visible cameras use a pair of Hamamatsu 2k × 4k CCDs with readout provided by custom electronics, while the NIR camera uses a single Teledyne HgCdTe 4k × 4k detector and Teledyne's ASIC Sidecar to read the device. The CCD readout system is a custom design comprised of three electrical subsystems - the Back End Electronics (BEE), the Front End Electronics (FEE), and a Pre-amplifier. The BEE is an off-the-shelf PC104 computer, with an auxiliary Xilinx FPGA module. The computer serves as the main interface to the Subaru messaging hub and controls other peripheral devices associated with the camera, while the FPGA is used to generate the necessary clocks and transfer image data from the CCDs. The FEE board sets clock biases, substrate bias, and CDS offsets. It also monitors bias voltages, offset voltages, power rail voltage, substrate voltage and CCD temperature. The board translates LVDS clock signals to biased clocks and returns digitized analog data via LVDS. Monitoring and control messages are sent from the BEE to the FEE using a standard serial interface. The Pre-amplifier board resides behind the detectors and acts as an interface to the two Hamamatsu CCDs. The Pre-amplifier passes clocks and biases to the CCDs, and analog CCD data is buffered and amplified prior to being returned to the FEE. In this paper we describe the

  9. Instrumentation for the U.S. Naval Observatory CCD Astrograph

    NASA Astrophysics Data System (ADS)

    Rafferty, T. J.; Germain, M. E.; Zacharias, N.

    The U.S. Naval Observatory CCD Astrograph will start an observing program in mid-1997 on Cerro Tololo (CTIO) in Chile to produce a high-density, high-accuracy astrometric catalog of the southern hemisphere stars down to 16th magnitude. The program will be done using a robotic, refracting telescope with an 8-inch five-element red-corrected lens. A Kodak 4k x 4k (9 micron pixels) CCD camera will allow a one-square-degree field of view, which is large enough to provide the necessary reference stars. The dome rotation, setting and clamping the X-Y slide for the guidescope, setting and clamping the focus, telescope setting, and use of a Hartmann screen for determining the focus will be done automatically. The system is controlled via an embedded single-board computer. A host PC sends commands to the embedded computer, receives status information, controls the camera, saves the images to disk, and does the on-line reduction of the previous CCD frame.

  10. Ultra-clean CCD Cryostats

    NASA Astrophysics Data System (ADS)

    Deiries, S.; Iwert, O.; Cavadore, C.; Geimer, C.; Hummel, E.

    A reproducible method to achieve ultra-clean CCD cryostats is presented, including a list of suitable materials and necessary treatments. In addition, proper handling under clean-room conditions and suitable molecular sieves to eliminate contamination on the detector surface in cold cryostats for years are described.

  11. CCD Photometry of bright stars using objective wire mesh

    SciTech Connect

    Kamiński, Krzysztof; Zgórz, Marika; Schwarzenberg-Czerny, Aleksander

    2014-06-01

    Obtaining accurate photometry of bright stars from the ground remains problematic due to the danger of overexposing the target and/or the lack of suitable nearby comparison stars. The century-old method of using objective wire mesh to produce multiple stellar images seems promising for the precise CCD photometry of such stars. Furthermore, our tests on β Cep and its comparison star, differing by 5 mag, are very encouraging. Using a CCD camera and a 20 cm telescope with the objective covered by a plastic wire mesh, in poor weather conditions, we obtained differential photometry with a precision of 4.5 mmag per two minute exposure. Our technique is flexible and may be tuned to cover a range as big as 6-8 mag. We discuss the possibility of installing a wire mesh directly in the filter wheel.

  12. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  13. Spectral characterization of an ophthalmic fundus camera

    NASA Astrophysics Data System (ADS)

    Miller, Clayton T.; Bassi, Carl J.; Brodsky, Dale; Holmes, Timothy

    2010-02-01

    A fundus camera is an optical system designed to illuminate and image the retina while minimizing stray light and backreflections. Modifying such a device requires characterization of the optical path in order to meet the new design goals and avoid introducing problems. This work describes the characterization of one system, the Topcon TRC-50F, necessary for converting this camera from film photography to spectral imaging with a CCD. This conversion consists of replacing the camera's original xenon flash tube with a monochromatic light source and the film back with a CCD. A critical preliminary step of this modification is determining the spectral throughput of the system, from source to sensor, and ensuring there are sufficient photons at the sensor for imaging. This was done for our system by first measuring the transmission efficiencies of the camera's illumination and imaging optical paths with a spectrophotometer. Combining these results with existing knowledge of the eye's reflectance, a relative sensitivity profile is developed for the system. Image measurements from a volunteer were then made using a few narrowband sources of known power and a calibrated CCD. With these data, a relationship between photoelectrons/pixel collected at the CCD and narrowband illumination source power is developed.
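The final step described — relating photoelectrons per pixel to narrowband source power at the sensor — is a simple photon-budget calculation. All numbers in this sketch are hypothetical, not the paper's measurements:

```python
H, C = 6.626e-34, 2.998e8  # Planck constant, speed of light (SI)

def photoelectrons(power_w, exposure_s, wavelength_m, qe, n_pixels):
    """Expected photoelectrons per pixel for narrowband illumination of
    known power reaching the sensor (sketch; uniform spread over the
    illuminated pixels is assumed)."""
    photons = power_w * exposure_s * wavelength_m / (H * C)
    return photons * qe / n_pixels

# Hypothetical: 1 nW at 550 nm spread over 1e5 pixels, 100 ms, QE 0.4
print(round(photoelectrons(1e-9, 0.1, 550e-9, 0.4, 1e5)))  # -> ~1107 e-/pixel
```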

  14. STS-135 Fused Launch Video

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video of the STS-135 launch by merging images taken by a set of six cameras capturing fi...

  15. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to perform several tests verifying the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.

  16. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The centroid coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
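    A minimal sketch of the centroid step described above: an intensity-weighted mean over a synthetic beam image. The frame here is synthetic, and the patent's actual processing-board algorithm (including the Bernoulli-trial noise refinement) is not reproduced.

```python
import numpy as np

def beam_centroid(frame):
    """Return the (row, col) intensity-weighted centroid of a 2-D frame."""
    frame = np.asarray(frame, dtype=float)
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return (rows * frame).sum() / total, (cols * frame).sum() / total

# Synthetic Gaussian beam centred at row 120, column 80
y, x = np.indices((240, 320))
frame = np.exp(-((y - 120.0) ** 2 + (x - 80.0) ** 2) / (2 * 15.0 ** 2))
cy, cx = beam_centroid(frame)
print(round(cy, 1), round(cx, 1))  # close to 120.0 80.0
```

A controller loop would compare these coordinates against the target alignment and drive the stepper motors to null the difference.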

  17. Influence of the pointing direction and detector sensitivity variations on the detection rate of a double station meteor camera

    NASA Astrophysics Data System (ADS)

    Albin, T.; Koschny, D.; Drolshagen, G.; Soja, R.; Srama, R.; Poppe, B.

    2015-01-01

    The Canary Islands Long-Baseline Observatory (CILBO) is a double-station meteor observation site on Tenerife and La Palma (Koschny et al., 2013; Koschny et al., 2014). Meteors are detected in the 40 ms video frames of the two identically built cameras using MetRec (Molau, 1999). MOTS (version 3, Koschny & Diaz, 2002) is used to determine the meteor trajectories of double-station observations. First scientific results regarding the velocity distribution and meteoroid flux have been published by Drolshagen et al., 2014 and Ott et al., 2014. Both studies found effects related to the Apex direction, such as an increasing number of detections in the morning hours. Sporadic meteors from the Apex introduce additional observational bias, including in the velocity-magnitude domain and in the determination of meteor trail lengths. We show how the detection threshold conditions vary with pointing direction for both CILBO cameras. The angular velocity distribution of the meteors depends on the camera orientation: meteors with a smaller angular velocity illuminate fewer CCD pixels in the same time interval than faster meteors, giving a higher signal-to-noise ratio and consequently better detection threshold conditions. Additionally, we analyzed the distribution of detections within the field of view of the CILBO cameras and quantified this effect, which can be attributed mainly to vignetting in the wide-angle system.

  18. Astronomical CCD observing and reduction techniques

    NASA Technical Reports Server (NTRS)

    Howell, Steve B. (Editor)

    1992-01-01

    CCD instrumentation and techniques in observational astronomy are surveyed. The general topics addressed include: history of large array scientific CCD imagers; noise sources and reduction processes; basic photometry techniques; introduction to differential time-series astronomical photometry using CCDs; 2D imagery; point source spectroscopy; extended object spectrophotometry; introduction to CCD astrometry; solar system applications for CCDs; CCD data; observing with infrared arrays; image processing, data analysis software, and computer systems for CCD data reduction and analysis. (No individual items are abstracted in this volume)

  19. The PS1 Gigapixel Camera

    NASA Astrophysics Data System (ADS)

    Tonry, John L.; Isani, S.; Onaka, P.

    2007-12-01

    The world's largest and most advanced digital camera has been installed on the Pan-STARRS-1 (PS1) telescope on Haleakala, Maui. Built at the University of Hawaii at Manoa's Institute for Astronomy (IfA) in Honolulu, the gigapixel camera will capture images that will be used to scan the skies for killer asteroids, and to create the most comprehensive catalog of stars and galaxies ever produced. The CCD sensors at the heart of the camera were developed in collaboration with Lincoln Laboratory of the Massachusetts Institute of Technology. The image area, which is about 40 cm across, contains 60 identical silicon chips, each of which contains 64 independent imaging circuits. Each of these imaging circuits contains approximately 600 x 600 pixels, for a total of about 1.4 gigapixels in the focal plane. The CCDs themselves employ the innovative technology called "orthogonal transfer." Splitting the image area into about 4,000 separate regions in this way has three advantages: data can be recorded more quickly, saturation of the image by a very bright star is confined to a small region, and any defects in the chips affect only a small part of the image area. The CCD camera is controlled by an ultrafast 480-channel control system developed at the IfA. The individual CCD cells are grouped in 8 x 8 arrays on a single silicon chip called an orthogonal transfer array (OTA), which measures about 5 cm square. There are a total of 60 OTAs in the focal plane of each telescope.
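    The pixel counts quoted above can be checked with a line of arithmetic:

```python
# 60 OTA chips, each with 64 imaging circuits ("cells")
# of roughly 600 x 600 pixels.
chips = 60
cells_per_chip = 64
pixels_per_cell = 600 * 600

total_cells = chips * cells_per_chip
total_pixels = chips * cells_per_chip * pixels_per_cell

print(total_cells)         # 3840, i.e. "about 4,000 separate regions"
print(total_pixels / 1e9)  # 1.3824, i.e. "about 1.4 gigapixels"
```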

  20. SCUBA-2 Ccd-Style Imaging for the JCMT

    NASA Astrophysics Data System (ADS)

    Ellis, Maureen

    2005-01-01

    SCUBA-2 will replace SCUBA (Submillimetre Common User Bolometer Array) on the James Clerk Maxwell Telescope in 2006 and will be the first CCD-style camera for submillimetre astronomy. The instrument will simultaneously image at 850 and 450 microns using two focal plane arrays of 5120 pixels each. SCUBA-2 will map the submillimetre sky 1000 times faster than SCUBA to the same signal-to-noise ratio. This paper introduces the detector technology and the challenges faced in reading out a detector array cooled to ~120 mK.

  1. Design and operational characteristics of a PV 001 image tube incorporated with EB CCD readout

    NASA Astrophysics Data System (ADS)

    Bryukhnevich, Gennadii I.; Dalinenko, Ilia N.; Ivanov, K. N.; Kaidalov, S. A.; Kuz'min, G. A.; Moskalev, B. B.; Naumov, Sergei K.; Pischelin, E. V.; Postovalov, Valdis E.; Prokhorov, Alexander M.; Schelev, Mikhail Y.

    1991-06-01

    A luminescence screen was replaced with a thinned, backside-illuminated, electron-bombarded (EB) CCD in a well-known PV 001 streak/shutter image converter tube. The tube was mounted into an experimental camera prototype for measurement of its main technical characteristics. Under EB CCD readout operation in a free-scanning, slow-speed mode, the overall system spatial resolution was higher than 40 lp/mm at 10% MTF, and the linear part of the light transfer function was not less than 130. In streak mode the PV 001/EB CCD image tube exhibited a threshold sensitivity of not less than 10^-10 J/cm^2 when recording 40 ps, 850 nm radiation pulses from a semiconductor laser. The preliminary results indicate that the PV 001/EB CCD image tube has quite a stable infrared sensitivity of its S1 photocathode.

  2. Ground-based observations of 951 Gaspra: CCD lightcurves and spectrophotometry with the Galileo filters

    NASA Technical Reports Server (NTRS)

    Mottola, Stefano; Dimartino, M.; Gonano-Beurer, M.; Hoffmann, H.; Neukum, G.

    1992-01-01

    This paper reports the observations of 951 Gaspra carried out at the European Southern Observatory (La Silla, Chile) during the 1991 apparition, using the DLR CCD Camera equipped with a spare set of the Galileo SSI filters. Time-resolved spectrophotometric measurements are presented. The occurrence of spectral variations with rotation suggests the presence of surface variegation.

  3. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results combining two fields of recent interest: 1) Compressed imaging (CI), a joint sensing and compression process that exploits the large redundancy in typical images in order to capture fewer samples than usual. 2) Millimeter wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in growing fields such as medical treatment, homeland security, concealed weapon detection, and space technology. Moreover, the ability to image reliably in low-visibility conditions such as heavy cloud, smoke, fog and sandstorms makes the MMW region of particular interest for military applications. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPAs) can be very efficient for real-time imaging. The GDD is located in free space and detects MMW radiation almost isotropically. In this article, we present a new approach to MMW image reconstruction based on rotational scanning of the target. The collection process, based on Radon projections, allows implementation of compressive sensing principles in the MMW region. Feasibility of the concept was demonstrated with Radon line-imaging results, and MMW imaging results with our recent sensor are also presented for the first time. The multiplexed frame rate of the 16×16 GDD FPA permits real-time video imaging at 30 frames per second as well as comprehensive 3D MMW imaging. It uses commercial 3 mm diameter Ne indicator GDD lamps as pixel detectors. The combination of these two fields should bring significant improvement to MMW imaging research and open new possibilities for compressive sensing techniques.
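    The acquisition geometry described above can be illustrated with a numpy-only toy model: each "projection" is what a line of detectors would record as the scene is rotated, and a crude image is recovered by unfiltered backprojection. This is only a sketch of the Radon-projection idea, not the authors' GDD hardware or their compressive-sensing reconstruction.

```python
import numpy as np

def rotate_nn(img, angle):
    """Nearest-neighbour rotation about the image centre (numpy only)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    y, x = np.indices(img.shape)
    # inverse mapping: source coordinates for each output pixel
    xs = np.cos(angle) * (x - c) + np.sin(angle) * (y - c) + c
    ys = -np.sin(angle) * (x - c) + np.cos(angle) * (y - c) + c
    xs = np.clip(np.rint(xs).astype(int), 0, n - 1)
    ys = np.clip(np.rint(ys).astype(int), 0, n - 1)
    return img[ys, xs]

def radon(img, angles):
    """One projection per angle: rotate the scene, sum along columns."""
    return np.array([rotate_nn(img, a).sum(axis=0) for a in angles])

def backproject(sino, angles):
    """Unfiltered backprojection: smear each projection and rotate back."""
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for a, proj in zip(angles, sino):
        recon += rotate_nn(np.tile(proj, (n, 1)), -a)
    return recon / len(angles)

n = 64
y, x = np.indices((n, n))
target = (((x - 40) ** 2 + (y - 24) ** 2) < 36).astype(float)  # small disc
angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = backproject(radon(target, angles), angles)
peak = np.unravel_index(recon.argmax(), recon.shape)
print(peak)  # near the true disc centre, row 24, column 40
```

A filtered backprojection (or a compressive-sensing solver, as in the paper) would sharpen the blurred reconstruction that this plain backprojection produces.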

  4. Measurement of extinction coefficients at CASLEO and characteristics of the ROPER-2048B CCD of the JS telescope

    NASA Astrophysics Data System (ADS)

    Fernández-Lajús, E.; Gamen, R.; Sánchez, M.; Scalia, M. C.; Baume, G. L.

    2016-08-01

    From observations made with the ``Jorge Sahade'' telescope of the Complejo Astronomico El Leoncito, the UBVRI-band extinction coefficients were measured, and some parameters and characteristics of the direct-image CCD camera ROPER 2048B were determined.

  5. Improving Radar Snowfall Measurements Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2005-05-01

    A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. The recent upgrade of the UND C-band weather radar to dual-polarimetric capability, along with the development of the UND Glacial Ridge intensive atmospheric observation site, has presented a valuable opportunity to improve radar estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed at the Glacial Ridge site for most of the 2004-2005 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a grayscale CCD video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters from an independent light source. This design may eliminate sampling errors caused by wind flow around the instrument. The RIS has proven its ability to operate continuously in the adverse conditions often observed in the Northern Plains. It provides crystal habit information, the variability of particle size distributions over the lifecycle of a storm, snowfall rates, and estimates of snow density. This information, in conjunction with hand measurements of density and crystal habit, will be used to build a database for comparison with polarimetric data from the UND radar. This database will serve as the basis for improving snowfall estimates using polarimetric radar observations. Preliminary results from several case studies will be presented.
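    The step from a measured size distribution to a snowfall rate can be sketched with an assumed mass-diameter power law m = a*D**b and fall-speed relation v = c*D**d. All coefficients and the example distribution below are placeholders for illustration, not the RIS processing chain or calibrated values.

```python
import numpy as np

def snowfall_rate_mm_per_hr(diam_mm, conc_per_m3_mm,
                            a=2e-4, b=2.0, c=0.8, d=0.2):
    """Liquid-equivalent snowfall rate from a size distribution N(D).

    diam_mm: bin centres (mm); conc_per_m3_mm: N(D) in m^-3 mm^-1.
    m = a*D**b gives particle mass in grams; v = c*D**d fall speed in m/s.
    """
    dD = np.gradient(diam_mm)                    # bin widths (mm)
    mass_g = a * diam_mm ** b                    # mass per particle (g)
    v_m_s = c * diam_mm ** d                     # fall speed (m/s)
    flux_g_m2_s = np.sum(mass_g * v_m_s * conc_per_m3_mm * dD)
    # 1 g/m^2 of water is 0.001 mm of depth; convert per-second to per-hour
    return flux_g_m2_s * 0.001 * 3600.0

D = np.linspace(0.5, 10.0, 40)                   # particle diameters (mm)
N = 100.0 * np.exp(-0.6 * D)                     # assumed exponential PSD
rate = snowfall_rate_mm_per_hr(D, N)
print(round(rate, 2), "mm/hr liquid equivalent")
```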

  6. HONEY -- The Honeywell Camera

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  7. Mosaic CCD method: A new technique for observing dynamics of cometary magnetospheres

    NASA Technical Reports Server (NTRS)

    Saito, T.; Takeuchi, H.; Kozuba, Y.; Okamura, S.; Konno, I.; Hamabe, M.; Aoki, T.; Minami, S.; Isobe, S.

    1992-01-01

    On April 29, 1990, the plasma tail of Comet Austin was observed with a CCD camera on the 105-cm Schmidt telescope at the Kiso Observatory of the University of Tokyo. The area of the CCD used in this observation is only about 1 sq cm. When this CCD is used on the 105-cm Schmidt telescope at the Kiso Observatory, that area corresponds to a narrow square field of view of 12' x 12'. By comparison with the photograph of Comet Austin taken by Numazawa (personal communication) on the same night, we see that only a small part of the plasma tail can be photographed at one time with the CCD. However, by shifting the view on the CCD after each exposure, we succeeded in imaging the entire length of the cometary magnetosphere, 1.6 x 10^6 km. This new technique is called 'the mosaic CCD method'. In order to study the dynamics of cometary plasma tails, seven frames of the comet from the head to the tail region were imaged twice with the mosaic CCD method, yielding two sets of images. Six microstructures, including arcade structures, were identified in both sets. Sketches of the plasma tail including the microstructures are included.

  8. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to see through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  9. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to 'see' through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  10. The ratio between CcdA and CcdB modulates the transcriptional repression of the ccd poison-antidote system.

    PubMed

    Afif, H; Allali, N; Couturier, M; Van Melderen, L

    2001-07-01

    The ccd operon of the F plasmid encodes CcdB, a toxin targeting the essential gyrase of Escherichia coli, and CcdA, the unstable antidote that interacts with CcdB to neutralize its toxicity. Although work from our group and others has established that CcdA and CcdB are required for transcriptional repression of the operon, the underlying mechanism remains unclear. The results presented here indicate that, although CcdA is the DNA-binding element of the CcdA-CcdB complex, the stoichiometry of the two proteins determines whether or not the complex binds to the ccd operator-promoter region. Using electrophoretic mobility shift assays, we show that a (CcdA)2-(CcdB)2 complex binds DNA. The addition of extra CcdB to that protein-DNA complex completely abolishes DNA retardation. Based on these results, we propose a model in which the ratio between CcdA and CcdB regulates the repression state of the ccd operon. When the level of CcdA is greater than or equal to that of CcdB, repression results. In contrast, derepression occurs when CcdB is in excess of CcdA. By ensuring an antidote-toxin ratio greater than one, this mechanism could prevent the harmful effect of CcdB in plasmid-containing bacteria.

  11. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  12. Making a room-sized camera obscura

    NASA Astrophysics Data System (ADS)

    Flynt, Halima; Ruiz, Michael J.

    2015-01-01

    We describe how to convert a room into a camera obscura as a project for introductory geometrical optics. The view for our camera obscura is a busy street scene set against a beautiful mountain skyline. We include a short video with project instructions, ray diagrams and delightful moving images of cars driving on the road outside.

  13. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  14. Effects On Beam Alignment Due To Neutron-Irradiated CCD Images At The National Ignition Facility

    SciTech Connect

    Awwal, A; Manuel, A; Datte, P; Burkhart, S

    2011-02-28

    The 192 laser beams in the National Ignition Facility (NIF) are automatically aligned to the target-chamber center using images obtained through charge-coupled device (CCD) cameras. Several of these cameras are in and around the target chamber during an experiment. Current experiments for the National Ignition Campaign are attempting to achieve nuclear fusion. Neutron yields from these high-energy fusion shots expose the alignment cameras to neutron radiation. The present work explores modeling and predicting laser alignment performance degradation due to neutron radiation effects, and demonstrates techniques to mitigate performance degradation. Camera performance models have been created based on the measured camera noise from the cumulative single-shot fluence at the camera location. We have found that the effect of the neutron-generated noise for all shots to date has been well within the alignment tolerance of half a pixel, and image processing techniques can be utilized to reduce the effect on the beam alignment even further.
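    One standard image-processing technique for suppressing the isolated bright pixels that neutron hits produce is a small median filter applied before centroiding. The sketch below is a generic illustration of that idea on synthetic data, not the NIF alignment pipeline; the noise model (1% saturated pixels) is an assumption.

```python
import numpy as np

def median3(img):
    """3x3 median filter (edge-padded); removes isolated hot pixels."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = np.stack([p[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

def centroid(img):
    t = img.sum()
    y, x = np.indices(img.shape)
    return (y * img).sum() / t, (x * img).sum() / t

rng = np.random.default_rng(0)
y, x = np.indices((100, 100))
spot = np.exp(-((y - 50) ** 2 + (x - 50) ** 2) / (2 * 6.0 ** 2))

noisy = spot.copy()
noisy[rng.random(spot.shape) < 0.01] = 1.0   # 1% saturated "neutron hits"
cy, cx = centroid(median3(noisy))
print(round(cy, 2), round(cx, 2))  # stays near the true centre (50, 50)
```

Because the hits are isolated, the 3x3 median leaves the smooth alignment spot essentially untouched while rejecting the speckle, keeping the centroid well inside a half-pixel tolerance.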

  15. Snowfall Retrievals Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2004-12-01

    A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in Eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a grayscale CCD video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters from an independent light source. This design may eliminate sampling errors caused by wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, variability of particle size distributions over the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship for the storm. These results, along with several other cases, will be shown during the presentation.

  16. An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data

    NASA Technical Reports Server (NTRS)

    Chandler, Gyanesh

    2007-01-01

    The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).

  17. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in southern France, the system was operational from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event-detection software recorded and time-stamped single TLE video fields, eliminating the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.
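    The event-detection idea can be sketched as follows: flag a video field when enough pixels brighten relative to a slowly tracked background estimate. The thresholds and the exponential background model below are assumptions for illustration, not the campaign software's actual algorithm.

```python
import numpy as np

def detect_events(fields, k_sigma=5.0, min_pixels=20, alpha=0.05):
    """Return indices of fields where many pixels exceed the background."""
    bg = fields[0].astype(float)
    var = np.full(bg.shape, 4.0)          # initial noise-variance guess
    events = []
    for i, f in enumerate(fields[1:], start=1):
        excess = f.astype(float) - bg
        hot = excess > k_sigma * np.sqrt(var)
        if hot.sum() >= min_pixels:
            events.append(i)              # would time-stamp this field
        bg += alpha * excess              # slowly track the background
        var += alpha * (excess ** 2 - var)
    return events

rng = np.random.default_rng(1)
fields = [rng.normal(10.0, 2.0, (120, 160)) for _ in range(50)]
fields[30][40:60, 70:90] += 40.0          # inject one bright transient
events = detect_events(fields)
print(events)  # only the injected field is flagged
```

Because the background adapts slowly, short-lived brightenings such as sprites trip the per-pixel threshold while gradual sky changes do not.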

  18. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  19. Frequency-domain imaging of thick tissues using a CCD

    NASA Astrophysics Data System (ADS)

    French, Todd E.; Gratton, Enrico; Maier, John S.

    1992-04-01

    Imaging of thick tissue has been an area of active research during the past several years. Among the methods proposed to deal with the high scattering of biological tissues, the time resolution of a short light probe traversing a tissue seems to be the most promising. Time resolution can be achieved in the time domain using correlated single photon counting techniques or in the frequency domain using phase resolved methods. We have developed a CCD camera system which provides ultra high time resolution on the entire field of view. The phase of the photon diffusion wave traveling in the highly turbid medium can be measured with an accuracy of about one degree at each pixel. The camera has been successfully modulated at frequencies on the order of 100 MHz. At this frequency, one degree of phase shift corresponds to about 30 ps maximum time resolution. Powerful image processing software displays in real time the phase resolved image on the computer screen.
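    The time-resolution figure quoted above follows directly from the modulation frequency: one degree of phase is 1/360 of the modulation period.

```python
# At 100 MHz modulation, one degree of phase corresponds to
# (1/360) of the 10 ns period.
f_mod = 100e6                       # modulation frequency, Hz
period_s = 1.0 / f_mod              # 10 ns
dt = period_s / 360.0               # time per degree of phase
print(dt * 1e12, "ps per degree")   # ~27.8 ps, i.e. "about 30 ps"
```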

  20. Fundamental study on identification of CMOS cameras

    NASA Astrophysics Data System (ADS)

    Kurosawa, Kenji; Saitoh, Naoki

    2003-08-01

    In this study, we discuss the individual identification of CMOS cameras, because CMOS (complementary metal-oxide-semiconductor) imaging detectors have in recent years begun to move into fields dominated by CCDs (charge-coupled devices). Whether or not given images have been taken with a given CMOS camera can be determined by detecting the imager's intrinsic, unique fixed pattern noise (FPN), just as in the individual CCD camera identification method previously proposed by the authors. Both dark and bright pictures taken with CMOS cameras can be identified by the method, because not only dark current in the photodetectors but also the MOS-FET amplifier incorporated in each pixel produces pixel-to-pixel nonuniformity in sensitivity. Each pixel in a CMOS detector has its own amplifier, and nonuniformity of the amplifier gain degrades the quality of bright images. Two CMOS cameras were evaluated in our experiments: the WebCamGoPlus (Creative) and the EOS D30 (Canon). The WebCamGoPlus is a low-priced web camera, whereas the EOS D30 is aimed at professional use. Images of a white plate were recorded with the cameras at plate luminances of 0 cd/m2 and 150 cd/m2. The recorded images were integrated over many frames to reduce the random noise component. In the images from both cameras, characteristic dot patterns were observed: bright dots in the dark images and dark dots in the bright images. The results show that the camera identification method is also effective for CMOS cameras.
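    The FPN-matching idea can be sketched on synthetic data: average many frames from each camera so random noise cancels and the fixed pattern remains, then attribute a questioned image to the camera whose fingerprint it correlates with best. All parameters (noise levels, frame counts) are assumptions; the authors' actual procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_camera(shape=(64, 64)):
    return rng.normal(0.0, 1.0, shape)       # per-pixel FPN signature

def capture(fpn, n=1):
    # frame = flat white-plate scene + FPN + random shot/readout noise
    return np.mean([100.0 + fpn + rng.normal(0.0, 5.0, fpn.shape)
                    for _ in range(n)], axis=0)

def fingerprint(fpn, n=64):
    f = capture(fpn, n)                      # integrating n frames makes
    return f - f.mean()                      # random noise average out

def ncc(a, b):
    """Normalized cross-correlation between two noise residuals."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

cam_a, cam_b = make_camera(), make_camera()
ref_a, ref_b = fingerprint(cam_a), fingerprint(cam_b)
questioned = capture(cam_a, n=16) - 100.0    # image of "unknown" origin
print(ncc(questioned, ref_a) > ncc(questioned, ref_b))  # True: matches A
```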

  1. Stationary Camera Aims And Zooms Electronically

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steven D.

    1994-01-01

    Microprocessors select, correct, and orient portions of hemispherical field of view. Video camera pans, tilts, zooms, and provides rotations of images of objects in field of view, all without moving parts. Used for surveillance in areas where movement of camera conspicuous or constrained by obstructions. Also used for closeup tracking of multiple objects in field of view or to break image into sectors for simultaneous viewing, thereby replacing several cameras.

  2. Design of ground-based physical simulation system for satellite-borne TDI-CCD dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyuan; Zhang, Liu; Jin, Guang; Yang, Xiubin

    2010-11-01

    As is well known, image motion degrades the image quality of satellite-borne TDI CCD cameras. Although many theories of image motion have been proposed to cope with this problem, few ground simulations have been performed to verify them. In this paper, a ground-based physical simulation system for TDI CCD imaging is developed and specified, consisting of a physical simulation subsystem for precise satellite attitude control based on a 3-axis air bearing table, and an imaging subsystem that uses an area-array CCD to simulate a TDI CCD. The designed system realizes not only a precise simulation of satellite attitude control, with pointing accuracy better than 0.1° and stability better than 0.01°/s, but also an imaging simulation of a 16-stage TDI CCD with a 0.1 s integration time. This paper also gives a mathematical model of the image motion of this system, analogous to a satellite-borne TDI CCD, and a detailed description of the principle of using an area-array CCD to simulate a TDI CCD. Experimental results agree with the mathematical simulation and show that image quality deteriorates seriously when the correspondence between the image velocity and the charge-transfer velocity is broken, confirming both the validity of the system design and the proposed image-motion theory of TDI CCD.
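    The velocity-mismatch effect can be illustrated with a one-dimensional toy model: a TDI stage accumulates charge while shifting it by one pixel per stage, so any image velocity other than one pixel per stage smears the accumulated signal. This mirrors the paper's point qualitatively; it is not the authors' area-array simulation setup.

```python
import numpy as np

def tdi_line(scene, stages, v):
    """1-D TDI accumulation.

    scene: 1-D intensity profile; v: image velocity in pixels per stage.
    The charge packet moves 1 px/stage, so the accumulated sample of
    stage k is offset by (v - 1) * k pixels from the charge position.
    """
    acc = np.zeros_like(scene)
    for k in range(stages):
        shift = int(round((v - 1.0) * k))
        idx = np.clip(np.arange(scene.size) + shift, 0, scene.size - 1)
        acc += scene[idx]
    return acc / stages

x = np.arange(200)
scene = np.exp(-(x - 100) ** 2 / (2 * 2.0 ** 2))   # sharp point source

matched = tdi_line(scene, stages=16, v=1.0)        # velocities agree
mismatched = tdi_line(scene, stages=16, v=1.25)    # 25% mismatch
# the mismatch smears the source: the peak drops and the profile widens
print(matched.max() > mismatched.max())  # True
```

With matched velocities the 16 stages add coherently and the profile is unchanged; with a mismatch each stage samples a slightly shifted copy, which is exactly the degradation the ground system is built to reproduce.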

  3. Caught on Video

    ERIC Educational Resources Information Center

    Sprankle, Bob

    2008-01-01

    When cheaper video cameras with built-in USB connectors were first introduced, the author relates that he pined for one so that he could introduce the technology into his classroom. He believed it would not only be a great tool for students to capture their own learning, but would also make his job of collecting authentic assessment more streamlined…

  4. Coaxial fundus camera for ophthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device that must provide low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging. These constraints make its optical design very sophisticated, but the hardest to satisfy are the reflection-free illumination and the final alignment, owing to the large number of non-coaxial optical components in the system. Reflections of the illumination, both at the objective and at the cornea, mask image quality, and poor alignment renders the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is provided by an LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  5. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces, in real time, a two-dimensional cross-correlation of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments use the two-dimensional cross-correlation to determine target movement and target identification.
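
    The ranging principle here is cross-correlation of the two stereo views: the location of the correlation peak gives the disparity, from which range follows. A digital analogue of the optical correlator can be sketched with an FFT (the scene and the 5-pixel shift below are hypothetical, for illustration only):

    ```python
    # FFT-based 2-D cross-correlation to recover stereo disparity.
    # Illustrative sketch; real stereo pairs differ by more than a pure shift.
    import numpy as np

    def disparity_by_correlation(left, right):
        """Return the (row, col) shift s such that right ~ np.roll(left, s),
        found as the peak of the circular cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(right) * np.conj(np.fft.fft2(left))).real
        idx = np.unravel_index(np.argmax(corr), corr.shape)
        # map indices past the midpoint to negative shifts
        return tuple(i - n if i > n // 2 else i for i, n in zip(idx, corr.shape))

    rng = np.random.default_rng(0)
    left = rng.random((64, 64))
    right = np.roll(left, 5, axis=1)   # simulate a 5-pixel horizontal disparity
    print(disparity_by_correlation(left, right))  # -> (0, 5)
    ```

    The optical system performs the equivalent multiply-and-focus operation in hardware, which is why it runs at video rates without digital computation.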

  6. Optical and dark characterization of the PLATO CCD at ESA

    NASA Astrophysics Data System (ADS)

    Verhoeve, Peter; Prod'homme, Thibaut; Oosterbroek, Tim; Duvet, Ludovic; Beaufort, Thierry; Blommaert, Sander; Butler, Bart; Heijnen, Jerko; Lemmel, Frederic; van der Luijt, Cornelis; Smit, Hans; Visser, Ivo

    2016-07-01

    PLATO - PLAnetary Transits and Oscillations of stars - is the third medium-class mission (M3) to be selected in the European Space Agency (ESA) Science and Robotic Exploration Cosmic Vision programme. It is due for launch in 2025 with the main objective to find and study terrestrial planets in the habitable zone around solar-like stars. The payload consists of >20 cameras; with each camera comprising 4 Charge-Coupled Devices (CCDs), a large number of flight model devices procured by ESA shall ultimately be integrated on the spacecraft. The CCD270 - specially designed and manufactured by e2v for the PLATO mission - is a large format (8 cm x 8 cm) back-illuminated device operating at 4 MHz pixel rate and coming in two variants: full frame and frame transfer. In order to de-risk the PLATO CCD procurement and aid the mission definition process, ESA's Payload Technology Validation section is currently validating the PLATO CCD270. This validation consists of demonstrating that the device achieves its specified electro-optical performance in the relevant environment: operated at 4 MHz, at cold, and before and after proton irradiation. As part of this validation, CCD270 devices have been characterized in the dark as well as optically with respect to performance parameters directly relevant for the photometric application of the CCDs. Dark tests comprise the measurement of gain sensitivity to bias voltages, charge injection tests, and measurement of hot and variable pixels after irradiation. In addition, results are presented of measurements of quantum efficiency for a range of angles of incidence, intra-pixel response (non-)uniformity, and response to spot illumination, before and after proton irradiation. In particular, the effect of radiation-induced degradation of the charge transfer efficiency on the measured charge in a star-like spot has been studied as a function of signal level and of position on the pixel grid. Also, the effect of various levels of background light on the…

  7. The LSST CCD Development Program

    NASA Astrophysics Data System (ADS)

    Kotov, Ivan; Frank, J. S.; Geary, J.; Gilmore, K.; O'Connor, P.; Radeka, V.; Takacs, P.; Tyson, J. A.

    2007-12-01

    The LSST focal plane array (FPA) will be the largest ever made. The sensors must produce low read noise, high QE in the red, and a very tight PSF. This will all be necessary to do the science at the LSST. The principle underlying the development plan is that for an FPA involving about 200 large format (4k x 4k) sensors, an industrial approach has to be developed and adopted. In this initial phase of CCD development, we have targeted specific technology challenges at competitively selected vendors, with the goal of establishing both the technical characteristics of actual sensors, based on our projected requirements, and the industrial feasibility of their production. The CCD technology challenges we have targeted in particular are over-depleted high resistivity devices in the 100 micron thickness range with a biased conductive window. Initial test results from the first devices in a smaller format resulting from this study program will be presented, demonstrating that these challenges can be overcome.

  8. Mobile Panoramic Video Applications for Learning

    ERIC Educational Resources Information Center

    Multisilta, Jari

    2014-01-01

    The use of videos on the internet has grown significantly in the last few years. For example, Khan Academy has a large collection of educational videos, especially on STEM subjects, available for free on the internet. Professional panoramic video cameras are expensive and usually not easy to carry because of the large size of the equipment.…

  9. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and development of powerful computers to analyze, display, and quantify data has been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, greater patient comfort, and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  10. The Pan-STARRS Gigapixel Camera

    NASA Astrophysics Data System (ADS)

    Tonry, J.; Onaka, P.; Luppino, G.; Isani, S.

    The Pan-STARRS project will undertake repeated surveys of the sky to find "Killer Asteroids", everything else which moves or blinks, and to build an unprecedented deep and accurate "static sky". The key enabling technology is a new generation of large format cameras that offer an order of magnitude improvement in size, speed, and cost compared to existing instruments. In this talk, we provide an overview of the camera research and development effort being undertaken by the Institute for Astronomy Camera Group in partnership with MIT Lincoln Laboratories. The main components of the camera subsystem will be identified and briefly described as an introduction to the more specialized talks presented elsewhere at this conference. We will focus on the development process followed at the IfA utilizing the orthogonal transfer CCD in building cameras of various sizes from a single OTA "mcam", to a 16-OTA "Test Camera", to the final 64-OTA 1.4 billion pixel camera (Gigapixel Camera #1 or GPC1) to be used for PS1 survey operations. We also show the design of a deployable Shack-Hartmann device residing in the camera and other auxiliary instrumentation used to support camera operations.

  11. CCD Analog Programmable Microprocessor (APUP) Study

    DTIC Science & Technology

    1980-08-01

    failing module. Furthermore, the analog nature of charge- coupled devices (CCD’s) gives the prospect of a small-area, low-power, cost-effective...category, the data to be processed occur naturally in serial form and with storage times short enough that this category is a close match for CCD-based...future digital processors. Nonetheless, its pipeline nature might make it more suitable for a CCD type of implementation. The envelope detection block is

  12. CCD research. [design, fabrication, and applications

    NASA Technical Reports Server (NTRS)

    Gassaway, J. D.

    1976-01-01

    The fundamental problems encountered in designing, fabricating, and applying CCD's are reviewed. Investigations are described and results and conclusions are given for the following: (1) the development of design analyses employing computer aided techniques and their application to the design of a gapped structure; (2) the role of CCD's in applications to electronic functions, in particular, signal processing; (3) extending the CCD to silicon films on sapphire (SOS); and (4) an all-aluminum transfer structure with low noise input-output circuits. Related work on CCD imaging devices is summarized.

  13. One frame subnanosecond spectroscopy camera

    NASA Astrophysics Data System (ADS)

    Silkis, E. G.; Titov, V. D.; Fel'Dman, G. G.; Zhilkina, V. M.; Petrokovich, O. A.; Syrtsev, V. N.

    1991-04-01

    The recording of ultraweak spectra is presently undertaken by a high-speed multichannel spectrum camera (HSMSC) with subnanosecond-range time resolution in its photon-counting mode. The HSMSC's photodetector is a one-frame streak tube equipped with a grid shutter, connected via a fiber-optic contact to a linear CCD. The gain furnished by the streak tube on the basis of a microchannel plate is sufficiently high for recording single-photoelectron signals. The HSMSC is compact and easy to handle.

  14. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
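
    The sub-electron noise figure follows from how EM gain works: the output amplifier's read noise is applied after on-chip multiplication, so the input-referred noise is divided by the gain. A back-of-the-envelope check (the 30 e⁻ read noise value is a hypothetical example, not a specification from this record):

    ```python
    # Input-referred read noise after electron multiplication.
    # Assumed example: 30 e- amplifier noise, 1000x EM gain.
    def effective_read_noise(read_noise_e, em_gain):
        """Read noise in electrons, referred to the sensor input."""
        return read_noise_e / em_gain

    print(effective_read_noise(30.0, 1000.0))  # -> 0.03, i.e. well under 1 e-
    ```

    This is why a programmable gain of up to 1000:1 yields "effective noise levels of less than 1 electron" even with an ordinary high-speed output amplifier (EM multiplication does add excess noise on the signal itself, which this sketch ignores).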

  15. Real-time full-field photoacoustic imaging using an ultrasonic camera

    NASA Astrophysics Data System (ADS)

    Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F.; Krishnaswamy, Sridhar

    2010-03-01

    A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12×12 mm CCD chip with 120×120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified.

  16. Camera for Quasars in Early Universe (CQUEAN)

    NASA Astrophysics Data System (ADS)

    Park, Won-Kee; Pak, Soojong; Im, Myungshin; Choi, Changsu; Jeon, Yiseul; Chang, Seunghyuk; Jeong, Hyeonju; Lim, Juhee; Kim, Eunbin

    2012-08-01

    We describe the overall characteristics and the performance of an optical CCD camera system, Camera for Quasars in Early Universe (CQUEAN), which has been used at the 2.1 m Otto Struve Telescope of the McDonald Observatory since 2010 August. CQUEAN was developed for follow-up imaging observations of red sources such as high-redshift quasar candidates (z ≳ 5), gamma-ray bursts, brown dwarfs, and young stellar objects. For efficient observations of the red objects, CQUEAN has a science camera with a deep-depletion CCD chip, which boasts a higher quantum efficiency at 0.7–1.1 μm than conventional CCD chips. The camera was developed in a short timescale () and has been working reliably. By employing an autoguiding system and a focal reducer to enhance the field of view on the classical Cassegrain focus, we achieve a stable guiding in 20-minute exposures, an imaging quality with FWHM ≥ 0.6″ over the whole field (4.8′ × 4.8′), and a limiting magnitude of z = 23.4 AB mag at 5-σ with 1 hr total integration time. This article includes data taken at the McDonald Observatory of The University of Texas at Austin.

  17. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high-speed camera to take high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely, CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can capture images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet-trajectory measurement in the military. The high-speed camera system made in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280×1024; an FPGA and DDR2 memory that control the image sensor and save images; a Camera Link module that transmits saved data to a PC; and an RS-422 communication function that enables control of the camera from a PC.
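
    The stated figures imply a substantial sensor-to-memory bandwidth, which motivates the FPGA + DDR2 buffering in the design. A quick estimate (the 8-bit pixel depth is an assumption for illustration; the record does not state the bit depth):

    ```python
    # Data rate implied by 1280x1024 pixels at 500 fps, assuming 1 byte/pixel.
    width, height, fps, bytes_per_px = 1280, 1024, 500, 1
    rate_bytes = width * height * fps * bytes_per_px
    print(rate_bytes / 1e6)  # -> 655.36 MB/s streamed into DDR2
    ```

    At roughly 655 MB/s even for 8-bit pixels, the data stream far exceeds what a PC link of that era could absorb live, so frames are buffered in DDR2 and transferred to the PC over Camera Link afterwards.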

  18. Secure authenticated video equipment

    SciTech Connect

    Doren, N.E.

    1993-07-01

    In the verification technology arena, there is a pressing need for surveillance and monitoring equipment that produces authentic, verifiable records of observed activities. Such a record provides the inspecting party with confidence that observed activities occurred as recorded, without undetected tampering or spoofing having taken place. The secure authenticated video equipment (SAVE) system provides an authenticated series of video images of an observed activity. Being self-contained and portable, it can be installed as a stand-alone surveillance system or used in conjunction with existing monitoring equipment in a non-invasive manner. Security is provided by a tamper-proof camera enclosure containing a private, electronic authentication key. Video data is transferred over a communication link consisting of a coaxial cable, fiber-optic link, or other similar media. A video review station, located remotely from the camera, receives, validates, displays and stores the incoming data. Video data is validated within the review station using a public key, a copy of which is held by authorized parties. This scheme allows the holder of the public key to verify the authenticity of the recorded video data but precludes undetectable modification of the data generated by the tamper-protected private authentication key.
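
    SAVE's security rests on asymmetric signing: only the sealed camera holds the private key, while any public-key holder can verify. Python's standard library has no public-key signing, so the sketch below illustrates only the complementary tamper-evidence idea, chaining frame digests so that altering any earlier frame changes every later digest (frame contents and the seed are hypothetical):

    ```python
    # Hash-chained frame digests: a simplified stand-in for SAVE's signed
    # video stream. This gives tamper evidence, not the public-key
    # verifiability of the real system.
    import hashlib

    def chain_digests(frames, seed=b"camera-id"):
        """Return one digest per frame, each bound to all previous frames."""
        digests, h = [], seed
        for frame in frames:
            h = hashlib.sha256(h + frame).digest()
            digests.append(h)
        return digests

    frames = [b"frame-0", b"frame-1", b"frame-2"]
    good = chain_digests(frames)
    forged = chain_digests([b"frame-0", b"FORGED!", b"frame-2"])
    print(good[2] != forged[2])  # -> True: tampering propagates to the end
    ```

    In the real system the camera would additionally sign such digests with its private key, so a reviewer holding only the public key can both detect tampering and confirm the data originated from the sealed enclosure.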

  19. The 2060 Chiron: CCD photometry

    NASA Technical Reports Server (NTRS)

    Bus, Schelte J.; Bowell, Edward; Harris, Alan W.

    1987-01-01

    R-band CCD photometry of 2060 Chiron was carried out on nine nights in November and December 1986. The rotation period is 5.9181 ± 0.0003 hr and the peak-to-peak lightcurve amplitude is 0.088 ± 0.0003 mag. Photometric parameters are H_R = 6.24 ± 0.02 mag and G_R = ±0.15, though formal errors may not be realistic. The lightcurve has two pairs of extrema, but its asymmetry, as evidenced by the presence of significant odd Fourier harmonics, suggests macroscopic surface irregularities and/or the presence of some large-scale albedo variegation. The observational rms residual is ±0.015 mag. On time scales from minutes to days there is no evidence for nonperiodic (cometary) brightness changes at the level of a few millimagnitudes.
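
    The diagnostic role of odd harmonics can be sketched numerically: a shape-driven lightcurve from a symmetric body repeats every half rotation, so only even harmonics of the rotation frequency appear; any odd-harmonic power signals asymmetry. The amplitudes below are synthetic illustration values, not Chiron's measured harmonics:

    ```python
    # Projecting a synthetic lightcurve onto its Fourier harmonics.
    # Amplitudes are illustrative; only the period comes from the abstract.
    import numpy as np

    period = 5.9181                                  # hours
    t = np.linspace(0, period, 256, endpoint=False)
    phase = 2 * np.pi * t / period
    symmetric = 0.044 * np.cos(2 * phase)            # even harmonic only
    asymmetric = symmetric + 0.01 * np.cos(3 * phase)  # add an odd harmonic

    def harmonic_amp(signal, k):
        """Amplitude of the k-th harmonic via discrete Fourier projection."""
        return 2 * abs(np.mean(signal * np.exp(-1j * k * phase)))

    print(round(harmonic_amp(symmetric, 3), 4),      # -> 0.0
          round(harmonic_amp(asymmetric, 3), 4))     # -> 0.01
    ```

    A fit of this kind to the observed lightcurve is how "significant odd Fourier harmonics" would be detected, pointing to surface irregularities or albedo variegation rather than a purely ellipsoidal shape.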

  20. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.