Sample records for video CCD camera

  1. Electron bombardment CCD camera

    Microsoft Academic Search

    Tadashi Maruno; Masahiko Shirai; Fumio Iwase; Naotaka Hakamata

    1998-01-01

    Two kinds of electron bombardment CCD (EB-CCD) cameras have been newly developed, employing EB-CCD sensors made by Hamamatsu Photonics. The slow-scan cooled CCD camera uses a full-frame-transfer EB-CCD sensor with a 512 X 512 pixel format, and the standard-video-rate camera uses a frame-transfer EB-CCD sensor with a 658 X 490 pixel format. For slow scan ...

  2. Upgrading a CCD camera for astronomical use 

    E-print Network

    Lamecker, James Frank

    1993-01-01

    Existing charge-coupled device (CCD) video cameras have been modified to be used for astronomical imaging on telescopes in order to improve imaging times over those of photography. An astronomical CCD camera at the Texas A&M Observatory would...

  3. Interline transfer CCD camera

    SciTech Connect

    Prokop, M.S.; McCurnin, T.W.; Stump, C.J.; Stradling, G.L.

    1993-12-31

    An interline CCD sensing device for use in a camera system includes an imaging area, sensitive to impinging light, for generating charges corresponding to the intensity of the impinging light. Sixteen independent registers R1 - R16 sequentially receive the interline data from the imaging area, corresponding to the generated charges. Sixteen output amplifiers S1 - S16 and sixteen ports P1 - P16 sequentially transfer the interline data, one pixel at a time, to supply a desired image transfer speed. The imaging area is segmented into sixteen independent imaging segments A1 - A16, each of which corresponds to one register, one output amplifier, and one output port. Each of the imaging segments A1 - A16 includes an array of rows and columns of pixels. Each pixel includes a photogate area, an interline CCD channel area, and an anti-blooming area. The anti-blooming area is, in turn, divided into an anti-blooming barrier and an anti-blooming drain.

  4. Calibration Tests of Industrial and Scientific CCD Cameras

    NASA Technical Reports Server (NTRS)

    Shortis, M. R.; Burner, A. W.; Snow, W. L.; Goad, W. K.

    1991-01-01

    Small-format, medium-resolution CCD cameras are at present widely used for industrial metrology applications. Large-format, high-resolution CCD cameras are primarily in use for scientific applications, but in due course should increase both the range of applications and the object-space accuracy achievable by close-range measurement. Slow-scan, cooled scientific CCD cameras provide the further benefit of additional quantisation levels, which enables improved radiometric resolution. The calibration of all types of CCD cameras is necessary in order to characterize the geometry of the sensors and lenses. A number of different types of CCD cameras have been calibrated at the NASA Langley Research Center using self-calibration and a small test object. The results of these calibration tests are described, with particular emphasis on the differences between standard CCD video cameras and scientific slow-scan CCD cameras.

  5. 3-D eye movement measurements on four Comex's divers using video CCD cameras, during high pressure diving.

    PubMed

    Guillemant, P; Ulmer, E; Freyss, G

    1995-01-01

    Previous studies have shown the vulnerability of the vestibular system to barotrauma (1), and deep diving may induce immediate neurological changes (2). These extreme conditions (high pressure, limited examination time, restricted space, hydrogen-oxygen mixture, communication difficulties, etc.) require adapted technology and a fast associated experimental procedure. We were able to solve these problems by developing a new system for on-line analysis of 3-D ocular movements by means of a video camera. This analyser uses image-processing and form-recognition software that allows non-invasive, video-frequency calculation of eye movements, including the torsional component. As the system is immediately ready for use, we were able to perform the subsequent examinations in a maximum time of 8 min for each diver: oculomotor tests, including traditional automatic measurements of saccadic, slow and optokinetic movements; and vestibular tests of spontaneous and positional nystagmus, and of reactional nystagmus in the pendular test. For pendular-induced nystagmus we used appropriate head positions to stimulate the lateral and the posterior semicircular canals separately, and we measured the gain by operating successively in visible light and complete darkness. Recordings were made during a simulated onshore dive at an ambient pressure corresponding to a depth of 350 m. The above examinations were completed on the first and last days by caloric tests with the same video analyser. The results of the investigations demonstrated perfect tolerance of the oculomotor and vestibular systems of these 4 divers, thus fulfilling the preventive conditions defined by Comex Co. We were able to overcome the limitations of low-cost PC operation and cameras (the need for adaptation to pressure, focusing difficulties and direct-light eye reflections), and still obtained accurate on-line measurements, even of the torsional component of eye movement. Given this technological efficiency, we also present some mathematical aspects of the software. PMID:8749142

  6. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  7. Integration design of FPGA software for a miniaturizing CCD remote sensing camera

    NASA Astrophysics Data System (ADS)

    Yin, Na; Li, Qiang; Rong, Peng; Lei, Ning; Wan, Min

    2014-09-01

    The video signal processor (VSP) is an important part of a CCD remote sensing camera, and is also key to a camera's lightweight, miniaturized design. FPGAs must be applied to improve the level of integration and simplify the video signal processor circuit. This paper introduces in detail an integrated FPGA software design for the video signal processor of a space remote sensing camera. The design accomplishes CCD timing control, integration time control, CCD data formatting, and CCD image processing and correction on a single FPGA chip, which solves the miniaturization problem for video signal processors in remote sensing cameras. This camera has since been launched successfully and has obtained high-quality remote sensing images, contributing to the miniaturization of remote sensing cameras.

  8. Dynamic light scattering with a CCD camera

    Microsoft Academic Search

    Apollo P. Y. Wong; P. Wiltzius

    1993-01-01

    We have successfully implemented a method to measure intensity autocorrelation functions with a CCD camera and a fast computer. We report light scattering experiments on solutions of diffusing latex spheres in glycerol and compare our results to those obtained with a conventional hardwired electronic correlator. Averaging allows significant reduction of measurement times and makes this technique suitable for the study
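The measurement this record describes, an intensity autocorrelation function taken with an imaging detector, can be sketched compactly. The following is an illustrative multispeckle estimator on synthetic frames (all numbers are assumptions, not from the paper): a pixel-averaged g2(tau) computed directly from a stack of CCD frames.

```python
import numpy as np

# Illustrative sketch (assumed data, not the paper's setup): multispeckle
# estimate of the normalized intensity autocorrelation g2(tau) from a stack
# of CCD frames, averaging over pixels (speckles) as well as time.
rng = np.random.default_rng(0)
frames = rng.poisson(lam=50.0, size=(200, 32, 32)).astype(float)  # (t, y, x)

def g2(frames, max_lag=20):
    """Pixel-averaged normalized intensity autocorrelation g2(tau)."""
    mean_I = frames.mean(axis=0)                     # per-pixel time average
    out = []
    for lag in range(1, max_lag + 1):
        num = (frames[:-lag] * frames[lag:]).mean(axis=0)
        out.append(float((num / mean_I**2).mean()))  # average over speckles
    return np.array(out)

corr = g2(frames)
# For temporally uncorrelated synthetic frames g2 stays close to 1; for
# diffusing spheres it would decay from ~2 (ideal speckle) toward 1.
```

A hardwired correlator computes the same quantity serially for one detector; the CCD trades time resolution for parallel averaging over many speckles, which is the reduction in measurement time the abstract refers to.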

  9. A high-sensitivity CCD camera system for observations of early universe objects

    Microsoft Academic Search

    S. V. Markelov; V. A. Murzin; A. N. Borisenko; N. G. Ivaschenko; I. V. Afanasieva; V. I. Ardilanov

    2000-01-01

    The principles of the construction of a new CCD camera system with improved readout noise and charge measurement accuracy for the 6 m telescope are described. Digital real-time processing of video signals is applied in the system.

  10. Noise model and simulation analysis of the low noise pre-amplifier of CCD camera

    NASA Astrophysics Data System (ADS)

    Chen, Zhi; Qiu, Yuehong; Wen, Yan; Jiang, Baotan; Yao, Dalei

    2014-02-01

    A low-noise video processing chain is an important guarantee of realizing a CCD's detection performance. Because the pre-amplifier is located at the front end of the CCD camera's video signal chain, low-noise pre-amplifier design is in turn an important guarantee of a low-noise CCD camera. Firstly, the bandwidth and equivalent noise bandwidth of the camera's video processing chain are calculated. Secondly, the pre-amplifier circuit is designed and a noise model of the pre-amplifier is built. Thirdly, the noise of the pre-amplifier is calculated from the noise model. Lastly, a noise simulation model of the pre-amplifier is built. Both the theoretical calculation and the PSPICE simulation predict a pre-amplifier noise level of less than 2 electrons, which meets the specification requirement.
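The first step this record lists, the equivalent noise bandwidth of the chain, can be made concrete. A minimal numerical sketch with assumed values (a 10 MHz single-pole chain, 3 nV/sqrt(Hz) amplifier noise, 10 uV per electron conversion gain; none of these numbers are from the paper):

```python
import numpy as np

# Minimal sketch with assumed numbers (not the paper's): equivalent noise
# bandwidth (ENBW) of a first-order low-pass video chain, and the resulting
# input-referred amplifier noise in electrons.
f3db = 10e6                                    # -3 dB bandwidth, Hz (assumed)
f = np.linspace(0.0, 1000.0 * f3db, 2_000_001)
H2 = 1.0 / (1.0 + (f / f3db) ** 2)             # |H(f)|^2 of a single pole

# ENBW = integral of |H|^2 (peak gain 1); analytically (pi/2) * f3db.
enbw = float(np.sum(0.5 * (H2[1:] + H2[:-1]) * np.diff(f)))
assert abs(enbw - np.pi / 2 * f3db) < 2e-3 * (np.pi / 2 * f3db)

e_n = 3e-9         # amplifier white-noise density, V/sqrt(Hz) (assumed)
conv_gain = 10e-6  # CCD conversion gain, V per electron (assumed)
noise_electrons = e_n * np.sqrt(enbw) / conv_gain  # ~1.2 e- for these values
```

With these assumed values the input-referred noise lands near 1.2 electrons, the same order as the sub-2-electron figure quoted in the abstract.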

  11. Typical effects of laser dazzling CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types; the former includes frame-transfer and interline-transfer devices, and the latter includes linear and time-delay-integration devices. All CCDs must perform four primary tasks in generating an image: charge generation, charge collection, charge transfer, and charge measurement. In a camera, a lens delivers the optical signal to the CCD sensor and incorporates techniques for suppressing stray light, while electronic circuits, employing many signal-processing techniques, condition the CCD output. Dazzling effects are the combined result of light-distribution distortion and charge-distribution distortion, which arise from the lens and the sensor respectively. Strictly speaking, the lens does not distort the light distribution: lenses are generally so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make its stray light significant. In the CCD image sensor, a laser can induce very large electron generation; charge-transfer inefficiency and charge blooming then distort the charge distribution. Normally, the largest signal output from the CCD sensor is limited by the capacity of the CCD's collection well and cannot exceed the dynamic range within which the subsequent electronic circuits operate normally, so the signal is not further distorted in the post-processing circuits. However, certain circuit techniques can make some dazzling effects appear differently in the final image.
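The blooming mechanism described here, charge overflowing the collection well, is easy to caricature numerically. The following is a toy 1-D model, not from the paper, with an assumed full-well capacity and no anti-blooming drain: excess charge spills equally into the two neighbouring pixels until no pixel exceeds the well.

```python
import numpy as np

# Toy 1-D blooming model (assumed numbers, not from the paper): a CCD
# column with no anti-blooming drain. Charge above the full-well capacity
# spills equally into the two neighbouring pixels; charge pushed past the
# ends of the column is discarded.
FULL_WELL = 100_000.0  # electrons per pixel (assumed capacity)

def bloom(column):
    col = np.asarray(column, dtype=float).copy()
    for _ in range(10_000):                  # iterate until stable
        excess = np.clip(col - FULL_WELL, 0.0, None)
        if excess.max() == 0.0:
            break
        col -= excess                        # clip each pixel to the well
        col[:-1] += 0.5 * excess[1:]         # spill to the pixel above
        col[1:] += 0.5 * excess[:-1]         # spill to the pixel below
    return col

col = np.full(64, 10_000.0)
col[32] = 1_000_000.0                        # a dazzling laser spot
out = bloom(col)
# The saturated pixel is clipped to full well and its excess charge streaks
# outward along the column: the familiar blooming stripe.
```

An anti-blooming drain removes this excess instead of letting it spill, which is why the interline device in record 3 above dedicates pixel area to a barrier and drain.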

  12. X-ray-sensitive CCD camera

    Microsoft Academic Search

    Andrew A. Krasnjuk; Vladimir J. Stenin; Sergey V. Larionov; Victor A. Shilin; Alexander A. Utenkov

    1999-01-01

    This paper describes the key features and performance data of a 1040(H) X 1160(V) pixels full-frame transfer CCD camera for use as an X-ray detector in X-ray material structure analysis and X-ray fluorescence for the in-situ detection of metals. To achieve good sensitivity at energies below 50 keV we have developed compact units based on VLSI programmable logic devices and

  13. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  14. Observation of capillary flow in human skin during tissue compression using CCD video-microscopy

    Microsoft Academic Search

    Masahiro Shibata; Takehiro Yamakoshi; Ken-ichi Yamakoshi; Takashi Komeda

    2010-01-01

    Recent technological advances in CCD video cameras have made microscopes more compact and greatly improved their sensitivity. We designed a new compact capillaroscope composed of a CCD video-probe equipped with a contact-type objective lens and an illuminator. In the present study, we evaluated the usefulness of the instrument for bed-side human capillaroscopy to observe the capillary flow in various dermal ...

  15. Colorimetric calibration of CCD cameras for self-luminous images

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Chen, Yung-Chang

    1998-06-01

    Reproducing colors with rich saturation from illuminating objects is usually recognized as an essential issue for CCD camera imaging. In this paper, we propose a colorimetric calibration scheme for CCD cameras imaging self-luminous scenes, and devise an efficient algorithm for generating highly saturated color stimuli to investigate the CCD camera's color reproduction performance. In this scheme, a set of color samples containing highly saturated colors is generated on an advanced CRT as color stimuli for colorimetric characterization. To demonstrate the effectiveness of the algorithm, a realization of color samples uniformly distributed in CIELAB is presented for illustration.
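At its simplest, the characterization step such a scheme relies on is a regression from camera responses to tristimulus values. A minimal sketch (synthetic data, not the paper's scheme) fitting a 3x3 matrix from RGB to CIE XYZ by least squares:

```python
import numpy as np

# Minimal sketch (synthetic data, not the paper's method): colorimetric
# characterization as a linear least-squares fit of a 3x3 matrix M mapping
# camera RGB to CIE XYZ, trained on patches with known tristimulus values.
rng = np.random.default_rng(1)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])      # assumed "true" device matrix

rgb = rng.uniform(0.0, 1.0, size=(24, 3))    # 24 training patches
xyz = rgb @ M_true.T + rng.normal(0.0, 1e-4, size=(24, 3))  # measured XYZ

# Solve xyz ~ rgb @ M.T in the least-squares sense.
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T

xyz_pred = rgb @ M_fit.T
rms_err = float(np.sqrt(np.mean((xyz_pred - xyz) ** 2)))
```

Real characterizations usually add a nonlinearity (gamma) stage before the matrix and weight the fit in a perceptually uniform space such as CIELAB, which is where highly saturated training samples matter most.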

  16. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

    A high-frame-rate, optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed, and several prototypes have been fabricated and are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers as ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking, achieving a 4 kHz frame rate for 512 X 512 pixels through simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel-plate intensifier (MCPII), redesigned for high-speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with bi-directional vertical readout architecture. The camera mainframe is designed around a multilayer motherboard that transports CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.
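The quoted readout numbers are easy to check. Ignoring vertical-transfer and blanking overhead (so this is an upper bound, which is why it comes out slightly above the stated 4 kHz):

```python
# Back-of-envelope check of the quoted readout numbers: 512 x 512 pixels
# split across 16 parallel ports, each clocked at 75 MHz. Overheads
# (vertical transfers, blanking) are ignored, so this is an upper bound.
pixels = 512 * 512
ports = 16
pixel_clock = 75e6                    # Hz per port

pixels_per_port = pixels // ports     # 16384 pixels through each port
readout_time = pixels_per_port / pixel_clock
max_frame_rate = 1.0 / readout_time   # ~4.6 kHz upper bound, consistent
                                      # with the quoted 4 kHz once overhead
                                      # is included
```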

  17. Influence of storage media on the accuracy and repeatability of photogrammetric measurements using CCD cameras

    Microsoft Academic Search

    Mark R. Shortis; Walter L. Snow; Brooks A. Childers; William K. Goad

    1993-01-01

    A clear advantage of digital photogrammetric measurement over other, more conventional techniques is the fast sample rate of the data acquisition. CCD cameras and video systems can be used very effectively to analyze dynamic objects or cases of rapid deformation. However, long sequences of images can introduce the penalty of large volumes of digital data, which may not be able ...

  18. The influence of storage media on the accuracy and repeatability of photogrammetric measurements using CCD cameras

    Microsoft Academic Search

    Mark R. Shortis; Walt L. Snow; Brooks A. Childers; William K. Goad

    A clear advantage of digital photogrammetric measurement over other, more conventional techniques is the fast sample rate of the data acquisition. CCD cameras and video systems can be used very effectively to analyse dynamic objects or cases of rapid deformation. However, long sequences of images can introduce the penalty of large volumes of digital data, which may not be able

  19. Video Cameras on School Buses.

    ERIC Educational Resources Information Center

    Fields, Lynette J.

    1998-01-01

    Because student misbehavior on school buses can endanger the driver, other students, motorists, and pedestrians, schools are considering technological solutions, such as mounted video cameras. The cameras deter misbehavior and help administrators detect inappropriate activities and determine appropriate action. Program implementation is…

  20. Multi-camera video surveillance

    Microsoft Academic Search

    Tim Ellis

    2002-01-01

    This paper describes the development of a multi-view video surveillance system and the algorithms to detect and track objects (generally low densities of pedestrians, cyclists and motor vehicles) moving through an outdoor environment imaged by a network of video surveillance cameras. The system is designed to adapt to the widely varying illumination conditions present in such outdoor scenes, as well as ...

  1. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  2. CCD camera readout system developments for HEP experiments at CERN

    Microsoft Academic Search

    E. Falk; J. Feyt; B. Friend; C. Mommaert; S. Reynaud; G. Stefanini

    1991-01-01

    Two current developments in charge coupled device (CCD) camera readout for HEP detectors are reported. The first one, to be used in a fixed target experiment, can be configured for a megapixel device (frame transfer, single field). It features a detachable sensor head, with high-speed readout via multiple ports, a controller, and a data compactor implemented in VMEbus. The camera

  3. Feasibility study of CCD-based gamma camera

    NASA Astrophysics Data System (ADS)

    Lee, Hakjae; Jeong, Young-Jun; Yoon, Joochul; Kang, Jungwon; Lee, Sangjoon; Shin, Hyungsup; Lee, Kisung

    2010-08-01

    Conventional gamma cameras, which use photomultiplier tubes (PMTs), are heavy, bulky, and expensive. In addition, their spatial resolution is low because of the geometrical limitations of PMTs. This low resolution and large size are inefficient for small-animal imaging systems, which are useful in preclinical imaging applications. We have developed a small but high-spatial-resolution gamma-ray detector, based on a charge-coupled device (CCD), suitable for developing a prototype small-animal gamma camera. Recently the sensitivity of CCDs has improved, and Peltier cooling helps to minimize CCD dark current significantly. The enhanced sensitivity and high intrinsic resolution of CCDs enable researchers to develop small gamma cameras at low cost. In this study we used a Peltier-cooled CCD sensor with about 70% quantum efficiency at 650 nm wavelength. A CsI(Tl) scintillator was used to convert the gamma rays to visible light, and the light photons from the scintillator were collected onto the CCD surface by a Nikorr macro lens to enhance the collection efficiency. The experimental results showed that the proposed CCD-based detection system is feasible for gamma-ray detection.

  4. Detonation phenomena observed with a CCD camera

    Microsoft Academic Search

    Manfred Held

    1995-01-01

    With an appropriate test set up, the Hadland Photonics Ballistic Range Camera (SVR), designed primarily for exterior and terminal ballistics, can also be used very well for studying initiation events and analyzing a variety of detonation phenomena. This paper explains in detail the test set up of one interesting detonic experiment, observed with the Ballistic Range Camera, and the analysis

  5. Structural and thermal modeling of a cooled CCD camera

    NASA Astrophysics Data System (ADS)

    Ahmad, Anees; Arndt, Thomas D.; Gross, Robert; Hahn, Mark; Panasiti, Mark

    2001-11-01

    This paper presents structural and thermal modeling of a high-performance CCD camera designed to operate under severe environments. Minimizing the dark current noise required the CCD to be maintained at low temperature while the camera operated in a 70 degrees C environment. A thermoelectric cooler (TEC) was selected due to its simplicity, and relatively low cost. Minimizing the thermal parasitic loads due to conduction and convection, and maximizing the heat sink performance was critical in this design. The critical structural features of this camera are the CCD leads and the bond joint that holds the CCD in alignment relative to the lens. The CCD leads are susceptible to fatigue failure when subjected to random vibrations for an extended period of time. This paper outlines the methods used to model and analyze the CCD leads for fatigue, the supportive vibration testing performed and the steps taken to correct for structural inadequacies found in the original design. The key results of all this thermal and structural modeling and testing are presented.

  6. ULTRAHIGH-SPEED, HIGH-SENSITIVITY, PORTABLE CCD COLOR CAMERA

    Microsoft Academic Search

    H. Ohtake; K. Kitamura; T. Arai; J. Yonai; T. Hayashida; T. Kurita; K. Tanioka; H. Maruyama; Y. Mita; J. Namiki; T. Yanagi; T. Yoshida; H. van Kuijk; Jan T. Bosiers; T. Goji Etoh

    We have been developing ultrahigh-speed, high-sensitivity broadcast cameras that are capable of capturing clear, smooth, slow-motion video even in conditions with limited lighting, such as at professional baseball games played at night. In 2003, we developed the first broadcast color camera using three 80,000-pixel ultrahigh-speed, high-sensitivity charge-coupled devices (CCDs). This camera is capable of ultrahigh-speed video recording at up ...

  7. Video camera use at nuclear power plants

    SciTech Connect

    Estabrook, M.L.; Langan, M.O.; Owen, D.E. (ENCORE Technical Resources, Inc., Middletown, PA (USA))

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs.

  8. Detonation phenomena observed with a CCD camera

    NASA Astrophysics Data System (ADS)

    Held, Manfred

    1995-05-01

    With an appropriate test set up, the Hadland Photonics Ballistic Range Camera (SVR), designed primarily for exterior and terminal ballistics, can also be used very well for studying initiation events and analyzing a variety of detonation phenomena. This paper explains in detail the test set up of one interesting detonic experiment, observed with the Ballistic Range Camera, and the analysis of the results. The ability of the camera to superimpose up to 16 exposures on a single image allowed particularly detailed examination of the detonation propagation, the detonation velocities, the corner turning distance and the nonreacting radial zones.

  9. Single line CCD camera to improve positioning.

    PubMed

    Nyssen, M; Vounckx, R; Cornelis, J

    1978-01-01

    This article describes an opto-electronic device that allows on-line visualization of the optical intensity function along a strip. The use of a linear optical charge-coupled device (CCD) array eliminates all moving parts. This makes adaptation to a standard microscope possible without modifications. The setup described here is essential where precise positioning of a reference graticule on diffuse lines must be performed. PMID:18698940

  10. A CCD Camera-based Hyperspectral Imaging System for Stationary and Airborne Applications

    Microsoft Academic Search

    Chenghai Yang; James H. Everitt; Michael R. Davis; Chengye Mao

    2003-01-01

    This paper describes a CCD (charge coupled device) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC computer equipped with a frame grabbing board and camera utility software. The CCD camera provides 1280(h) × 1024(v) ...

  11. High frame rate CCD cameras with fast optical shutters for military and medical imaging applications

    SciTech Connect

    King, N.S.P.; Albright, K.; Jaramillo, S.A.; McDonald, T.E.; Yates, G.J. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley Lab., CA (United States)

    1994-09-01

    Los Alamos National Laboratory has designed and prototyped high-frame-rate intensified/shuttered charge-coupled-device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an interline-transfer CCD, the Loral Fairchild CCD-222 with 244 × 380 pixels, operated at pixel rates approaching 100 MHz. Initial prototype designs demonstrated single-port serial readout rates exceeding 3.97 kHz with greater than 5 lp/mm spatial resolution at shutter speeds as short as 5 ns. Readout was achieved by using a truncated format of 128 × 128 pixels, obtained by partial masking of the CCD, and then subclocking the array at approximately a 65 MHz pixel rate. Shuttering was accomplished with a proximity-focused microchannel plate (MCP) image intensifier (MCPII) that incorporated a high-strip-current MCP and a design modification for high-speed stripline gating geometry, providing both fast shuttering and high repetition rate capabilities. Later camera designs use a close-packed quadruple-head geometry fabricated from an array of four separate CCDs (a pseudo 4-port device). This design provides four video outputs with optional parallel or time-phased sequential readout modes. The quad-head format was designed with flexibility for coupling to various image intensifier configurations, including individual intensifiers for each CCD imager; a single intensifier with fiber-optic or lens/prism-coupled fanout of the input image to be shared by the four CCD imagers; or the large-diameter phosphor screen of a gateable framing-type intensifier for time-sequential relaying of a complete new input image to each CCD imager. Camera designs and their potential use in ongoing military and medical time-resolved imaging applications are discussed.
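The single-port figures quoted above are mutually consistent, as a quick check shows: 128 x 128 pixels clocked at about 65 MHz gives almost exactly the stated 3.97 kHz.

```python
# Sanity check of the quoted single-port readout figures (overhead ignored):
pixels = 128 * 128                 # truncated 128 x 128 format, one port
pixel_clock = 65e6                 # ~65 MHz subclocked pixel rate
frame_rate = pixel_clock / pixels  # ~3967 Hz, matching the quoted 3.97 kHz
```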

  12. VME image acquisition and processing using standard TV CCD cameras

    Microsoft Academic Search

    F. Epaud; P. Verdier

    1994-01-01

    The ESRF has released the first version of a low-cost image acquisition and processing system based on an industrial VME board and commercial CCD TV cameras. The images from standard CCIR (625 lines) or EIA (525 lines) inputs are digitised with 8-bit dynamic range and stored in a general-purpose frame buffer to be processed by the embedded firmware. They ...

  13. Large Format, Dual Head,Triple Sensor, Self-Guiding CCD Cameras

    E-print Network

    Walter, Frederick M.

    STL-1001E Large Format, Dual Head, Triple Sensor, Self-Guiding CCD Cameras. In our patented design, one guiding CCD is located next to the imaging CCD, similar to the main imaging CCD. Our goal was to produce a high-performance imaging camera in the Research Line at a lower price than other previously available cameras. ...

  14. Low-noise video amplifiers for imaging CCD's

    NASA Technical Reports Server (NTRS)

    Scinicariello, F.

    1976-01-01

    Various techniques were developed which enable the CCD (charge coupled device) imaging array user to obtain optimum performance from the device. A CCD video channel was described, and detector-preamplifier interface requirements were examined. A noise model for the system was discussed at length and laboratory data presented and compared to predicted results.

  15. CCD camera response to diffraction patterns simulating particle images.

    PubMed

    Stanislas, M; Abdelsalam, D G; Coudert, S

    2013-07-01

    We present a statistical study of CCD (or CMOS) camera response to small images. Diffraction patterns simulating particle images of a size around 2-3 pixels were experimentally generated and characterized using three-point Gaussian peak fitting, currently used in particle image velocimetry (PIV) for accurate location estimation. Based on this peak-fitting technique, the bias and RMS error between the locations of the simulated and real images were accurately calculated using an in-house program. The influence of the intensity variation of the simulated particle images on the response of the CCD camera was studied. The experimental results show that the accuracy of the position determination is very good, which is of interest for superresolution PIV algorithms. Some directions for enlarging and improving the study are proposed in the conclusion. PMID:23842270
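The three-point Gaussian peak fit this record relies on has a closed form: the subpixel offset follows from the log-intensities of the peak pixel and its two neighbours. A self-contained sketch on a synthetic particle image (the 2-D case applies the same fit along each axis):

```python
import numpy as np

# Sketch of the standard three-point Gaussian peak fit used in PIV: the
# subpixel peak location from the log-intensities of the peak pixel and
# its two neighbours (1-D case).
def gauss3pt(i_m, i_0, i_p):
    """Subpixel offset of a Gaussian peak from the centre pixel."""
    lm, l0, lp = np.log(i_m), np.log(i_0), np.log(i_p)
    return (lm - lp) / (2.0 * lm - 4.0 * l0 + 2.0 * lp)

# Synthetic 2-3 pixel wide Gaussian particle image centred at x0 = 10.3
x = np.arange(21)
x0, sigma = 10.3, 1.0
img = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

k = int(np.argmax(img))                 # integer peak pixel
est = k + gauss3pt(img[k - 1], img[k], img[k + 1])
# For a noise-free sampled Gaussian the estimator is exact: est == 10.3
```

Exactness for a noise-free sampled Gaussian is why this fit is the standard choice for 2-3 pixel particle images; the paper's bias and RMS figures measure how real sensor response degrades it.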

  16. High frame rate CCD camera with fast optical shutter

    SciTech Connect

    Yates, G.J.; McDonald, T.E. Jr. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley National Lab., CA (United States)

    1998-09-01

    A high-frame-rate CCD camera coupled with a fast optical shutter has been designed for high-repetition-rate imaging applications. The design uses state-of-the-art microchannel-plate image intensifier (MCPII) technology fostered and developed by Los Alamos National Laboratory to support nuclear, military, and medical research requiring high-speed imagery. Key design features include asynchronous resetting of the camera to acquire random transient images; patented real-time analog signal processing with 10-bit digitization at 40-75 MHz pixel rates; synchronized shutter exposures as short as 200 ps; and sustained continuous readout of 512 x 512 pixels per frame at 1-5 Hz rates via parallel multiport (16-port CCD) data transfer. Salient characterization/performance test data for the prototype camera are presented, and temporally and spatially resolved images obtained from range-gated LADAR field testing are included. An alternative system configuration using several cameras sequenced to deliver discrete numbers of consecutive frames at effective burst rates up to 5 GHz (accomplished by time-phasing of consecutive MCPII shutter gates without overlap) is discussed. Potential applications, including dynamic radiography and optical correlation, will be presented.

  17. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications. PMID:19037348

  18. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    Microsoft Academic Search

    Andrew Y. Cheng; C. Y. Pau

    1996-01-01

    A high-resolution computer-controlled CCD image capturing system has been developed using a 12-bit, 1024 x 1024 pixel CCD camera and motorized RGB filters to capture images with color depth up to 36 bits. The filters distinguish the major components of color and collect them separately, while the CCD camera maintains the spatial resolution and detector fill factor.

  19. The U. H. Institute for Astronomy CCD Camera Control System

    NASA Astrophysics Data System (ADS)

    Jim, K. T. C.; Yamada, H. T.; Luppino, G. A.; Hlivak, R. J.

    1993-01-01

    The U. H. CCD Camera Control System consists of a NeXT workstation, a graphical user interface, and a fiber optics interface which is connected to a San Diego State University CCD controller. The U. H. system employs the NeXT-resident Motorola DSP 56001 as a real-time hardware controller interfaced to the Mach-based UNIX of the NeXT workstation by DMA. Since the SDSU controller also uses the DSP 56001, the NeXT is used as a development platform for the embedded control software. The fiber optic interface links the two DSP 56001s through their Synchronous Serial Interfaces. The user interface is based on the NeXTStep windowing system. It is easy to use and features real-time display of image data and control over all camera functions. Both Loral and Tektronix 2048 x 2048 CCDs have been driven at full readout speeds, and the system is designed to read out four such CCDs simultaneously. The total hardware package is compact and portable, and has been used on five different telescopes on Mauna Kea. The complete CCD control system can be assembled for a very low cost. The hardware and software of the control system have proven to be reliable, well adapted to the needs of astronomers, and extensible to increasingly complicated control requirements.

  20. RDS and IRDS filters for high-speed CCD video sensors

    Microsoft Academic Search

    M. Jung; Y. Reibel; B. Cunin; C. Draman

    2000-01-01

    In this brief, we present two filters called “reflection delayed noise suppression” (RDS) and “integration reflection delayed noise suppression” (IRDS) which are useful for increasing the dynamic range of video charge-coupled device (CCD) cameras. The RDS is a band-pass filter built with a parallel short-circuited line. It significantly lowers the most important noise, called reset noise. Unfortunately, this signal processing unit increases

  1. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (inventors)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.
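
    A minimal discrete-time sketch of such a two-loop level control is below; the loop gains, ranges, and multiplicative update rule are illustrative assumptions, not the patented circuit:

```python
def clamp(v, lo, hi):
    return min(max(v, lo), hi)

def normalize_video(luminance, target=1.0, steps=200,
                    iris_range=(0.1, 1.0), gain_range=(1.0, 8.0), k=0.25):
    """Toy model of a two-loop video level control: an iris loop (first
    illumination range) and an AGC loop (second illumination range) both
    nudge the output toward `target`, each saturating at its own limits.
    Illustrative only -- not the patented circuit."""
    iris, gain, out = 1.0, 1.0, 0.0
    for _ in range(steps):
        out = luminance * iris * gain
        ratio = target / out
        iris = clamp(iris * ratio ** k, *iris_range)  # iris feedback loop
        gain = clamp(gain * ratio ** k, *gain_range)  # AGC trims the rest
    return out, iris, gain
```

    In a bright scene the iris closes toward its lower limit; in a dim scene the iris saturates fully open and the gain loop takes over, mirroring the two illumination ranges described above.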

  2. Ultracam - AN Ultra-Fast Triple-Beam CCD Camera

    NASA Astrophysics Data System (ADS)

    Dhillon, Vikram S.; Marsh, Tom R.; Watson, Chris A.; Ultracam Team

    ULTRACAM is a high-speed three-colour CCD camera designed to provide imaging photometry at high temporal resolutions. The instrument is highly portable and will be used at a number of large telescopes around the world. ULTRACAM was successfully commissioned on the 4.2-m William Herschel Telescope on La Palma on 16 May 2002, over 3 months ahead of schedule and within budget. The instrument was funded by PPARC and designed and built by a consortium involving the Universities of Sheffield and Southampton and the UK ATC, Edinburgh. We present an overview of the design and performance characteristics of ULTRACAM and highlight some of its most recent scientific results.

  3. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Microsoft Academic Search

    K. Kitamura; T. Arai; J. Yonai; T. Hayashida; H. Ohtake; T. Kurita; K. Tanioka; H. Maruyama; J. Namiki; T. Yanagi; T. Yoshida; H. van Kuijk; Jan T. Bosiers; T. G. Etoh

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed

  4. X-ray sensitive video camera

    Microsoft Academic Search

    Randy Luhta; John A. Rowlands

    1993-01-01

    By converting the absorbed X-ray image directly to an electrical video signal, the x-ray sensitive video camera offers improved resolution and reduced veiling glare over a conventional x-ray image intensifier for medical fluoroscopy. Unfortunately, currently available x-ray sensitive video cameras are limited to a 1' field of view and poor quantum efficiency. We are developing an x-ray sensitive vidicon for

  5. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
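
    The polynomial-interpolation step can be illustrated with a small sketch; the second-order model and the synthetic distortion coefficients here are assumptions for demonstration, not the paper's measured distortions:

```python
import numpy as np

def poly_terms(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_distortion(xd, yd, xt, yt):
    """Least-squares fit of second-order 2-D polynomials mapping measured
    (distorted) control-point coordinates to their true positions."""
    A = poly_terms(xd, yd)
    cx = np.linalg.lstsq(A, xt, rcond=None)[0]
    cy = np.linalg.lstsq(A, yt, rcond=None)[0]
    return cx, cy

def undistort(cx, cy, xd, yd):
    A = poly_terms(xd, yd)
    return A @ cx, A @ cy

# Synthetic calibration: measured grid coordinates and an assumed quadratic
# distortion giving the corresponding true positions
gx, gy = np.meshgrid(np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
xd, yd = gx.ravel(), gy.ravel()
xt = xd + 0.02 * xd ** 2 - 0.01 * xd * yd
yt = yd - 0.015 * yd ** 2 + 0.01 * xd * yd

cx, cy = fit_distortion(xd, yd, xt, yt)
xu, yu = undistort(cx, cy, xd, yd)
max_err = float(max(np.max(np.abs(xu - xt)), np.max(np.abs(yu - yt))))
```

    Bilinear interpolation corresponds to keeping only the first four terms; the plumb-line correction for optical lens distortion is a separate, subsequent step.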

  6. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  7. The CCD imager electronics for the Mars pathfinder and Mars surveyor cameras

    Microsoft Academic Search

    J. Rainer Kramm; Nicolas Thomas; H. Uwe Keller; Peter H. Smith

    1998-01-01

    The Mars Pathfinder stereo camera and both cameras on the Mars Surveyor lander use CCD detectors for image acquisition. The frame-transfer-type CCDs were produced by Loral for space applications under contract from MPAE. A detector consists of two sections of 256 lines and 512 columns each. Pixels in the image section contain an anti-blooming structure to remove excessive

  8. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  9. Design and implementation of timing generator of frame transfer area-array CCD camera

    NASA Astrophysics Data System (ADS)

    Zhou, Jian-kang; Chen, Xin-hua; Zhou, Wang; Shen, Wei-min

    2008-03-01

    Frame transfer area-array CCD cameras are the perfect solution for high-end real-time medical, scientific and industrial applications because they offer high fill factor, low dark current, high resolving power, high sensitivity, high linear dynamic range and electronic shutter capability. The timing of a frame transfer area-array CCD camera comprises two closely coupled segments: the CCD driving sequences and the CCD signal-processing sequences. Proper operation of the CCD sensor relies on good driving sequences, while accurate signal-processing sequences ensure high image quality. The relationships among the camera's timing signals are complex and must be precise, and conventional methods make the timing of a frame transfer area-array CCD difficult to implement. This paper introduces an embedded design method, with a field programmable gate array device chosen as the hardware platform. Phase-locked loops are used for precise phase shifting and an embedded logic analyzer for waveform verification. The CCD driving clocks, electronic shutter signal, A/D and black-pixel clamp clocks, and correlated double sampling clocks were implemented on this platform, and the timing generator controls exposure time flexibly. High-quality images have been acquired using this timing generator on the CCD circuit board designed by our team.
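
    As a behavioral illustration of multiphase clock generation (a software model only, not the FPGA implementation), evenly phase-shifted drive clocks can be sketched as:

```python
def clock_wave(n_samples, period, phase, duty=0.5):
    """Sampled square wave: high while (t - phase) mod period < duty * period."""
    return [1 if ((t - phase) % period) < duty * period else 0
            for t in range(n_samples)]

# Three vertical-transfer drive clocks spaced 120 degrees apart (period/3)
period = 12
v1 = clock_wave(48, period, 0)
v2 = clock_wave(48, period, 4)
v3 = clock_wave(48, period, 8)
```

    In the actual design the phase shifts come from the FPGA's phase-locked loops rather than software sampling, but the phase relationships among the drive clocks are the same.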

  10. NIR spectrophotometric system based on a conventional CCD camera

    NASA Astrophysics Data System (ADS)

    Vilaseca, Meritxell; Pujol, Jaume; Arjona, Montserrat

    2003-05-01

    The near infrared spectral region (NIR) is useful in many applications. These include agriculture, the food and chemical industry, and textile and medical applications. In this region, spectral reflectance measurements are currently made with conventional spectrophotometers. These instruments are expensive since they use a diffraction grating to obtain monochromatic light. In this work, we present a multispectral imaging based technique for obtaining the reflectance spectra of samples in the NIR region (800 - 1000 nm), using a small number of measurements taken through different channels of a conventional CCD camera. We used methods based on the Wiener estimation, non-linear methods and principal component analysis (PCA) to reconstruct the spectral reflectance. We also analyzed, by numerical simulation, the number and shape of the filters that need to be used in order to obtain good spectral reconstructions. We obtained the reflectance spectra of a set of 30 spectral curves using a minimum of 2 and a maximum of 6 filters under the influence of two different halogen lamps with color temperatures Tc1 = 2852 K and Tc2 = 3371 K. The results obtained show that using between three and five filters with a large spectral bandwidth (FWHM = 60 nm), the reconstructed spectral reflectance of the samples was very similar to that of the original spectrum. The small errors in the spectral reconstruction show the potential of this method for reconstructing spectral reflectances in the NIR range.
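
    The linear reconstruction step can be sketched as follows; the filter shapes, training spectra and dimensions are invented stand-ins, and a pseudoinverse-trained linear estimator is used in place of the paper's exact Wiener and PCA estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_train, n_filters = 31, 60, 4          # wavelength samples, training set
wl = np.arange(n_wl)

# Smooth training spectra: random positive mixtures of three broad curves
basis = np.exp(-(wl[None, :] - np.array([6.0, 15.0, 24.0])[:, None]) ** 2
               / (2.0 * 5.0 ** 2))
train_refl = rng.uniform(0.1, 1.0, size=(n_train, 3)) @ basis

# Hypothetical broadband channel sensitivities (CCD + filters, wide FWHM)
centres = np.linspace(5.0, 26.0, n_filters)
S = np.exp(-(wl[None, :] - centres[:, None]) ** 2 / (2.0 * 4.0 ** 2))

responses = train_refl @ S.T                   # simulated camera responses

# Linear estimator learned from the training set (Wiener-style reconstruction)
W = train_refl.T @ np.linalg.pinv(responses.T)

test_refl = rng.uniform(0.1, 1.0, size=3) @ basis
recovered = W @ (S @ test_refl)
err = float(np.max(np.abs(recovered - test_refl)))
```

    Because the synthetic spectra are smooth and low-dimensional, a few broadband channels suffice for near-exact recovery, mirroring the paper's finding that three to five wide filters perform well.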

  11. The photometrical system and positional accuracy of the CCD camera ST7 of Lisnyki Observational Station

    Microsoft Academic Search

    V. V. Kleshchonok; M. T. Pogoreltsev; V. M. Andruk; I. V. Lukyanyk

    2005-01-01

    The results of testing the CCD ST7 camera are reported. The photometric system and positional accuracy were determined by processing observations of the open cluster Stock 1 made with the AZT-8 telescope of the Lisnyki Observational Station.

  12. Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras

    SciTech Connect

    Yates, G.J. (Los Alamos National Lab., NM (United States)); Smith, G.W. (Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment); Zagarino, P.; Thomas, M.C. (EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations)

    1991-01-01

    The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4--12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with ≥1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.
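
    A toy version of separating localized neutron "peaks" from the smooth photon-induced pedestal might look like this; the threshold rule and synthetic frame are illustrative assumptions, not the calibration described above:

```python
import numpy as np

def classify_charge(frame, k_sigma=6.0):
    """Split a frame's radiation-induced charge into a smooth pedestal
    (gamma/x-ray-like, uniform) and localized 'peaks' (neutron-like point
    deposits). Illustrative thresholding only, not the authors' method."""
    pedestal = float(np.median(frame))
    noise = 1.4826 * float(np.median(np.abs(frame - pedestal)))  # robust sigma
    mask = frame > pedestal + k_sigma * noise
    peak_volume = float((frame[mask] - pedestal).sum())  # charge in the peaks
    return pedestal, int(mask.sum()), peak_volume

# Synthetic frame: uniform photon-induced pedestal plus 10 neutron-like spikes
rng = np.random.default_rng(2)
frame = rng.normal(100.0, 1.0, size=(100, 100))
for i in range(10):
    frame[10 * i + 5, 7] += 50.0

pedestal, n_peaks, volume = classify_charge(frame)
```

    Binning the detected peaks by area and amplitude, as the authors do, would then give a fluence estimate once calibrated against known exposures.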

  13. Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras

    SciTech Connect

    Yates, G.J. [Los Alamos National Lab., NM (United States); Smith, G.W. [Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment; Zagarino, P.; Thomas, M.C. [EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations

    1991-12-01

    The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4--12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with ≥1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.

  14. The digital video camera LSI system

    Microsoft Academic Search

    M. Kobayashi; T. Kuwajima; H. Nikoh; K. Ohsawa; Y. Kitano; M. Fujiike; T. Shimizu; K. Kanno

    1989-01-01

    The authors have obtained the first digital LSI solution for a consumer video camera system. This paper describes the features of two LSIs, which constitute the main portion of the system and achieve significant advantages in digital processing and adaptability to computer control.

  15. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    There are many methods of acquiring multispectral images, but a dynamic band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame and an electric circuit for parallel image-signal processing. In addition, firmware and application software were developed. Notable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the ray path inside the prism when redesigning the image splitter; the dimensions of the splitter were then determined by simulation, with BK7 glass and a non-dichroic coating chosen so that rays of all wavelengths reach all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without changing the position of the image sensors on the film plane. The developed 3CCD camera was evaluated for detection of scab and bruise on Fuji apples. The results show that the filter-exchangeable 3CCD camera provides useful functionality for multispectral applications that require exchanging bandpass filters.

  16. Video Chat with Multiple Cameras John MacCormick

    E-print Network

    MacCormick, John

    Video Chat with Multiple Cameras. John MacCormick, Dickinson College Technical Report, March 2012. Abstract: The dominant paradigm for video chat employs a single camera at each end of the conversation. Benchmark experiments employing up to four webcams simultaneously demonstrate that multi-camera video chat

  17. Automated CCD camera characterization. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics: Student research reports

    SciTech Connect

    Silbermann, J. [Penfield High School, NY (United States)

    1999-03-01

    The OMEGA system uses CCD cameras for a broad range of applications. Over 100 video rate CCD cameras are used for such purposes as targeting, aligning, and monitoring areas such as the target chamber, laser bay, and viewing gallery. There are approximately 14 scientific grade CCD cameras on the system which are used to obtain precise photometric results from the laser beam as well as target diagnostics. It is very important that these scientific grade CCDs are properly characterized so that the results received from them can be evaluated appropriately. Currently characterization is a tedious process done by hand. The operator must manually operate the camera and light source simultaneously. Because more exposures mean more accurate information on the camera, the characterization tests can become very lengthy affairs. Sometimes it takes an entire day to complete just a single plot. Characterization requires the testing of many aspects of the camera's operation. Such aspects include the following: variance vs. mean signal level--this should be proportional due to Poisson statistics of the incident photon flux; linearity--the ability of the CCD to produce signals proportional to the light it received; signal-to-noise ratio--the relative magnitude of the signal vs. the uncertainty in that signal; dark current--the amount of noise due to thermal generation of electrons (cooling lowers this noise contribution significantly). These tests, as well as many others, must be conducted in order to properly understand a CCD camera. The goal of this project was to construct an apparatus that could characterize a camera automatically.
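
    The variance-versus-mean (photon transfer) test described above can be automated along these lines; the gain value and frame sizes are simulation assumptions, not OMEGA camera data:

```python
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0                  # assumed electrons per ADU for the simulation

means, variances = [], []
for photons in (200, 500, 1000, 2000, 5000):
    # Flat-field exposure with Poisson photon statistics, converted to ADU
    frame = rng.poisson(photons, size=100_000) / true_gain
    means.append(frame.mean())
    variances.append(frame.var())

# Poisson statistics: var(ADU) = mean(ADU) / gain, so the slope is 1/gain
slope, _ = np.polyfit(means, variances, 1)
estimated_gain = 1.0 / slope
```

    An automated rig steps the light source, grabs the exposures, and fits this line, replacing the day-long manual procedure with a scripted sweep.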

  18. Characterization of a CCD-camera-based system for measurement of the solar radial energy distribution

    NASA Astrophysics Data System (ADS)

    Gambardella, A.; Galleano, R.

    2011-10-01

    Charge-coupled device (CCD)-camera-based measurement systems offer the possibility to gather information on the solar radial energy distribution (sunshape). Sunshape measurements are very useful in designing high concentration photovoltaic systems and heliostats as they collect light only within a narrow field of view, the dimension of which has to be defined in the context of several different system design parameters. However, in this regard the CCD camera response needs to be adequately characterized. In this paper, uncertainty components for optical and other CCD-specific sources have been evaluated using indoor test procedures. We have considered CCD linearity and background noise, blooming, lens aberration, exposure time linearity and quantization error. Uncertainty calculation showed that a 0.94% (k = 2) combined expanded uncertainty on the solar radial energy distribution can be assumed.
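
    Combining such uncertainty components follows the usual root-sum-square rule; the component values below are placeholders for illustration, not the paper's budget (which totals 0.94% at k = 2):

```python
import math

# Hypothetical standard uncertainties (%, k = 1) for each CCD-specific source;
# these numbers are invented, not taken from the paper
components = {
    "linearity_and_background": 0.30,
    "blooming": 0.15,
    "lens_aberration": 0.20,
    "exposure_time_linearity": 0.10,
    "quantization": 0.05,
}

# Combined standard uncertainty: root-sum-square of independent components
combined_std = math.sqrt(sum(u ** 2 for u in components.values()))
expanded_k2 = 2.0 * combined_std   # expanded uncertainty at coverage factor k = 2
```

    Each indoor test characterizes one component; the quadrature sum then gives the expanded uncertainty quoted for the sunshape measurement.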

  19. In flight MTF monitoring and compensation for CCD camera on CBERS-02

    Microsoft Academic Search

    Xingfa Gu; Xiaoying Li; Xiangjun Min; Tao Yu; Jijuan Sun; Yong Zeng; Hua Xu; Ding Guo

    2005-01-01

    In this article, the approach of simulating an ideal tarp scene was proposed to determine the MTF for the CCD camera on CBERS-02. The MTF acquired from this technique was compared to those from some common methods. MTFs achieved from different approaches were employed to compensate the CCD images based on three restoration algorithms: the iterative method, the Wiener filter and the

  20. Characteristics relevant to portal dosimetry of a cooled CCD camera-based EPID.

    PubMed

    Franken, E M; de Boer, J C J; Barnhoorn, J C; Heijmen, B J M

    2004-09-01

    Our EPIDs have recently been equipped with Peltier-cooled CCD cameras. The CCD cooling dramatically reduced the deteriorating effects of radiation damage on image quality. Over more than 600 days of clinical operation, the radiation-induced noise contribution has remained stable at a very low level (1 SD ≤ 0.15% of the camera dynamic range), in marked contrast with the previously used noncooled cameras. The camera response (output signal versus incident EPID radiation exposure) can be accurately described with a quadratic function. This response reproduced well, with short- and long-term variation < 0.2% and < 0.4% (1 SD), respectively, rendering the cooled camera well suited for EPID dosimetry applications. PMID:15487737
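
    A quadratic response of the kind described can be fitted and inverted as sketched here; the coefficients are illustrative, not the published calibration:

```python
import numpy as np

# Illustrative quadratic camera response: signal = a*dose**2 + b*dose
a, b = 0.002, 1.0
dose = np.linspace(0.0, 100.0, 21)          # calibration exposures
signal = a * dose ** 2 + b * dose           # measured camera output

coeffs = np.polyfit(dose, signal, 2)        # recover the quadratic from data

def signal_to_dose(s, a=a, b=b):
    """Invert the quadratic response via its positive root."""
    return (-b + np.sqrt(b * b + 4.0 * a * s)) / (2.0 * a)

recovered_dose = float(signal_to_dose(signal[10]))   # signal at dose = 50
```

    For portal dosimetry, the measured pixel signal is pushed through this inverse to obtain the incident exposure.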

  1. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
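
    The end-to-end calibration amounts to recording the system response curve from a source of known brightness and then inverting it; a minimal sketch with an assumed saturating response follows:

```python
import numpy as np

# Assumed nonlinear, saturating end-to-end response (illustrative stand-in
# for optics + sensor + recording chain, not a measured camera curve)
def system_response(brightness, full_scale=100.0):
    return full_scale * (1.0 - np.exp(-brightness / 40.0))

# Calibration sweep: the 'artificial variable star' at known brightnesses
cal_brightness = np.linspace(1.0, 300.0, 200)
cal_signal = system_response(cal_brightness)

def signal_to_brightness(signal):
    """Photometry: invert the measured response curve by interpolation."""
    return np.interp(signal, cal_signal, cal_brightness)

recovered_brightness = float(signal_to_brightness(system_response(80.0)))
```

    Because the calibration frames pass through the same chain as the science frames, any nonlinearity is folded into the lookup automatically, which is the point of the end-to-end approach.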

  2. High-speed video recording system using multiple CCD imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Clements, Reginald M.

    1995-05-01

    This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording: exposure timing, image digitization and storage, and sequential playback onto a standard video monitor. The system is capable of recording full screen black and white images with spatial resolution similar to that of standard television, at rates of about 10,000 images per second in pulsed illumination mode. We have designed and built two optical configurations for the imager multiplexing system. The first one involves permanently splitting the subject light into multiple channels and placing a liquid crystal shutter in front of each imager. A prototype with three CCD imagers and shutters based on this configuration has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. The second configuration is more light-efficient in that it routes the entire subject light to each individual imager in sequence by using the liquid crystal cells as selectable binary switches. Despite some operational limitations, this method offers a solution when the available light, if subdivided among all the imagers, would not allow a sufficiently short exposure time.

  3. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T. [LLNL; Palmer, N. E. [LLNL; Schneider, M. B. [LLNL; Bell, P. M. [LLNL; Sims, G. [Spectral Instruments; Toerne, K. [Spectral Instruments; Rodenburg, K. [Spectral Instruments; Croft, M. [Spectral Instruments; Haugh, M. J. [NSTec; Charest, M. R. [NSTec; Romano, E. D. [NSTec; Jacoby, K. D. [NSTec

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  4. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible-wavelength laser is not acceptable even to attempt for a Class IV laser, and is not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera includes power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, an additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
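
    The "calculated amount" of ND filtering is simple optical-density arithmetic; the power figures below are illustrative assumptions only and not laser-safety guidance:

```python
import math

# Illustrative numbers only -- not actual laser-safety limits
beam_power_mW = 1.0        # nominal Class II visible laser, ~1 mW
safe_level_mW = 0.001      # assumed acceptable power at the eye

# Optical density needed: OD = log10(P_beam / P_safe)
required_od = math.log10(beam_power_mW / safe_level_mW)
transmission = 10.0 ** (-required_od)    # resulting filter transmission
```

    Stacked ND filters add their OD values, so the required attenuation can be assembled from standard filters once the target transmission is known.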

  5. Wilbur: A low-cost CCD camera system for MDM Observatory

    NASA Technical Reports Server (NTRS)

    Metzger, M. R.; Luppino, G. A.; Tonry, J. L.

    1992-01-01

    The recent availability of several 'off-the-shelf' components, particularly CCD control electronics from SDSU, has made it possible to put together a flexible CCD camera system at relatively low cost and effort. The authors describe Wilbur, a complete CCD camera system constructed for the Michigan-Dartmouth-MIT Observatory. The hardware consists of a Loral 2048(exp 2) CCD controlled by the SDSU electronics, an existing dewar design modified for use at MDM, a Sun Sparcstation 2 with a commercial high-speed parallel controller, and a simple custom interface between the controller and the SDSU electronics. The camera is controlled from the Sparcstation by software that provides low-level I/O in real time, collection of additional information from the telescope, and a simple command interface for use by an observer. Readout of the 2048(exp 2) array is complete in under two minutes at 5 e(sup -) read noise, and readout time can be decreased at the cost of increased noise. The system can be easily expanded to handle multiple CCD's/multiple readouts, and can control other dewars/CCD's using the same host software.
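The quoted readout performance can be sanity-checked from the array size: total readout time is just pixel count divided by pixel rate. A minimal sketch (the ~35 kHz per-pixel rate is an assumed figure chosen to illustrate the "under two minutes" single-port readout of a 2048^2 array; the abstract does not state the pixel rate):

```python
def readout_time_s(n_pixels, pixel_rate_hz):
    """Time to read a full CCD frame through one output at a given pixel rate."""
    return n_pixels / pixel_rate_hz

n = 2048 * 2048  # full-frame pixel count of the Loral 2048^2 CCD
# Assumed pixel rate of ~35 kHz gives roughly a two-minute readout
t = readout_time_s(n, 35_000)
print(round(t))  # 120
```

Raising the pixel rate shortens this time but, as the abstract notes, at the cost of increased read noise; adding readout ports divides the time by the port count.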

  6. Charge-coupled device (CCD) Camera/Memory Optimization For Expendable Autonomous Vehicles

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Mathews, B.

    1980-04-01

An expendable, autonomous vehicle by definition and implication will require small, low-cost sensors for observation of the outside world and an interface to smart, decision-making avionics. This paper describes results of several interrelated CCD camera projects directed toward achieving such an integrated sensor package. A shuttered high-resolution CCD detector combined with a CCD analog frame-store memory is described. This system produces a full-resolution, frame-rate-reduced, deinterlaced image. The image data are suitable for transform or differential pulse-code data compression, as well as various other 3 X 3 element operators directed at extracting image intelligence for on-board decision-making.

  7. Preliminary results from a single-photon imaging X-ray charge coupled device /CCD/ camera

    NASA Technical Reports Server (NTRS)

    Griffiths, R. E.; Polucci, G.; Mak, A.; Murray, S. S.; Schwartz, D. A.; Zombeck, M. V.

    1981-01-01

    A CCD camera is described which has been designed for single-photon X-ray imaging in the 1-10 keV energy range. Preliminary results are presented from the front-side illuminated Fairchild CCD 211, which has been shown to image well at 3 keV. The problem of charge-spreading above 4 keV is discussed by analogy with a similar problem at infrared wavelengths. The total system noise is discussed and compared with values obtained by other CCD users.

  8. Fluorescent magnetic inspection system used by special CCD cameras to identify axles of railway vehicles

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Liu, Xiulan; Xing, Juheng; Gao, Jianbin; Yin, Yuhua; Pan, Yueshan; Bian, Fusheng; Zhang, Yijie; Xu, Yongzhong; Chang, Tai'an

    1994-08-01

    This paper has summarized the achievements in the research on the digital image sampling and processing system for the automation of the fluorescent magnetic inspection for the axles of railway vehicles. The hardware of the system consists of 3 line array CCD-cameras, a multiplex A/D converter and an advanced microcomputer and its software has the functions of waveform display, real-time sampling and processing as well as automatic decision. A new method of intermittent driving with a long integration time and a high driving frequency is employed in the CCD-camera.

  9. EEV CCD39 wavefront sensor cameras for AO and interferometry

    NASA Astrophysics Data System (ADS)

    DuVarney, Raymond C.; Bleau, Charles A.; Motter, Garry T.; Shaklan, Stuart B.; Kuhnert, Andreas C.; Brack, Gary; Palmer, Dean; Troy, Mitchell; Kieu, Thangh; Dekany, Richard G.

    2000-07-01

SciMeasure, in collaboration with Emory University and the Jet Propulsion Laboratory (JPL), has developed an extremely versatile CCD controller for use in adaptive optics, optical interferometry, and other applications requiring high-speed readout rates and/or low read noise. The overall architecture of this controller system will be discussed and its performance using both EEV CCD39 and MIT/LL CCID-19 detectors will be presented. Initially developed for adaptive optics applications, this controller is used in the Palomar Adaptive Optics program (PALAO), the AO system developed by JPL for the 200-inch Hale telescope at Palomar Mountain. An overview of the PALAO system is discussed and diffraction-limited science results will be shown. Recently modified under NASA SBIR Phase II funding for use in the Space Interferometry Mission testbeds, this controller is currently in use on the Micro-Arcsecond Metrology testbed at JPL. Details of a new vacuum-compatible remote CCD enclosure and specialized readout sequence programming will also be presented.

  10. Mobile phone camera-based video scanning of paper documents

    E-print Network

    Paris-Sud XI, Université de

Mobile phone camera-based video scanning of paper documents. Muhammad Muzzamil Luqman, Petra Gomez ... Camera-based document video scanning is an interesting research problem which has entered into a new era ... research on a mobile phone camera-based document image mosaic reconstruction method for video scanning

  11. Development of x-ray CCD camera system with high readout rate using ASIC

    NASA Astrophysics Data System (ADS)

    Nakajima, H.; Matsuura, D.; Anabuki, N.; Miyata, E.; Tsunemi, H.; Doty, J. P.; Ikeda, H.; Katayama, H.

    2008-07-01

We report on the development of a high-speed, low-noise readout system for an X-ray CCD camera using an ASIC and the Camera Link standard. The ASIC is characterized by AD-conversion capability and processes CCD output signals at a high pixel rate of 600 kHz, ten times faster than conventional frame-transfer-type X-ray CCD cameras in orbit. There are four identical circuits inside the chip, all of which process CCD signals simultaneously. A delta-sigma modulator is adopted to achieve effective noise shaping and obtain high-resolution digital values with relatively simple circuits. The results of the unit test show that it works properly with moderately low input noise of ~70 microvolts at a pixel rate of 625 kHz and ~40 microvolts at 40 kHz. Power consumption is sufficiently low, <120 mW at 1.25 MHz. We have also developed the rest of the readout and driving circuits. As a data acquisition scheme we adopt the Camera Link standard in order to support the high readout rate of the ASIC. In the initial test of the CCD camera system, we used the P-channel CCD developed for the Soft X-ray Imager onboard the next Japanese X-ray astronomy satellite. The thickness of its depletion layer reaches up to 220 micrometers, and therefore we can detect X-rays from 109Cd with higher sensitivity than N-channel CCDs. The energy resolution of our system is 379 (+/-7) eV (FWHM) at 22.1 keV, that is, delta-E/E = 1.8%, achieved at a readout rate of 44 kHz.
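The delta-sigma modulation used in the ASIC trades sample rate for resolution by noise-shaping a coarse quantizer inside a feedback loop: the mean of the 1-bit output stream tracks the input. A minimal first-order, 1-bit model for illustration only (the real ASIC's topology and modulator order are not specified in this abstract):

```python
def delta_sigma_1bit(samples):
    """First-order 1-bit delta-sigma modulator: the integrator accumulates the
    error between the input and the fed-back quantized output each step."""
    integrator = 0.0
    out = []
    for x in samples:
        # Quantize the integrator state to +/-1 and feed it back
        y = 1.0 if integrator >= 0 else -1.0
        out.append(y)
        integrator += x - y
    return out

# A constant input of 0.5 yields a bitstream whose mean equals the input
bits = delta_sigma_1bit([0.5] * 1000)
print(sum(bits) / len(bits))  # 0.5
```

Decimating (low-pass filtering) such a bitstream recovers a high-resolution value from simple hardware, which is the property the abstract attributes to the on-chip modulator.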

  12. 1 Astrometric calibration for INT Wide Field Camera images The Wide Field Camera instrument on the Isaac Newton Telescope contains four CCD chips of

    E-print Network

    Taylor, Mark

1 Astrometric calibration for INT Wide Field Camera images The Wide Field Camera instrument on the Isaac Newton Telescope contains four CCD chips of 2048 × 4096 pixels positioned roughly, it is necessary to correct for the exact orientation and position of each CCD in relation to the others, as well

  13. Raster linearity of video cameras calibrated with precision tester

    NASA Technical Reports Server (NTRS)

    1964-01-01

    The time between transitions in the video output of a camera is measured when registered at reticle marks on the vidicon faceplate. This device permits precision calibration of raster linearity of television camera tubes.

  14. A range-resolved bistatic lidar using a high-sensitive CCD-camera

    Microsoft Academic Search

    K. Yamaguchi; A. Nomura; Y. Saito; T. Kano

    1992-01-01

    Until now monostatic type lidar systems have been mainly utilized in the field of lidar measurements of the atmosphere. We propose here a range-resolved bistatic lidar system using a high-sensitive cooled charge coupled device (CCD) camera. This system has the ability to measure the three dimensional distributions of aerosol, atmospheric density, and cloud by processing the image data of the

  15. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    Microsoft Academic Search

    Pradyumna Swain; David Mark

    2004-01-01

The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope

  16. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in removing the effect of lens distortion before image matching. Once the cameras have been calibrated, the author uses them to take video in an indoor environment. The videos are further converted into multiple frame images based on the frame rates. In order to overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure-from-motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
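Step (2), video conversion and alignment, amounts to picking frame indices so that cameras with a known time shift sample the scene at the same instants. A hypothetical helper sketching that bookkeeping (the function name and parameters are illustrative, not from the paper):

```python
def aligned_frame_indices(duration_s, fps, sample_interval_s, time_shift_s=0.0):
    """Frame indices to extract from a video so that a camera starting
    time_shift_s later than the reference samples the same instants."""
    indices = []
    t = max(time_shift_s, 0.0)
    while t < duration_s:
        indices.append(round(t * fps))
        t += sample_interval_s
    return indices

# Camera B starts 0.5 s after camera A; sample every 1 s from a 5 s, 30 fps clip
print(aligned_frame_indices(5.0, 30, 1.0, 0.5))  # [15, 45, 75, 105, 135]
```

The extracted frames from each camera can then be fed to an SfM pipeline as if they were synchronized still images.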

  17. CCD camera calibration for underwater laser scanning system

    Microsoft Academic Search

    C. C. Wang; S. W. Shyue; H. C. Hsu; J. S. Sue; T. C. Huang

    2001-01-01

    The quality of underwater video photography is limited by the visibility of the water column. Generally speaking, it is difficult to tell the dimension of the target directly from the underwater images. To overcome this problem, one may project a laser stripe onto the target and measure the displacement of the laser scan lines relative to a straight baseline. The

  18. A Distributed Camera Network Architecture Supporting Video Adaptation

    E-print Network

    Ottawa, University of

    surveillance systems. In addition, the proposed system has the ability to collect/capture video streams from ... Keywords: Surveillance; Distributed Lookup; Video Adaptation; Smart Camera. Modern video surveillance systems provide detection, facial recognition, object tracking, and event notification

  19. Evaluating stereoscopic CCD still video imagery for determining object height in forestry applications 

    E-print Network

    Jacobs, Dennis Murray

    1990-01-01

    EVALUATING STEREOSCOPIC CCD STILL VIDEO IMAGERY FOR DETERMINING OBJECT HEIGHT IN FORESTRY APPLICATIONS A Thesis by DENNIS MURRAY JACOBS Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment... of the requirements for the degree of MASTER OF SCIENCE August 1990 Major Subject: Forestry EVALUATING STEREOSCOPIC CCD STILL VIDEO IMAGERY FOR DETERMINING OBJECT HEIGHT IN FORESTRY APPLICATIONS A Thesis by DENNIS MURRAY JACOBS Approved as to style...

  1. Design principles and applications of a cooled CCD camera for electron microscopy.

    PubMed

    Faruqi, A R

    1998-01-01

Cooled CCD cameras offer a number of advantages over film for recording electron microscope images, including: immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity, and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to film, and a great deal of our effort has been spent on designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre-optics system to this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator) and optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre-optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two-dimensional crystals of bacteriorhodopsin, from wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two-dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on an on-line basis. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high-resolution images. PMID:9889815

  2. BIMA Optical Pointing Project. I. The STV Video Camera

    E-print Network

    BIMA Optical Pointing Project. I. The STV Video Camera Jonathan Swift UC Berkeley Radio Astronomy system at BIMA. The specifications of the STV video camera mounted on the optical pointing telescopes of the Hat Creek interferometer are shown. The sensitivity of the system has been empirically determined (m

  3. CCD camera and data acquisition system of the scientific instrument ELMER for the GTC 10-m telescope

    NASA Astrophysics Data System (ADS)

    Kohley, Ralf; Martin-Fleitas, Juan Manuel; Cavaller-Marques, Lluis; Hammersley, Peter L.; Suarez-Valles, Marcos; Vilela, Rafael; Beigbeder, Francis

    2004-09-01

ELMER is a multi-purpose instrument for the GTC designed for both imaging and spectroscopy in the visible range. The CCD camera employs an E2V Technologies CCD44-82 detector mounted in a high-performance LN2 bath cryostat based on an ESO design, and an SDSU-II CCD controller with a parallel interface. The design, including the low-noise fan-out electronics, has been kept flexible to allow, alternatively, the use of MIT/LL CCID-20 detectors. We present the design of the CCD camera and data acquisition system and first performance test results.

  4. High-speed video recording system using multiple CCD imagers and digital storage

    Microsoft Academic Search

    Roberto G. Racca; Reginald M. Clements

    1995-01-01

    This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video

  5. Camera Calibration from Video of a Walking Human

    E-print Network

    Southern California, University of

    Camera Calibration from Video of a Walking Human Fengjun Lv, Member, IEEE, Tao Zhao, Member, IEEE, and Ramakant Nevatia, Fellow, IEEE Abstract--A self-calibration method to estimate a camera's intrinsic to various viewing angles and subjects. Index Terms--Camera calibration, self-calibration, vanishing point

  6. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera

  7. Scintillator-CCD camera system light output response to dosimetry parameters for proton beam range measurement

    NASA Astrophysics Data System (ADS)

    Daftari, Inder K.; Castaneda, Carlos M.; Essert, Timothy; Phillips, Theodore L.; Mishra, Kavita K.

    2012-09-01

    The purpose of this study is to investigate the luminescence light output response in a plastic scintillator irradiated by a 67.5 MeV proton beam using various dosimetry parameters. The relationship of the visible scintillator light with the beam current or dose rate, aperture size and the thickness of water in the water-column was studied. The images captured on a CCD camera system were used to determine optimal dosimetry parameters for measuring the range of a clinical proton beam. The method was developed as a simple quality assurance tool to measure the range of the proton beam and compare it to (a) measurements using two segmented ionization chambers and water column between them, and (b) with an ionization chamber (IC-18) measurements in water. We used a block of plastic scintillator that measured 5×5×5 cm3 to record visible light generated by a 67.5 MeV proton beam. A high-definition digital video camera Moticam 2300 connected to a PC via USB 2.0 communication channel was used to record images of scintillation luminescence. The brightness of the visible light was measured while changing beam current and aperture size. The results were analyzed to obtain the range and were compared with the Bragg peak measurements with an ionization chamber. The luminescence light from the scintillator increased linearly with the increase of proton beam current. The light output also increased linearly with aperture size. The relationship between the proton range in the scintillator and the thickness of the water column showed good linearity with a precision of 0.33 mm (SD) in proton range measurement. For the 67.5 MeV proton beam utilized, the optimal parameters for scintillator light output response were found to be 15 nA (16 Gy/min) and an aperture size of 15 mm with image integration time of 100 ms. The Bragg peak depth brightness distribution was compared with the depth dose distribution from ionization chamber measurements and good agreement was observed. 
The peak/plateau ratio observed for the scintillator was found to be 2.21, as compared to 3.01 for the ionization chamber measurements. The response of a scintillator block-CCD camera system in a 67.5 MeV proton beam was investigated. A linear response was seen between light output and beam current as well as aperture size. The relation between the thickness of water in the water column and the measured range also showed linearity. The results from the scintillator response were used to develop a simple approach to measuring the range and the Bragg peak of a proton beam by recording the visible light from a scintillator block with an accuracy of better than 0.33 mm. Optimal dosimetry parameters for our proton beam were evaluated. This method can be used to confirm the range of a proton beam during daily treatment and will be useful as a daily QA measurement for proton beam therapy.
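The linear relationships reported above (light output vs. beam current, measured range vs. water-column thickness) can each be extracted with an ordinary least-squares line fit. A minimal sketch with made-up readings (the numbers below are illustrative, not the study's data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical brightness readings (arbitrary units) vs. beam current (nA)
current = [5, 10, 15, 20]
brightness = [100, 200, 300, 400]
a, b = linear_fit(current, brightness)
print(a, b)  # 20.0 0.0
```

The residual scatter about such a fit is what sets the quoted 0.33 mm (SD) precision on the range determination.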

  8. Deflection Measurements of a Thermally Simulated Nuclear Core using a High-Resolution CCD-Camera

    SciTech Connect

    Stanojev, B.J. [Marshall Space Flight Center, National Aeronautics and Space Administration, Huntsville, Al, 35812 (United States); Houts, M. [Los Alamos National Laboratory, Department of Energy, Los Alamos, NM, 87545 (United States)

    2004-07-01

Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary in predicting the system 'nuclear' equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in Real-Time (RT). Initial system checkout results are presented along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during test. (authors)

  9. Panoramic Video Capturing and Compressed Domain Virtual Camera Control

    E-print Network

    California at Santa Barbara, University of

    Panoramic Video Capturing and Compressed Domain Virtual Camera Control Xinding Sun*, Jonathan Foote Avenue Palo Alto, CA 94304 {foote, kimber}@pal.xerox.com ABSTRACT A system for capturing panoramic video applications such as classroom lectures and video conferencing. The proposed method is based on the Fly

  10. Multi-tasking Smart Cameras for Intelligent Video Surveillance Systems

    E-print Network

    Qureshi, Faisal Z.

    Multi-tasking Smart Cameras for Intelligent Video Surveillance Systems Wiktor Starzyk Faculty.starzyk@mycampus.uoit.ca Faisal Z. Qureshi http://faculty.uoit.ca/qureshi Abstract We demonstrate a video surveillance system observation tasks simultaneously. The research presented herein is a step towards video surveillance systems

  11. Automated Technology for Video Surveillance Vast numbers of surveillance cameras

    E-print Network

    Hill, Wendell T.

    Automated Technology for Video Surveillance Vast numbers of surveillance cameras monitor public spaces. Far more video is recorded than people have time to watch, and the quality of the images is often on video in real time. Taking advantage of the university's high-performance wireless communication

  12. OCam with CCD220, the Fastest and Most Sensitive Camera to Date for AO Wavefront Sensing

    Microsoft Academic Search

    Philippe Feautrier; Jean-Luc Gach; Philippe Balard; Christian Guillaume; Mark Downing; Norbert Hubin; Eric Stadler; Yves Magnard; Michael Skegg; Mark Robbins; Sandy Denney; Wolfgang Suske; Paul Jorden; Patrick Wheeler; Peter Pool; Ray Bell; David Burt; Ian Davies; Javier Reyes; Manfred Meyer; Dietrich Baade; Markus Kasper; Robin Arsenault; Thierry Fusco; José Javier Diaz Garcia

    2011-01-01

    For the first time, subelectron readout noise has been achieved with a camera dedicated to astronomical wavefront-sensing applications. The OCam system demonstrated this performance at a 1300 Hz frame rate and with 240 × 240 pixel frame size. ESO and JRA2 OPTICON jointly funded e2v Technologies to develop a custom CCD for adaptive optics (AO) wavefront-sensing applications. The device, called

13. Outer planet investigations using a CCD camera system. [Saturn disk photometry

    NASA Technical Reports Server (NTRS)

    Price, M. J.

    1980-01-01

    Problems related to analog noise, data transfer from the camera buffer to the storage computer, and loss of sensitivity of a two dimensional charge coupled device imaging system are reported. To calibrate the CCD system, calibrated UBV pinhole scans of the Saturn disk were obtained with a photoelectric area scanning photometer. Atmospheric point spread functions were also obtained. The UBV observations and models of the Saturn atmosphere are analyzed.

  14. Star-field identification algorithm. [for implementation on CCD-based imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A description of a new star-field identification algorithm that is suitable for implementation on CCD-based imaging cameras is presented. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. The algorithm incorporates tolerance to faulty input data, errors in the reference catalog, and instrument-induced systematic errors.
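The triplet idea can be illustrated with a toy matcher: a triangle's sorted side lengths are invariant to translation and rotation, so an observed star triplet can be looked up in a catalog by that signature. A simplified sketch (the catalog, names, and tolerance are hypothetical; the actual algorithm also uses orientation, celestial coordinates, and visual magnitudes, and tolerates faulty inputs):

```python
import math

def triangle_signature(p1, p2, p3):
    """Sorted side lengths of a star triplet; invariant to translation and rotation."""
    d = math.dist
    return tuple(sorted((d(p1, p2), d(p2, p3), d(p1, p3))))

def match_triplet(observed, catalog, tol=1e-3):
    """Return the name of the catalog triplet whose signature matches the observation."""
    sig = triangle_signature(*observed)
    for name, triplet in catalog.items():
        ref = triangle_signature(*triplet)
        if all(abs(s - r) <= tol for s, r in zip(sig, ref)):
            return name
    return None

# Hypothetical catalog in arbitrary focal-plane units
catalog = {
    "tripletA": [(0, 0), (3, 0), (0, 4)],
    "tripletB": [(0, 0), (1, 0), (0, 1)],
}
# Observed triplet: tripletA translated by (10, 10)
print(match_triplet([(10, 10), (13, 10), (10, 14)], catalog))  # tripletA
```

A real implementation would index signatures for fast lookup rather than scanning the catalog linearly.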

  15. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  16. Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera.

    PubMed

    Korevaar, Marc A N; Goorden, Marlies C; Beekman, Freek J

    2013-04-21

    Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. 
A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the energy and spatial resolution by a factor of 1.5. PMID:23552717
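The reported gain from detecting more photons follows the usual estimation-theoretic rule of thumb: for an idealized, noise-free detector sampling a Gaussian light distribution, the CRLB on the position estimate scales as sigma/sqrt(N). A minimal illustration of that scaling only (the 1 mm spot width and photon count are made-up values, not parameters from the paper, which uses full depth-dependent light and EM-CCD response models):

```python
import math

def crlb_position_std(spot_sigma_mm, n_photons):
    """Cramer-Rao lower bound on the std of a position estimate from n_photons
    independent samples of a Gaussian light spot (idealized, noise-free case)."""
    return spot_sigma_mm / math.sqrt(n_photons)

# Quadrupling the detected photons halves the bound
print(round(crlb_position_std(1.0, 100), 3))  # 0.1
```

Detector noise sources add terms that keep real cameras above this bound, consistent with the factor-of-1.5 gap between experiment and calculation reported above.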

  17. Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera

    NASA Astrophysics Data System (ADS)

    Korevaar, Marc A. N.; Goorden, Marlies C.; Beekman, Freek J.

    2013-04-01

    Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. 
A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the energy and spatial resolution by a factor of 1.5.

  18. A range-resolved bistatic lidar using a high-sensitive CCD-camera

    NASA Technical Reports Server (NTRS)

    Yamaguchi, K.; Nomura, A.; Saito, Y.; Kano, T.

    1992-01-01

    Until now monostatic type lidar systems have been mainly utilized in the field of lidar measurements of the atmosphere. We propose here a range-resolved bistatic lidar system using a high-sensitive cooled charge coupled device (CCD) camera. This system has the ability to measure the three dimensional distributions of aerosol, atmospheric density, and cloud by processing the image data of the laser beam trajectory obtained by a CCD camera. Also, this lidar system has a feature that allows dual utilization of continuous wave (CW) lasers and pulse lasers. The scheme of measurement with this bistatic lidar is shown. A laser beam is emitted vertically and the image of its trajectory is taken with a remote high-sensitive CCD detector using an interference filter and a camera lens. The specifications of the bistatic lidar system used in the experiments are shown. The preliminary experimental results of our range-resolved bistatic lidar system suggest potential applications in the field of lidar measurements of the atmosphere.
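The range-resolving principle of such a bistatic lidar is plain triangulation: each pixel of the CCD views a different altitude along the vertical beam. A minimal sketch of that geometry, assuming a flat-earth layout and a known horizontal baseline (all names and numbers are illustrative, not the authors' system parameters):

```python
import math

def beam_altitude(baseline_m, elevation_rad):
    """Altitude on a vertical laser beam seen at elevation angle
    `elevation_rad` from a camera a horizontal `baseline_m` away."""
    return baseline_m * math.tan(elevation_rad)

def altitude_resolution(baseline_m, elevation_rad, pixel_ifov_rad):
    """Altitude increment covered by one pixel of angular size
    `pixel_ifov_rad`: dz = D * d(theta) / cos^2(theta)."""
    return baseline_m * pixel_ifov_rad / math.cos(elevation_rad) ** 2

z = beam_altitude(1000.0, math.radians(45.0))               # ~1000 m altitude
dz = altitude_resolution(1000.0, math.radians(45.0), 1e-4)  # ~0.2 m per pixel
```

The quadratic growth of `dz` with elevation angle shows why the range resolution of a bistatic system degrades toward the zenith of the beam.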

  19. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
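The abstract does not detail the hole-location algorithm, but a common approach for this kind of measurement is a sub-pixel weighted centroid of the thresholded blob. A minimal sketch under that assumption, on synthetic data (this is not the NASA system's documented procedure):

```python
import numpy as np

def hole_centroid(image, threshold):
    """Sub-pixel (x, y) centroid of the dark blob a bullet hole leaves in
    a bright target image, weighting pixels by how far below threshold
    they fall."""
    mask = image < threshold
    ys, xs = np.nonzero(mask)
    w = (threshold - image[mask]).astype(float)
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())

# Synthetic bright target (value 200) with a dark 3x3 hole centered at (12, 7)
img = np.full((20, 25), 200.0)
img[6:9, 11:14] = 10.0
cx, cy = hole_centroid(img, threshold=100.0)
```

Weighting by darkness rather than a binary mask is what pushes the estimate below one pixel of uncertainty, consistent with the micron-level repeatability reported above once pixel scale is calibrated.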

  20. Design of an Event-Driven Random-Access-Windowing CCD-Based Camera

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P.; Lam, Raymond K.; Portillo, Angel A.; Ortiz, Gerardo G.

    2003-01-01

    Commercially available cameras are not designed for the combination of single-frame and high-speed streaming digital video with real-time control of the size and location of multiple regions of interest (ROIs). A new control paradigm is defined to eliminate the tight coupling between the camera logic and the host controller. This functionality is achieved by defining the indivisible pixel readout operation on a per-ROI basis with in-camera timekeeping capability. This methodology provides a Random Access, Real-Time, Event-driven (RARE) camera for adaptive camera control and is well suited for target tracking applications requiring autonomous control of multiple ROIs. It additionally provides reduced ROI readout time and higher frame rates compared to the original architecture by avoiding external control intervention during the ROI readout process.

  1. Autoguiding on the 20-inch Telescope The direct imaging camera on the telescope has a second, smaller, CCD that can be used to

    E-print Network

    Glashausser, Charles

    Autoguiding on the 20-inch Telescope: the direct imaging camera on the telescope has a second, smaller CCD that can be used to autoguide the telescope while exposing an image on the main CCD. Use the imager CCD and the guider CCD, so you can move the telescope to bring a good (i.e. as bright as possible) guide star onto the guider CCD.

  2. Charge-Coupled Device (CCD) Camera/Memory Optimization For Expendable Autonomous Vehicles

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Mathews, B.

    1982-04-01

    An expendable, autonomous vehicle by definition and implication will require small, low-cost sensors for observation of the outside world, and interface to smart, decision-making avionics. This paper describes results of several interrelated charge-coupled device (CCD) camera projects directed toward achieving such an integrated sensor package. A shuttered high-resolution CCD detector combined with a CCD analog frame store memory is described. This system results in a full television (TV) resolution, frame-rate-reduced, deinterlaced image. These image data are suitable for transform or differential pulse-code data compression as well as various other 3 x 3 element operators directed at extracting image intelligence for on-board decision making.
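A 3 x 3 element operator of the kind mentioned above can be sketched as a small "valid-region" correlation over the frame; here a Sobel horizontal-gradient kernel serves as the example (illustrative only, not the paper's specific operators):

```python
import numpy as np

def apply_3x3(frame, kernel):
    """Apply a 3x3 operator to a frame (valid region only, no padding)."""
    h, w = frame.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * frame[dy:dy + h - 2, dx:dx + w - 2]
    return out

# Sobel horizontal-gradient kernel applied to a horizontal intensity ramp
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
frame = np.tile(np.arange(8, dtype=float), (8, 1))  # each row: 0, 1, ..., 7
gx = apply_3x3(frame, sobel_x)                      # uniform gradient response
```

Edge and gradient operators like this one are the classic first step in the kind of on-board "image intelligence" extraction the abstract describes.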

  3. An intensified/shuttered cooled CCD camera for dynamic proton radiography

    SciTech Connect

    Yates, G.J.; Albright, K.L.; Alrick, K.R. [and others]

    1998-12-31

    An intensified/shuttered cooled PC-based CCD camera system was designed and successfully fielded on proton radiography experiments at the Los Alamos National Laboratory LANSCE facility using 800-MeV protons. The four camera detector system used front-illuminated full-frame CCD arrays (two 1,024 x 1,024 pixels and two 512 x 512 pixels) fiber optically coupled to either 25-mm diameter planar diode or microchannel plate image intensifiers which provided optical shuttering for time resolved imaging of shock propagation in high explosives. The intensifiers also provided wavelength shifting and optical gain. Typical sequences consisting of four images corresponding to consecutive exposures of about 500 ns duration for 40-ns proton burst images (from a fast scintillating fiber array) separated by approximately 1 microsecond were taken during the radiography experiments. Camera design goals and measured performance characteristics including resolution, dynamic range, responsivity, system detection quantum efficiency (DQE), and signal-to-noise will be discussed.

  4. The Laboratory Radiometric Calibration of the CCD Stereo Camera for the Optical Payload of the Lunar Explorer Project

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Li, Chun-Lai; Zhao, Bao-Chang

    2007-03-01

    The optical payload system of the Lunar Explorer includes a CCD stereo camera and an imaging interferometer. The former, together with a laser altimeter, is designed to obtain three-dimensional images of the lunar surface. The camera's working principle, the purpose and content of the calibration, bare-chip testing, and the process of relative and absolute calibration in the laboratory are introduced.

  5. Optimal video camera network deployment to support security monitoring

    Microsoft Academic Search

    Benoit Debaque; Rym Jedidi; Donald Prévost

    2009-01-01

    Designing an optimal video camera network for monitoring ground activities is a critical problem in ensuring sufficient performance of a video surveillance system. Several authors have addressed this issue in the past, and considered this problem to be essentially a coverage optimization problem. Because of its nature and complexity, this problem is considered to be a multiobjective global optimization problem, which

  6. Optical system design of multi-spectral and large format color CCD aerial photogrammetric camera

    NASA Astrophysics Data System (ADS)

    Qian, Yixian; Sun, Tianxiang; Gao, Xiaodong; Liang, Wei

    2007-12-01

    Achieving both multi-spectral coverage and high spatial resolution has long been a central problem in the optical design of aerial photogrammetric cameras. A wide spectral band makes it difficult to obtain an optical system with high modulation transfer function (MTF), and for high-quality imagery the chromatic distortion must be kept below 0.5 pixels, which is demanding over a wide field and multiple spectral bands. In this paper, the MTF and spectral band of the system are analyzed, a Russar-type photogrammetric objective is chosen as the basic optical structure, and a novel optical system is presented to solve the problem. The new photogrammetric system consists of a panchromatic optical system and a chromatic optical system. The panchromatic system, which produces the panchromatic image, combines a 9k x 9k large-format CCD with a high-accuracy photographic objective lens; its focal length is 69.83 mm, the field angle is 60° x 60°, the CCD pixel size is 8.75 µm x 8.75 µm, the spectral range is 0.43-0.74 µm, the MTF exceeds 0.4 over the whole field at a spatial frequency of 60 lp/mm, and distortion is less than 0.007%. In the chromatic system, three 2k x 2k CCD arrays are each paired with identical photographic objectives, and the high-resolution color image is obtained by synthesizing the red, green, and blue image data delivered by the three CCD sensors. The chromatic objectives have a focal length of 24.83 mm and share a spectral range of 0.39-0.74 µm, differing only in the filter coatings on their protective glass. The pixel count is 2048 x 2048, and the MTF exceeds 0.4 over the full field at 30 lp/mm. The advantages of the digital aerial photogrammetric camera over a traditional film camera are described.
    The two development trends in digital aerial photogrammetric cameras are considered to be higher spectral resolution and higher spatial resolution. The merits of this aerial photogrammetric camera are its multi-spectral capability, high resolution, low distortion, light weight, and wide field. It can be applied to aerial photography and remote sensing in place of a traditional film camera. Trials and analysis of the design results show that the system can meet large-scale aerial survey requirements.

  7. Review of intelligent video surveillance with single camera

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Fan, Jiu-lun; Wang, DianWei

    2012-01-01

    Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements in real-world scenarios, and proposes potential solutions to reduce this gap.

  9. Digital imaging microscopy: the marriage of spectroscopy and the solid state CCD camera

    NASA Astrophysics Data System (ADS)

    Jovin, Thomas M.; Arndt-Jovin, Donna J.

    1991-12-01

    Biological samples have been imaged using microscopes equipped with slow-scan CCD cameras. Examples are presented of studies based on the detection of light emission signals in the form of fluorescence and phosphorescence. They include applications in the field of cell biology: (a) replication and topology of mammalian cell nuclei; (b) cytogenetic analysis of human metaphase chromosomes; and (c) time-resolved measurements of DNA-binding dyes in cells and on isolated chromosomes, as well as of mammalian cell surface antigens, using the phosphorescence of acridine orange and fluorescence resonance energy transfer of labeled lectins, respectively.

  10. The measurement of astronomical parallaxes with CCD imaging cameras on small telescopes

    SciTech Connect

    Ratcliff, S.J. (Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States)); Balonek, T.J. (Department of Physics and Astronomy, Colgate University, 13 Oak Dr., Hamilton, New York 13346 (United States)); Marschall, L.A. (Department of Physics, Gettysburg College, Gettysburg, Pennsylvania 17325 (United States)); DuPuy, D.L. (Department of Physics and Astronomy, Virginia Military Institute, Lexington, Virginia 24450 (United States)); Pennypacker, C.R. (Space Sciences Laboratory, University of California, Berkeley, California 94720 (United States)); Verma, R. (Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States)); Alexov, A. (Department of Astronomy, Wesleyan University, Middletown, Connecticut 06457 (United States)); Bonney, V. (Space Sciences Laboratory, University of California, Berkeley, California 94720 (United States))

    1993-03-01

    Small telescopes equipped with charge-coupled device (CCD) imaging cameras are well suited to introductory laboratory exercises in positional astronomy (astrometry). An elegant example is the determination of the parallax of extraterrestrial objects, such as asteroids. For laboratory exercises suitable for introductory students, the astronomical hardware needs are relatively modest, and, under the best circumstances, the analysis requires little more than arithmetic and a microcomputer with image display capabilities. Results from the first such coordinated parallax observations of asteroids ever made are presented. In addition, procedures for several related experiments, involving single-site observations and/or parallaxes of earth-orbiting artificial satellites, are outlined.
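The arithmetic the exercise rests on is the small-angle parallax relation, distance = baseline / parallax angle. A minimal sketch (the numbers are illustrative, not from the coordinated asteroid observations):

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # one arcsecond in radians

def parallax_distance(baseline_km, shift_arcsec):
    """Small-angle parallax: distance = baseline / parallax angle, for an
    object whose apparent position shifts by `shift_arcsec` against the
    background stars between two sites `baseline_km` apart (baseline
    taken perpendicular to the line of sight)."""
    return baseline_km / (shift_arcsec * ARCSEC)

# A 20-arcsec shift over a 1000 km baseline: roughly 10 million km
dist_km = parallax_distance(1000.0, 20.0)
```

Shifts of tens of arcseconds are easily resolved on CCD frames, which is why near-Earth asteroids make such forgiving targets for an introductory astrometry lab.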

  11. Wide-Brightness-Range Video Camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D.

    1986-01-01

    Television camera selectively attenuates bright areas in scene without affecting dim areas. Camera views scenes containing extremes of light and dark without overexposing light areas and underexposing dark ones. Camera uses liquid-crystal light valve for selective attenuation. Feedback cathoderay tube locally alters reflection characteristics of liquid-crystal light valve. Results in point-to-point optoelectronic automatic gain control to enable viewing of both dark and very bright areas within scene.

  12. Source video camera identification for multiply compressed videos originating from YouTube

    Microsoft Academic Search

    Wiger van Houten; Zeno Geradts

    2009-01-01

    The Photo Response Non-Uniformity is a unique sensor noise pattern that is present in each image or video acquired with a digital camera. In this work a wavelet-based technique used to extract these patterns from digital images is applied to compressed low resolution videos originating mainly from webcams. After recording these videos with a variety of codec and resolution settings,
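The matching principle behind PRNU-based source identification can be sketched with a toy model: extract a noise residual from each frame and correlate the residuals. Here a 3 x 3 mean filter stands in for the paper's wavelet-based denoiser, and the PRNU is modeled as a simple additive pattern; both are loud simplifications of the real method.

```python
import numpy as np

def noise_residual(frame, k=3):
    """High-frequency residual: frame minus a k x k local mean (a crude
    stand-in for the wavelet denoising filter used in the paper)."""
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode='edge')
    smooth = np.zeros(frame.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return frame - smooth / (k * k)

def ncc(a, b):
    """Normalized cross-correlation between two noise fingerprints."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
prnu = rng.normal(0.0, 2.0, (64, 64))  # fixed sensor pattern (additive toy model)
f1 = noise_residual(rng.normal(100.0, 5.0, (64, 64)) + prnu)  # camera A, video 1
f2 = noise_residual(rng.normal(100.0, 5.0, (64, 64)) + prnu)  # camera A, video 2
f3 = noise_residual(rng.normal(100.0, 5.0, (64, 64)))         # different camera
same, other = ncc(f1, f2), ncc(f1, f3)  # same-camera correlation is larger
```

Even in this crude model the same-camera correlation stands well above the cross-camera one; heavy video compression, as studied in the paper, attenuates the residual and shrinks that margin.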

  13. Controlled Impact Demonstration (CID) tail camera video

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  14. HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.

    PubMed

    Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

    2010-07-01

    The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes and it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performances of the cameras for imaging the solar corona. PMID:20428852

  15. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebral vasculature and the third leading cause of death in the United States. More than 80 percent of stroke cases are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is more than necessary. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data, and we obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We will demonstrate volumetric tomographic reconstructions from tissue phantoms; the latter method has great potential for determining and monitoring the effect of anti-stroke therapies.

  16. Studying the characteristics of a VS-CTT-075-60 CCD camera during recording of focal spots

    Microsoft Academic Search

    D. S. Gavrilov; A. G. Kakshin; E. A. Loboda; I. A. Sorokin; A. A. Ugodenko

    2007-01-01

    A technique for monitoring the focusing quality of a diffraction-limited optical system with the use of a VS-CTT-075-60 CCD camera with an uncooled CCD-based sensor is presented. The advantages of this technique over the widespread testing method involving consecutive measurements of the radiation power transmitted through calibrated diaphragms of different diameters, which are installed in the focal plane, are demonstrated.

  17. Real-Time Foreground Segmentation for the Moving Camera Based on H.264 Video Coding Information

    E-print Network

    Chang, Pao-Chi

    Foreground segmentation has played an important role in many video applications, such as video surveillance, video indexing, video conferencing, object tracking, and object recognition. For example, in a multi-camera video surveillance system, the camera with the largest foreground area of video content

  18. Performance of the low light level CCD camera for speckle imaging

    E-print Network

    S. K. Saha; V. Chinnappan

    2002-09-20

    A new generation CCD detector called low light level CCD (L3CCD) that performs like an intensified CCD without incorporating a micro channel plate (MCP) for light amplification was procured and tested. A series of short exposure images with millisecond integration time has been obtained. The L3CCD is cooled to about -80 °C by Peltier cooling.

  19. MOA-cam3: a wide-field mosaic CCD camera for a gravitational microlensing survey in New Zealand

    Microsoft Academic Search

    T. Sako; T. Sekiguchi; M. Sasaki; K. Okajima; F. Abe; I. A. Bond; J. B. Hearnshaw; Y. Itow; K. Kamiya; P. M. Kilmartin; K. Masuda; Y. Matsubara; Y. Muraki; N. J. Rattenbury; D. J. Sullivan; T. Sumi; P. Tristram; T. Yanagisawa; P. C. M. Yock

    2008-01-01

    We have developed a wide-field mosaic CCD camera, MOA-cam3, mounted at the prime focus of the Microlensing Observations in Astrophysics (MOA) 1.8-m telescope. The camera consists of ten E2V CCD4482 chips, each having 2k×4k pixels, and covers a 2.2 deg² field of view with a single exposure. The optical system is well optimized to realize uniform image quality over this

  20. Charge-coupled device (CCD) television camera for NASA's Galileo mission to Jupiter

    NASA Technical Reports Server (NTRS)

    Klaasen, K. P.; Clary, M. C.; Janesick, J. R.

    1982-01-01

    The CCD detector under construction for use in the slow-scan television camera for the NASA Galileo Jupiter orbiter to be launched in 1985 is presented. The science objectives and the design constraints imposed by the earth telemetry link, platform residual motion, and the Jovian radiation environment are discussed. Camera optics are inherited from Voyager; filter wavelengths are chosen to enable discrimination of Galilean-satellite surface chemical composition. The CCD design, an 800 by 800-element 'virtual-phase' solid-state silicon image-sensor array with supporting electronics, is described with detailed discussion of the thermally generated dark current, quantum efficiency, signal-to-noise ratio, and resolution. Tests of the effect of ionizing radiation were performed and are analyzed statistically. An imaging mode using a 2-1/3-sec frame time and on-chip summation of the signal in 2 x 2 blocks of adjacent pixels is designed to limit the effects of the most extreme Jovian radiation. Smearing due to spacecraft/target relative velocity and platform instability will be corrected for via an algorithm maximizing spatial resolution at a given signal-to-noise level. The camera is expected to produce 40,000 images of Jupiter and its satellites during the 20-month mission.
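The on-chip summation of 2 x 2 blocks of adjacent pixels described above can be sketched off-chip as a reshape-and-sum; the flight hardware performs this in the charge domain before readout, so the snippet below is only a numerical analogue:

```python
import numpy as np

def bin2x2(frame):
    """Sum each 2x2 block of adjacent pixels into one output pixel,
    trading spatial resolution for signal (numerical analogue of the
    on-chip charge summation)."""
    h, w = frame.shape
    return (frame[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .sum(axis=(1, 3)))

frame = np.arange(16.0).reshape(4, 4)
binned = bin2x2(frame)   # 2x2 output; total signal is conserved
```

Summing charge before readout improves the signal relative to read noise and halves the data volume in each axis, which is why binning helps in the harsh Jovian radiation environment.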

  1. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
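With the two cameras aligned orthogonally, the core of the 3D combination step reduces to pairing two 2D views that share one axis. A minimal sketch of that idea under ideal pinhole geometry (the system's actual closed-form solution also handles camera calibration and the stereo-matching phases described above):

```python
def merge_orthogonal_views(xy_view, zy_view):
    """Combine particle positions from two orthogonal cameras into 3-D:
    camera A looks along the z axis and reports (x, y); camera B looks
    along the x axis and reports (z, y). The shared y coordinate is
    measured twice and averaged."""
    x, y_a = xy_view
    z, y_b = zy_view
    return (x, 0.5 * (y_a + y_b), z)

# The same tracer particle seen by both cameras, with slight y disagreement
p = merge_orthogonal_views((1.0, 2.1), (3.0, 1.9))
```

The redundant y measurement is also what makes stereo matching possible: candidate particle pairs whose y coordinates disagree too much cannot be the same particle.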

  2. Image\\/video deblurring using a hybrid camera

    Microsoft Academic Search

    Yu-wing Tai; Hao Du; Michael S. Brown; Stephen Lin

    2008-01-01

    We propose a novel approach to reduce spatially varying motion blur using a hybrid camera system that simultaneously captures high-resolution video at a low frame rate together with low-resolution video at a high frame rate. Our work is inspired by Ben-Ezra and Nayar (3), who introduced the hybrid camera idea for correcting global motion blur for a single still image. We broaden the scope of the problem to address

  3. High resolution three-dimensional photoacoustic tomography with CCD-camera based ultrasound detection

    PubMed Central

    Nuster, Robert; Slezak, Paul; Paltauf, Guenther

    2014-01-01

    A photoacoustic tomograph based on optical ultrasound detection is demonstrated, which is capable of high resolution real-time projection imaging and fast three-dimensional (3D) imaging. Snapshots of the pressure field outside the imaged object are taken at defined delay times after photoacoustic excitation by use of a charge coupled device (CCD) camera in combination with an optical phase contrast method. From the obtained wave patterns photoacoustic projection images are reconstructed using a back propagation Fourier domain reconstruction algorithm. Applying the inverse Radon transform to a set of projections recorded over a half rotation of the sample provides 3D photoacoustic tomography images in less than one minute with a resolution below 100 µm. The sensitivity of the device was experimentally determined to be 5.1 kPa over a projection length of 1 mm. In vivo images of the vasculature of a mouse demonstrate the potential of the developed method for biomedical applications. PMID:25136491

  4. ULTRACAM: an ultra-fast, triple-beam CCD camera for high-speed astrophysics

    E-print Network

    V. S. Dhillon; T. R. Marsh; M. J. Stevenson; D. C. Atkinson; P. Kerry; P. T. Peacocke; A. J. A. Vick; S. M. Beard; D. J. Ives; D. W. Lunney; S. A. McLay; C. J. Tierney; J. Kelly; S. P. Littlefair; R. Nicholson; R. Pashley; E. T. Harlaftis; K. O'Brien

    2007-04-19

    ULTRACAM is a portable, high-speed imaging photometer designed to study faint astronomical objects at high temporal resolutions. ULTRACAM employs two dichroic beamsplitters and three frame-transfer CCD cameras to provide three-colour optical imaging at frame rates of up to 500 Hz. The instrument has been mounted on both the 4.2-m William Herschel Telescope on La Palma and the 8.2-m Very Large Telescope in Chile, and has been used to study white dwarfs, brown dwarfs, pulsars, black-hole/neutron-star X-ray binaries, gamma-ray bursts, cataclysmic variables, eclipsing binary stars, extrasolar planets, flare stars, ultra-compact binaries, active galactic nuclei, asteroseismology and occultations by Solar System objects (Titan, Pluto and Kuiper Belt objects). In this paper we describe the scientific motivation behind ULTRACAM, present an outline of its design and report on its measured performance.

  5. Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

    2005-12-01

    "ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

  6. Retrieval of the optical depth using an all-sky CCD camera.

    PubMed

    Olmo, Francisco J; Cazorla, Alberto; Alados-Arboledas, Lucas; López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier

    2008-12-01

    A new method is presented for retrieval of the aerosol and cloud optical depth using a CCD camera equipped with a fish-eye lens (all-sky imager system). In a first step, the proposed method retrieves the spectral radiance from sky images acquired by the all-sky imager system using a linear pseudoinverse algorithm. Then, the aerosol or cloud optical depth at 500 nm is obtained as that which minimizes the residuals between the zenith spectral radiance retrieved from the sky images and that estimated by the radiative transfer code. The method is tested under extreme situations including the presence of nonspherical aerosol particles. The comparison of optical depths derived from the all-sky imager with those retrieved with a sunphotometer operated side by side shows differences similar to the nominal error claimed in the aerosol optical depth retrievals from sunphotometer networks. PMID:19037341
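The retrieval step, choosing the optical depth that minimizes the residual between the measured and modelled zenith radiance, can be sketched with a toy Beer-Lambert forward model standing in for the radiative transfer code (all names and values are illustrative):

```python
import math

def retrieve_tau(measured_radiance, forward_model, tau_grid):
    """Choose the optical depth whose modelled zenith radiance best
    matches the measurement (least-squares over a discrete grid)."""
    return min(tau_grid, key=lambda t: (forward_model(t) - measured_radiance) ** 2)

# Toy Beer-Lambert forward model standing in for the radiative transfer code
L0 = 100.0
model = lambda tau: L0 * math.exp(-tau)
tau_grid = [i / 100.0 for i in range(201)]       # candidate depths 0.00 ... 2.00
tau_hat = retrieve_tau(model(0.37), model, tau_grid)
```

The real method differs in the forward model (a full radiative transfer code, and spectral radiance recovered from the sky images by a linear pseudoinverse), but the minimization structure is the same.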

  8. 0.25mm-thick CCD packaging for the Dark Energy Survey Camera array

    SciTech Connect

    Derylo, Greg; Diehl, H.Thomas; Estrada, Juan; /Fermilab

    2006-06-01

    The Dark Energy Survey Camera focal plane array will consist of 62 2k x 4k CCDs with a pixel size of 15 microns and a silicon thickness of 250 microns for use at wavelengths between 400 and 1000 nm. Bare CCD die will be received from the Lawrence Berkeley National Laboratory (LBNL). At the Fermi National Accelerator Laboratory, the bare die will be packaged into a custom back-side-illuminated module design. Cold probe data from LBNL will be used to select the CCDs to be packaged. The module design utilizes an aluminum nitride readout board and spacer and an Invar foot. A module flatness of 3 microns over small (1 sqcm) areas and less than 10 microns over neighboring areas on a CCD are required for uniform images over the focal plane. A confocal chromatic inspection system is being developed to precisely measure flatness over a grid up to 300 x 300 mm. This system will be utilized to inspect not only room-temperature modules, but also cold individual modules and partial arrays through flat dewar windows.

  9. A new tubeless nanosecond streak camera based on optical deflection and direct CCD imaging

    SciTech Connect

    Lai, C.C.

    1992-12-01

    A new optically deflected streak camera with nanosecond-range resolution, superior imaging quality, high signal detectability, and large-format recording has been conceived and developed. It is composed of an optomechanical deflector that sweeps the line-shaped image of spatially distributed, time-varying signals across the sensing surface of a cooled scientific two-dimensional CCD array with slow-readout drive electronics, a lens assembly, and a desktop computer for prompt digital data acquisition and processing. Its development exploits the synergy of modern technologies in sensors, optical deflectors, optics and microcomputers. With laser light as the signal carrier, the deflecting optics produces near diffraction-limited streak images resolving to a single pixel size of 25 µm. A 1k x 1k-pixel array can thus provide a vast record of 1,000 digital data points along each spatial or temporal axis. Since only one photon-to-electron conversion exists in the entire signal recording path, the camera responds linearly to the incident light over a wide dynamic range in excess of 10^4:1. Various image deflection techniques are assessed for imaging fidelity, deflection speed, and capacity for external triggering. Innovative multiple-pass deflection methods for the optomechanical deflector have been conceived and developed to attain multi-fold amplification of the optical scanning speed across the CCD surface at a given angular deflector speed. Without significantly compromising imaging quality or flux throughput efficiency, these optical methods enable a sub-10 ns/pixel streak speed with the deflector moving benignly at 500 radians/second, or equivalently 80 revolutions/second. Test results of the prototype performance are summarized, including a spatial resolution of 10 lp/mm at 65% CTF and a temporal resolution of 11.4 ns at 3.8 ns/pixel.
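The quoted streak speed can be sanity-checked with back-of-envelope arithmetic. Only the 500 rad/s deflector speed and the 25 µm pixel pitch come from the abstract; the multi-pass amplification factor and optical lever arm below are illustrative assumptions:

```python
# Back-of-envelope check of the sub-10 ns/pixel streak speed.
omega_mech = 500.0        # rad/s, mechanical deflector speed (from abstract)
pixel_pitch = 25e-6       # m, CCD pixel size (from abstract)
amplification = 16        # assumed multi-pass angular amplification factor
lever_arm = 0.8           # m, assumed optical path from deflector to CCD

scan_speed = amplification * omega_mech * lever_arm   # linear speed at the CCD
ns_per_pixel = pixel_pitch / scan_speed * 1e9
print(round(ns_per_pixel, 2))   # ~3.9 ns/pixel, i.e. well under 10 ns/pixel
```

With these assumed values the dwell time per pixel comes out near the 3.8 ns/pixel figure reported, showing how modest angular amplification makes a slow mechanical deflector sufficient.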

  11. Real-Time Spherical Videos from a Fast Rotating Camera

    E-print Network

    Nielsen, Frank

    Real-Time Spherical Videos from a Fast Rotating Camera Frank Nielsen1, Alexis André1, Tokyo Japan frank.nielsen@acm.org 2 Tokyo Institute of Technology 2-12-1 Ookayama, 152-8552 Meguro century to find the first artistic full cylindrical panoramas, by the Irish painter Barker [1

  12. Compact 3D flash lidar video cameras and applications

    Microsoft Academic Search

    Roger Stettner

    2010-01-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, descent and landing, surveillance, and autonomous and manned ground vehicle navigation to 3D imaging through particle obscurants.

  13. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, descent and landing, surveillance, and autonomous and manned ground vehicle navigation to 3D imaging through particle obscurants.

  14. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  15. A Torque Distribution Method Using CCD Cameras That Is Suitable for Electric Vehicles Driven by Front and Rear Wheels Independently

    Microsoft Academic Search

    Nobuyoshi Mutoh; K. Yokota

    2007-01-01

    This paper describes a method to distribute driving and braking torques for electric vehicles (EVs) driven by front and rear wheels independently, while estimating the distance between an obstacle and the running vehicle and the road surface conditions using CCD cameras. The method comprises braking torque distribution procedures to control wheel lock generated due to load movement occurring

  16. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    Microsoft Academic Search

    Shunsuke Nara; Satoru Takahashi

    2007-01-01

    In this paper, we develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder and two AC servo motors. First, in order to measure the working radius, we consider an algorithm for crane hook recognition. Then, we attach

  17. Double Star Measurements at the Southern Sky with a 50 cm Reflector and a Fast CCD Camera in 2014

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2015-04-01

    A Ritchey-Chrétien reflector with 50 cm aperture was used in Namibia for recordings of double stars with a fast CCD camera and a notebook computer. From superposition of "lucky images", measurements of 91 pairings in 79 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Some images of noteworthy systems are also presented.

  18. Scientific CCD technology at JPL

    NASA Astrophysics Data System (ADS)

    Janesick, J.; Collins, S. A.; Fossum, E. R.

    1991-03-01

    Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.

  19. An Immersive Free-Viewpoint Video System Using Multiple Outer\\/Inner Cameras

    Microsoft Academic Search

    Hansung Kim; Itaru Kitahara; Ryuuki Sakamoto; Kiyoshi Kogure

    2006-01-01

    We propose a new free-viewpoint video system that generates immersive 3D video from an arbitrary point of view, using outer cameras and an inner omni-directional camera. The system reconstructs 3D models from the captured video streams and generates realistic free-viewpoint video of those objects from a virtual camera. In this paper, we propose a real-time omni-directional camera calibration method, and describe

  20. The high resolution gamma imager (HRGI): a CCD based camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Lees, John E.; Fraser, George W.; Keay, Adam; Bassford, David; Ott, Robert; Ryder, William

    2003-11-01

    We describe the High Resolution Gamma Imager (HRGI): a Charge Coupled Device (CCD) based camera for imaging small volumes of radionuclide uptake in tissues. The HRGI is a collimated, scintillator-coated, low cost, high performance imager using low noise CCDs that will complement whole-body imaging Gamma Cameras in nuclear medicine. Using 59.5 keV radiation from a 241Am source we have measured the spatial resolution and relative efficiency of test CCDs from E2V Technologies (formerly EEV Ltd.) coated with Gadox (Gd2O2S(Tb)) layers of varying thicknesses. The spatial resolution degrades from 0.44 to 0.6 mm and the detection efficiency increases (×3) as the scintillator thickness increases from 100 to 500 µm. We also describe our first image using the clinically important isotope 99mTc. The final HRGI will have intrinsic sub-mm spatial resolution (~0.7 mm) and good energy resolution over the energy range 30-160 keV.

  1. CameraCast: flexible access to remote video sensors

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  2. Pattern Recognition Letters 00 (2012) 125 Intelligent Multi-Camera Video Surveillance: A Review

    E-print Network

    Wang, Xiaogang

    2012-01-01

    Pattern Recognition Letters 00 (2012) 1-25. Intelligent Multi-Camera Video Surveillance, Hong Kong. Abstract: Intelligent multi-camera video surveillance is a multidisciplinary field related analysis and cooperative video surveillance both with active and static cameras. Detailed descriptions

  3. Robust Video Stabilization Based on Particle Filter Tracking of Projected Camera Motion

    Microsoft Academic Search

    Junlan Yang; Dan Schonfeld; Magdi A. Mohamed

    2009-01-01

    Video stabilization is an important technique in digital cameras. Its impact increases rapidly with the rising popularity of handheld cameras and cameras mounted on moving platforms (e.g., cars). Stabilization of two images can be viewed as an image registration problem. However, to ensure the visual quality of the whole video, video stabilization has a particular emphasis on the accuracy and

  4. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
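The software frame-subtraction step described above can be sketched as follows (synthetic 256 x 256 frames with a hypothetical bright target spot, not VGS flight data):

```python
import numpy as np

# Frame differencing: subtracting two consecutive frames cancels the static
# background and isolates the target that changed between frames.
rng = np.random.default_rng(0)
background = rng.integers(0, 50, size=(256, 256), dtype=np.int16)

frame_off = background.copy()                  # frame with target dark
frame_on = background.copy()                   # frame with target bright
frame_on[100:104, 120:124] += 200              # assumed retroreflector spot

diff = np.abs(frame_on.astype(np.int32) - frame_off.astype(np.int32))
ys, xs = np.nonzero(diff > 100)                # threshold the difference image
print(ys.min(), ys.max(), xs.min(), xs.max())  # bounding box of the target
```

Because the backgrounds cancel exactly, only the target pixels survive the threshold; doing this per-pixel loop in hardware is what frees up time for further processing between frames.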

  5. Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera

    SciTech Connect

    Hao, Jiangang; Estrada, Juan; Cease, Herman; Diehl, H.Thomas; Flaugher, Brenna L.; Kubik, Donna; Kuk, Keivin; Kuropatkine, Nickolai; Lin, Huan; Montes, Jorge; Scarpine, Vic; /Fermilab

    2010-06-08

    A large mosaic multi-CCD camera is the key instrument for a modern digital sky survey. DECam is an extremely red-sensitive 520-megapixel camera designed for the upcoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron-thick fully depleted CCDs, with a focal plane 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting in 2011. To achieve the science goal of constraining the Dark Energy evolution, stringent requirements were laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to meet the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high-vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted to a variation of the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the change in spacings, we can measure the flatness to high precision. In method 1, the regular dots are kept to high sub-micron precision and cover the whole focal plane. In method 2, no high precision for the grid is required. Instead, a precise XY stage moves the pattern across the whole focal plane, and we compare the variations of the spacing when it is imaged by different CCDs. Simulations and real measurements show that the two methods work very well for our purpose and are in good agreement with direct optical measurements.
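The grid-spacing idea behind such flatness measurements can be illustrated with a simple pinhole-magnification model; all distances below are made-up illustrative values, not DECam parameters:

```python
# A dot grid projected onto the CCD changes its imaged spacing when the CCD
# moves along the optical axis; inverting that relation recovers the offset.
focal_length = 0.1          # m, assumed projector focal length
nominal_dist = 0.5          # m, assumed distance from pinhole to focal plane
grid_pitch = 1e-3           # m, assumed physical pitch of the dot grid

def imaged_spacing(z):
    """Spacing of the projected dots on a CCD sitting at distance z."""
    return grid_pitch * z / focal_length

s0 = imaged_spacing(nominal_dist)
s1 = imaged_spacing(nominal_dist + 30e-6)   # CCD displaced by 30 microns

# invert the model: recover the axial offset from the relative spacing change
dz = (s1 - s0) / s0 * nominal_dist
print(round(dz * 1e6, 3))                   # recovered offset in microns
```

The fractional change in spacing is proportional to the axial offset, which is why comparing spacings across the mosaic maps directly onto a flatness map.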

  6. Comparison of mechanically egg-triggered cameras and time-lapse video cameras in identifying predators at Dusky Flycatcher nests

    Microsoft Academic Search

    Joseph R. Liebezeit; T. Luke George

    2003-01-01

    We compared the effectiveness and reliability of mechanically egg-triggered set-cameras and time-lapse video cameras in identifying nest predators at active Dusky Flycatcher (Empidonax oberholseri) nests in Siskiyou County, California. We monitored 72 active flycatcher nests using camera systems from 1998 to 2000. Nest abandonment, hatching success, and daily survival rate did not differ between camera systems. Set-cameras were less

  7. A game-theoretic design for collaborative tracking in a video camera network

    Microsoft Academic Search

    Zongjie Tu; Prabir Bhattacharya

    2011-01-01

    Tracking a moving target of interest at a high resolution with a dynamically designated network camera while ensuring complete-coverage of the area under surveillance of a video camera network can be a challenging task. Game theory can be applied to the situation, treating individual cameras as players and area coverage as utility. Camera collaboration is needed when one camera handoff

  8. Camera model compensation for image integration of time-of-flight depth video and color video

    NASA Astrophysics Data System (ADS)

    Yamashita, Hiromu; Tokai, Shogo; Uchino, Shunpei

    2015-03-01

    In this paper, we consider a camera calibration method for a TOF depth camera used together with a color video camera to combine their images into colored 3D models of a scene. There are two main problems with this joint calibration. One is the stability of the TOF measurements, and the other is the deviation between the measured depth values and the actual distances implied by a geometrical camera model. To solve them, we propose a calibration method. First, we estimate an optimum offset distance and intrinsic parameters for the depth camera so that the measured depth values match their ideal ones. By applying the estimated offset to consecutive frames and compensating the measured values to actual distance values in each frame, we remove the difference between the camera models and suppress the noise appearing as temporal variation. For the estimation, we used Zhang's calibration method on the intensity image from the depth camera and the color video image of a chessboard pattern. Using this method, we can obtain 3D models in which depth and color information are matched correctly and stably. We also demonstrate the effectiveness of our approach with several experimental results.
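The constant-offset part of such a calibration can be sketched as a least-squares fit (synthetic distances below; the actual method also estimates intrinsic parameters via Zhang's calibration):

```python
import numpy as np

# Find the constant depth offset that best aligns TOF measurements with
# ground-truth distances; for a pure offset, least squares reduces to the
# mean residual. All numbers here are synthetic.
true_dist = np.array([0.8, 1.0, 1.5, 2.0, 2.5])   # m, chessboard poses
measured = true_dist + 0.12 + np.array([0.01, -0.02, 0.0, 0.02, -0.01])

offset = np.mean(measured - true_dist)   # least-squares constant offset
corrected = measured - offset            # compensated depth values
print(round(offset, 3))
```

Applying the same fitted offset to every subsequent frame, rather than re-estimating it per frame, is what suppresses the frame-to-frame temporal variation described above.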

  9. Performance of an Analog ASIC Developed for X-ray CCD Camera Readout System Onboard Astronomical Satellite

    Microsoft Academic Search

    Hiroshi Nakajima; Daisuke Matsuura; Naohisa Anabuki; Emi Miyata; Hiroshi Tsunemi; John P. Doty; Hirokazu Ikeda; Takeshi Takashima; Haruyoshi Katayama

    2009-01-01

    We present the performance and radiation tolerance of an analog application-specific integrated circuit (ASIC) developed for the engineering model of the X-ray charge-coupled device (CCD) camera onboard the next Japanese astronomical satellite. The ASIC has four identical channels, each equipped with a preamplifier and two ΔΣ analog-to-digital converters. The 3 mm square bare chip has been packaged into the

  10. Characterization of OCam and CCD220: the fastest and most sensitive camera to date for AO wavefront sensing

    Microsoft Academic Search

    Philippe Feautrier; Jean-Luc Gach; Philippe Balard; Christian Guillaume; Mark Downing; Norbert Hubin; Eric Stadler; Yves Magnard; Michael Skegg; Mark Robbins; Sandy Denney; Wolfgang Suske; Paul Jorden; Patrick Wheeler; Peter Pool; Ray Bell; David Burt; Ian Davies; Javier Reyes; Manfred Meyer; Dietrich Baade; Markus Kasper; Robin Arsenault; Thierry Fusco; José Javier Diaz-Garcia

    2010-01-01

    For the first time, sub-electron read noise has been achieved with a camera suitable for astronomical wavefront-sensing (WFS) applications. The OCam system has demonstrated this performance at a 1300 Hz frame rate with a 240×240-pixel frame format. ESO and JRA2 OPTICON2 have jointly funded e2v technologies to develop a custom CCD for Adaptive Optics (AO) wavefront sensing applications. The device, called

  11. MOA-cam3: a wide-field mosaic CCD camera for a gravitational microlensing survey in New Zealand

    E-print Network

    T. Sako; T. Sekiguchi; M. Sasaki; K. Okajima; F. Abe; I. A. Bond; J. B. Hearnshaw; Y. Itow; K. Kamiya; P. M. Kilmartin; K. Masuda; Y. Matsubara; Y. Muraki; N. J. Rattenbury; D. J. Sullivan; T. Sumi; P. Tristram; T. Yanagisawa; P. C. M. Yock

    2008-04-04

    We have developed a wide-field mosaic CCD camera, MOA-cam3, mounted at the prime focus of the Microlensing Observations in Astrophysics (MOA) 1.8-m telescope. The camera consists of ten E2V CCD4482 chips, each having 2kx4k pixels, and covers a 2.2 deg^2 field of view with a single exposure. The optical system is well optimized to realize uniform image quality over this wide field. The chips are constantly cooled by a cryocooler at -80C, at which temperature dark current noise is negligible for a typical 1-3 minute exposure. The CCD output charge is converted to a 16-bit digital signal by the GenIII system (Astronomical Research Cameras Inc.) and readout is within 25 seconds. Readout noise of 2--3 ADU (rms) is also negligible. We prepared a wide-band red filter for an effective microlensing survey and also Bessell V, I filters for standard astronomical studies. Microlensing studies have entered into a new era, which requires more statistics, and more rapid alerts to catch exotic light curves. Our new system is a powerful tool to realize both these requirements.

  12. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 µm from the current 24 µm spatial resolution (FWHM). The 400 eV-1600 eV X-rays detected by this spectrometer primarily interact within the field-free region of the CCD, producing electron clouds which diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 µm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio, resulting in worst-case spatial resolution measurements of 4.5±0.2 µm and 3.9±0.1 µm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 µm and 3.0 µm at 530 eV and 680 eV, well below the resolution limit of 5 µm required to improve the spectral resolution by a factor of 2.
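Center-of-mass centroiding of a split event, the core of the sub-pixel technique described above, can be sketched as follows (the 3×3 charge island is synthetic, with values chosen by hand):

```python
import numpy as np

# Charge-weighted centroid of an X-ray event split across a 3x3 pixel
# island: the weighted mean locates the interaction at sub-pixel precision.
island = np.array([[ 5., 12.,  8.],
                   [10., 60., 45.],
                   [ 6., 30., 24.]])

rows, cols = np.mgrid[0:3, 0:3]
total = island.sum()
row_c = (island * rows).sum() / total   # charge-weighted row coordinate
col_c = (island * cols).sum() / total   # charge-weighted column coordinate
print(round(row_c, 2), round(col_c, 2))
```

Because the diffused charge cloud samples the interaction point, the weighted mean lands between pixel centers, which is how resolution finer than the 16 µm pixel pitch is obtained.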

  13. Highly flexible and Internet-programmable CCD camera with a frequency-selectable read-out for imaging and spectroscopy applications

    Microsoft Academic Search

    Luca Gori; Emanuele Pace; Leonardo Tommasi; D. Sarocchi; V. Bagnoli; M. Sozzi; S. Puri

    2001-01-01

    A new concept CCD camera is currently being realized at the XUV Lab of the Department of Astronomy and Space Science of the University of Florence. The main features we aim to get are a high level of versatility and a fast pixel rate. Within this project, a versatile CCD sequencer has been realized with interesting and innovative features. Based

  14. Hydrometry Using Numerical Video Camera Sheltered From Floods

    NASA Astrophysics Data System (ADS)

    Fourquet, G.; Saulnier, G.-M.

Accurate real-time flood management requires as many measurements as possible. This is not easy: for example, rating curves are established for large discharges with many difficulties (sampling river velocities during a flood is highly hazardous). Building a dense measurement system on a river is also expensive. There is thus a need for light (i.e. cheap) measurement systems that can work during strong floods and that can help to manage ungauged catchments. The work presented here tries to contribute to this task. A hydrological measurement system using a numerical video camera is presented. This system can feed the manager with images of the flood in real time, which is an efficient way to assess the flood hazard. At the same time, detection algorithms are run on the images from the video camera to obtain two additional pieces of information: the water level and the water surface velocity. This information can be used as-is in an alarm system or be assimilated by a hydrological flood-forecasting model. This is a non-contact measurement system which keeps the most expensive part (the camera) sheltered from the floods; costs are low compared to setting up a gauge station, and it allows non-permanent measurement networks, for temporary intense observation periods for example. First results of the water level detection algorithm will be presented with considerations on measurement uncertainty. Tests are made on the Isere river at Grenoble (5800 km2), France.

  15. Video Chat with Multiple Cameras John MacCormick, Dickinson College

    E-print Network

    MacCormick, John

    Video Chat with Multiple Cameras. John MacCormick, Dickinson College. ABSTRACT: The dominant paradigm for video chat employs a single webcam at each end of the conversation. For many purposes, this is perfectly ... employing up to four webcams simultaneously demonstrate that multi-camera video chat is feasible

  16. The Terrascope Dataset: A Scripted Multi-Camera Indoor Video Surveillance Dataset with Ground-truth

    E-print Network

    Kale, Amit

    The Terrascope Dataset: A Scripted Multi-Camera Indoor Video Surveillance Dataset with Ground-truth introduces a new video surveillance dataset that was captured by a network of synchronized cameras placed ... of research efforts related to video surveillance in multiple, potentially non-overlapping, camera networks

  17. Classification of volcanic ash particles from Sakurajima volcano using CCD camera image and cluster analysis

    NASA Astrophysics Data System (ADS)

    Miwa, T.; Shimano, T.; Nishimura, T.

    2012-12-01

    Quantitative and speedy characterization of volcanic ash particles is needed for petrologic monitoring of an ongoing eruption. We develop a new, simple system using CCD camera images for quantitatively characterizing ash properties, and apply it to volcanic ash collected at Sakurajima. Our method characterizes volcanic ash particles by 1) apparent luminance through RGB filters and 2) a quasi-fractal dimension of the particle shape. Using a monochromatic CCD camera (Starshoot by Orion Co. LTD.) attached to a stereoscopic microscope, we capture digital images of ash particles that are set on a glass plate under which white colored paper or a polarizing plate is placed. Images of 1390 x 1080 pixels are taken through three color filters (Red, Green and Blue) under incident light and under light transmitted through the polarizing plate. The brightness of the light sources is kept constant, and luminance is calibrated against white and black colored papers. About fifteen ash particles are set on the plate at a time, and their images are saved in bitmap format. We first extract the outlines of particles from the image taken under transmitted light through the polarizing plate. Then, luminance for each color is represented by 256 tones at each pixel within a particle, and the average and its standard deviation are calculated for each ash particle. We also measure the quasi-fractal dimension (qfd) of ash particles. We perform box counting, counting the numbers of 1×1-pixel and 128×128-pixel boxes that cover the area of the ash particle; the qfd is estimated as the ratio of the former number to the latter. These parameters are calculated using the software R. We characterize volcanic ash from the Showa crater of Sakurajima collected on two days (Feb 09, 2009, and Jan 13, 2010), and apply cluster analyses. 
Dendrograms are formed from the qfd and the following four parameters calculated from the luminance: Rf=R/(R+G+B), Gf=G/(R+G+B), Bf=B/(R+G+B), and total luminance=(R+G+B)/665. We classify the volcanic ash particles from the dendrograms into three groups based on Euclidean distance. The groups are named Group A, B and C in order of increasing average total luminance. The classification shows that the numbers of particles belonging to Groups A, B and C are 77, 25 and 6 in the Feb 09, 2009 sample, and 102, 19 and 6 in the Jan 13, 2010 sample, respectively. Examination under a stereoscopic microscope suggests that Groups A, B and C mainly correspond to juvenile, altered and free-crystal particles, respectively. The result of classification by the present method thus demonstrates a difference in the contribution of juvenile material between the two days. To evaluate the reliability of our classification, we classify pseudo-samples in which errors of 10% are added to the measured parameters. We apply our method to one thousand pseudo-samples, and the result shows that the numbers of particles classified into the three groups vary by less than 20% of the total number of 235 particles. Our system can classify 120 particles within 6 minutes, so we can easily increase the number of ash particles, enabling us to improve the reliability and resolution of the classification and to speedily capture temporal changes in ash properties at active volcanoes.
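The luminance parameters and the box-counting ratio described above can be sketched as follows (synthetic luminance values and particle mask; the box sizes are scaled down from the 1×1 / 128×128 pixels used in the abstract):

```python
import numpy as np

# Normalized color parameters and total luminance, as defined above.
R, G, B = 180.0, 120.0, 90.0                 # mean luminances (0-255 scale)
s = R + G + B
Rf, Gf, Bf = R / s, G / s, B / s
total_luminance = s / 665.0                  # normalization from the abstract
print(round(Rf, 3), round(total_luminance, 3))

# Box counting on a binary particle mask: ratio of occupied fine boxes
# (1x1) to occupied coarse boxes (here 4x4) as a quasi-fractal descriptor.
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 5:11] = True                      # a blocky synthetic "particle"
fine = mask.sum()                            # occupied 1x1 boxes
coarse = mask.reshape(4, 4, 4, 4).any(axis=(1, 3)).sum()  # occupied 4x4 boxes
qfd = fine / coarse                          # ratio used as the qfd
print(fine, coarse)
```

Feature vectors of (qfd, Rf, Gf, Bf, total luminance) per particle are then what a hierarchical clustering with Euclidean distance would group into the dendrogram.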

  18. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    SciTech Connect

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 µm square pixels, and 15 µm thickness. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy-dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.
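A flat-field correction of the kind such calibration data enables can be sketched as follows (synthetic frames and gain map, not the SXI pipeline):

```python
import numpy as np

# Per-pixel gain map from a uniformly illuminated flat exposure, then
# divide science frames by it to remove the sensitivity variation.
rng = np.random.default_rng(1)
true_gain = 1.0 + 0.05 * rng.standard_normal((64, 64))  # pixel sensitivity

flat = 1000.0 * true_gain            # flat exposure under uniform illumination
gain_map = flat / flat.mean()        # normalized per-pixel response

science = 500.0 * true_gain          # raw frame carries the same gain pattern
corrected = science / gain_map       # divide out the sensitivity variation

# residual non-uniformity after correction (should be essentially zero)
print(round(float(corrected.std() / corrected.mean()), 6))
```

Since the measured variation is energy dependent, a real correction would need a gain map per energy band rather than the single map used in this sketch.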

  19. Correction of Spatially Varying Image and Video Motion Blur Using a Hybrid Camera

    E-print Network

    Kim, Dae-Shik

    -blurred, high-resolution images yields high-frequency details, but with ringing artifacts due to the lack of low-frequency content. We address this with a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary

  20. Video Surveillance using a Multi-Camera Tracking and Fusion System

    E-print Network

    Paris-Sud XI, Université de

    Video Surveillance using a Multi-Camera Tracking and Fusion System. Zhong Zhang, Andrew Scanlon, Weihong Yin, Li Yu, Péter L. Venetianer, ObjectVideo Inc. {zzhang, ascanlon, wyin, liyu, pvenetianer}@ObjectVideo.com. Abstract: Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems

  1. High-speed CCD movie camera with random pixel selection, for neurobiology research

    E-print Network

    recording, high-speed programmable CCD, neural imaging. 1. INTRODUCTION The brain is probably the most recall. This amazing organ effortlessly provides creativity, emotions, language, and many other brain/L) conversion technology, high-speed read

  2. Acceptance/operational test report 103-SY and 101-SY tank camera purge system and 103-SY video camera system

    SciTech Connect

    Castleberry, J.L.

    1994-11-01

    This Acceptance/Operational Test Report will document the satisfactory operation of the 103-SY/101-SY Purge Control System and the 103-SY Video Camera System after installation into riser 5B of tank 241-SY-103.

  3. Applications of visible CCD cameras on the Alcator C-Mod C. J. Boswell, J. L. Terry, B. Lipschultz, J. Stillerman

    E-print Network

    Boswell, Christopher

    a wide-angle view of the tokamak. All five of the CCD cameras are off-the-shelf remote-head "pencil field coils and magnetic fields of up to 4 T. Fig. 1 shows the location of the cameras in the reentrant

  4. Photon-counting gamma camera based on columnar CsI(Tl) optically coupled to a back-illuminated CCD

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Barber, H. Bradford; Barrett, Harrison H.; Chen, Liying; Taylor, Sean J.

    2007-03-01

    Recent advances have been made in a new class of CCD-based, single-photon-counting gamma-ray detectors which offer sub-100 µm intrinsic resolutions [1-7]. These detectors show great promise in small-animal SPECT and molecular imaging and exist in a variety of configurations. Typically, a columnar CsI(Tl) scintillator or a radiography screen (Gd2O2S:Tb) is imaged onto the CCD. Gamma-ray interactions are seen as clusters of signal spread over multiple pixels. When the detector is operated in a charge-integration mode, signal spread across pixels results in spatial-resolution degradation. However, if the detector is operated in photon-counting mode, the gamma-ray interaction position can be estimated using either Anger (centroid) estimation or maximum-likelihood position estimation, resulting in a substantial improvement in spatial resolution [2]. Due to the low-light-level nature of the scintillation process, CCD-based gamma cameras implement an amplification stage in the CCD via electron multiplying (EMCCDs) [8-10] or via an image intensifier prior to the optical path [1]. We have applied ideas and techniques from previous systems to our high-resolution LumiSPECT detector [11, 12]. LumiSPECT is a dual-modality optical/SPECT small-animal imaging system which was originally designed to operate in charge-integration mode. It employs a cryogenically cooled, high-quantum-efficiency, back-illuminated large-format CCD and operates in single-photon-counting mode without any intermediate amplification process. Operating in photon-counting mode, the detector has an intrinsic spatial resolution of 64 µm compared to 134 µm in integrating mode.
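    The centroid (Anger) position estimate for a single cluster can be sketched as follows (toy Gaussian cluster; cluster detection and maximum-likelihood estimation are not shown):

```python
import numpy as np

def cluster_centroid(frame, threshold):
    """Intensity-weighted centroid (row, col) of the above-threshold pixels
    of one gamma-ray light cluster -- an Anger-style position estimate."""
    w = np.where(frame > threshold, frame - threshold, 0.0)  # baseline-subtracted weights
    ys, xs = np.indices(frame.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```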

  5. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying.
For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of the corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
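    A minimal sketch of how a single camera can recover all three coordinates of a target of known physical size under a pinhole model (the focal length in pixels `f_px`, target width `W`, and principal point are assumed known; this is an illustration, not the system's actual algorithm):

```python
def target_position(u, v, w_px, W, f_px, cx, cy):
    """Pinhole estimate: depth z from the apparent pixel width w_px of a
    target of known physical width W, then x, y by back-projecting the
    target's pixel centre (u, v) through the principal point (cx, cy)."""
    z = f_px * W / w_px          # similar triangles: W / z = w_px / f_px
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z
```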

  6. Correction of Spatially Varying Image and Video Motion Blur using a Hybrid Camera

    E-print Network

    Washington at Seattle, University of

    -resolution images yields high-frequency details. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower

  7. Evaluation of Motion Blur Considering Temporal Frequency Characteristics of Video Camera and LCD Systems

    NASA Astrophysics Data System (ADS)

    Chae, Seok-Min; Song, In-Ho; Lee, Sung-Hak; Sohng, Kyu-Ik

    In this study, we show that motion blur is caused by the exposure time of the video camera as well as by the characteristics of the LCD system. We also suggest an evaluation method for motion picture quality according to the frequency responses of the video camera and of LCD systems of the hold and scanning-backlight types.

  8. Calibration of stereo cameras using a non-linear distortion model [CCD sensory

    Microsoft Academic Search

    Juyang Weng; P. Cohen; M. Herniou

    1990-01-01

    A camera model is presented which accounts for major sources of camera distortion: radial, decentering, and thin-prism distortions. The proposed calibration procedure consists of two steps. In the first step, calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through nonlinear
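    The three distortion sources named above can be sketched in the general form such models take (a single radial coefficient; coefficient names and higher-order terms here are illustrative and vary between formulations):

```python
def distort(x, y, k1, p1, p2, s1, s2):
    """Apply radial (k1), decentering (p1, p2) and thin-prism (s1, s2)
    distortion to ideal normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    xd = x + x * k1 * r2 + (p1 * (r2 + 2 * x * x) + 2 * p2 * x * y) + s1 * r2
    yd = y + y * k1 * r2 + (p2 * (r2 + 2 * y * y) + 2 * p1 * x * y) + s2 * r2
    return xd, yd
```

    In the paper's two-step procedure, these coefficients would start at zero (the distortion-free closed-form solution) and be refined iteratively.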

  9. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may also pick up non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera of, for instance, 256 levels was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
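    The aliasing effect behind these non-physical modes can be sketched as follows (a synthetic sinusoid stands in for the camera data; the simple folding formula is a standard aliasing model, not necessarily the authors' exact one):

```python
import numpy as np

def predicted_alias_hz(true_hz, fs):
    """Non-physical mode predicted by folding the true frequency
    about the nearest multiple of the frame rate fs."""
    return abs(true_hz - round(true_hz / fs) * fs)

def observed_peak_hz(true_hz, fs, seconds=10):
    """Dominant FFT frequency of a sinusoid sampled at frame rate fs;
    above the Nyquist rate the peak appears at the aliased frequency."""
    n = int(fs * seconds)
    t = np.arange(n) / fs
    sig = np.sin(2 * np.pi * true_hz * t)
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs[spec[1:].argmax() + 1]   # skip the DC bin
```

    For example, a 70 Hz vibration filmed at 60 frames per second shows up as a spurious 10 Hz peak.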

  10. Influence of the TDI CCD camera takes pictures when high resolution satellite lateral swaying

    Microsoft Academic Search

    Xiubin Yang; Xing Zhong; Guang Jin; Liu Zhang; Zhiyuan Sun

    2009-01-01

    When the satellite was swaying and taking lateral images, the transfer velocity of the TDI CCD photo-generated charge packets did not match the image motion velocity. According to the requirements of a high-resolution satellite taking lateral images in a low orbit, the factors that bring about the mismatch were analysed, including the Earth's rotation, the Earth's curvature, satellite

  11. High Resolution Measurements of Beach Face Morphology Using Stereo Video Cameras

    Microsoft Academic Search

    L. Clarke; R. Holman

    2006-01-01

    High resolution measurements of beach elevation are computed using images from a pair of video cameras viewing the same scene from different angles. Given the camera positions and camera calibration data, the beach face can be accurately reconstructed from 3-D coordinates computed at positions corresponding to every image pixel. Measurements of subaerial beach morphology at Duck Beach, North Carolina and

  12. Construction of a Junction Box for Use with an Inexpensive, Commercially Available Underwater Video Camera Suitable for Aquatic Research

    Microsoft Academic Search

    Steven J. Cooke; Christopher M. Bunt

    2004-01-01

    Underwater video camera apparatus is an important fisheries research tool. Such cameras, developed and marketed for recreational anglers, provide an opportunity for researchers to easily obtain cost-effective and waterproof video apparatus for fisheries research. We detail a series of modifications to an inexpensive, commercially available underwater video camera (about US$125) that provide flexibility for deploying the equipment in the laboratory

  13. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  14. Compact pnCCD-based X-ray camera with high spatial and energy resolution: a color X-ray camera.

    PubMed

    Scharf, O; Ihle, S; Ordavo, I; Arkadiev, V; Bjeoumikhov, A; Bjeoumikhova, S; Buzanich, G; Gubzhokov, R; Günther, A; Hartmann, R; Kühbacher, M; Lang, M; Langhoff, N; Liebel, A; Radtke, M; Reinholz, U; Riesemeier, H; Soltau, H; Strüder, L; Thünemann, A F; Wedell, R

    2011-04-01

    For many applications there is a requirement for nondestructive analytical investigation of the elemental distribution in a sample. With the improvement of X-ray optics and spectroscopic X-ray imagers, full field X-ray fluorescence (FF-XRF) methods are feasible. A new device for high-resolution X-ray imaging, an energy- and spatially resolving X-ray camera, is presented. The basic idea behind this so-called "color X-ray camera" (CXC) is to combine an energy dispersive array detector for X-rays, in this case a pnCCD, with polycapillary optics. Imaging is achieved using multiframe recording of the energy and the point of impact of single photons. The camera was tested using a laboratory 30 µm microfocus X-ray tube and synchrotron radiation from BESSY II at the BAMline facility. These experiments demonstrate the suitability of the camera for X-ray fluorescence analytics. The camera simultaneously records 69,696 spectra with an energy resolution of 152 eV for manganese Kα with a spatial resolution of 50 µm over an imaging area of 12.7 × 12.7 mm². It is sensitive to photons in the energy region between 3 and 40 keV, limited by a 50 µm beryllium window and the sensitive thickness of 450 µm of the chip. Online preview of the sample is possible as the software updates the sums of the counts for certain energy channel ranges during the measurement and displays 2-D false-color maps as well as spectra of selected regions. The complete data cube of 264 × 264 spectra is saved for further qualitative and quantitative processing. PMID:21355541
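    The multiframe photon-counting read-out and the energy-windowed false-color maps can be sketched as follows (toy dimensions; the real cube is 264 × 264 pixels, and this is an illustration of the data model, not the CXC software):

```python
import numpy as np

def accumulate_events(events, shape, channels):
    """Multiframe photon counting: each event is (row, col, energy_channel);
    counts accumulate into a spectra cube shaped (rows, cols, channels)."""
    cube = np.zeros(shape + (channels,), dtype=np.int32)
    for r, c, e in events:
        cube[r, c, e] += 1
    return cube

def false_color_map(cube, lo, hi):
    """Sum counts over energy channels [lo, hi) for every pixel -> 2-D map,
    as used for the online false-color preview."""
    return cube[:, :, lo:hi].sum(axis=2)
```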

  15. Automated morphometry of corneal endothelial cell: use of video camera and video tape recorder.

    PubMed Central

    Nishi, O; Hanasaki, K

    1988-01-01

    We developed an apparatus for automated morphometry of the corneal endothelium, which was photographed through a specular microscope connected to a video camera, and the images were stored on video tape. The clearest stationary image was input into an image analyser to determine the cell boundaries automatically. Although human interaction is generally necessary, the mean time required to complete this procedure was about 13 minutes, based on the results of 30 normal eyes, and the time needed for manual correction was about 4 minutes. The mean cell area obtained by this method correlated well (r = 0.9335) with that obtained by tracing the same images. This apparatus is clinically useful for immediately obtaining the mean cell area of the corneal endothelium and will extend the application of specular microscopy to the routine clinical setting. PMID:3342222

  16. Data acquisition at the 700mm CCD camera of the Ondřejov telescope coudé spectrograph

    NASA Astrophysics Data System (ADS)

    Slechta, Miroslav; Skoda, Petr

    We present a short tutorial on driving the 2-meter telescope coudé spectrograph and on the acquisition of scientific data. We hope that visitors collaborating with the stellar department will find this tutorial a useful cookbook. Although the user interface driving the spectrograph and CCD controller is quite comfortable, it is not possible to find or understand all keywords and commands intuitively. We therefore present this manual containing all necessary commands with descriptions and examples.

  17. A toolkit for the characterization of CCD cameras for transmission electron microscopy

    Microsoft Academic Search

    M. Vulovic; B. Rieger; L. J. Van Vliet; A. J. Koster; R. B. G. Ravelli

    2009-01-01

    Charge-coupled devices (CCD) are nowadays commonly utilized in transmission electron microscopy (TEM) for applications in life sciences. Direct access to digitized images has revolutionized the use of electron microscopy, sparking developments such as automated collection of tomographic data, focal series, random conical tilt pairs and ultralarge single-particle data sets. Nevertheless, for ultrahigh-resolution work photographic plates are often still preferred. In

  18. A toolkit for the characterization of CCD cameras for transmission electron microscopy

    Microsoft Academic Search

    M. Vulovic; B. Rieger; L. J. van Vliet; A. J. Koster; R. B. G. Ravelli

    2010-01-01

    Charge-coupled devices (CCD) are nowadays commonly utilized in transmission electron microscopy (TEM) for applications in life sciences. Direct access to digitized images has revolutionized the use of electron microscopy, sparking developments such as automated collection of tomographic data, focal series, random conical tilt pairs and ultralarge single-particle data sets. Nevertheless, for ultrahigh-resolution work photographic plates are often still preferred. In

  19. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

  20. Single-step calibration, prediction and real samples data acquisition for artificial neural network using a CCD camera.

    PubMed

    Maleki, N; Safavi, A; Sedaghatpour, F

    2004-11-15

    An artificial neural network (ANN) model is developed for the simultaneous determination of Al(III) and Fe(III) in alloys, using chrome azurol S (CAS) as the chromogenic reagent and a CCD camera as the detection system. All calibration, prediction and real-sample data were obtained by taking a single image. Experimental conditions were established to reduce interferences and increase sensitivity and selectivity in the analysis of Al(III) and Fe(III). An artificial neural network consisting of three layers of nodes was trained by applying a back-propagation learning rule. Sigmoid transfer functions were used in the hidden and output layers to facilitate nonlinear calibration. Both Al(III) and Fe(III) can be determined in the concentration range of 0.25-4 µg ml(-1) with satisfactory accuracy and precision. The proposed method was also applied satisfactorily to the determination of the considered metal ions in two synthetic alloys. PMID:18969677
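    A minimal sketch of such a three-layer sigmoid network trained by back-propagation (synthetic data, not the paper's calibration set; biases and input scaling are omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, Y, hidden=5, lr=0.5, epochs=2000, seed=0):
    """Input -> sigmoid hidden layer -> sigmoid output layer, trained by
    plain back-propagation on mean-squared error. Returns weights and the
    per-epoch training loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        losses.append(np.mean((O - Y) ** 2))
        dO = (O - Y) * O * (1 - O)          # output-layer delta
        dH = (dO @ W2.T) * H * (1 - H)      # hidden-layer delta
        W2 -= lr * H.T @ dO / len(X)
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2, losses

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)
```

    In the paper's setting, the inputs would be colour features read from the CCD image and the two outputs the Al(III) and Fe(III) concentrations.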

  1. Infrared imaging spectrometry by the use of bundled chalcogenide glass fibers and a PtSi CCD camera

    NASA Astrophysics Data System (ADS)

    Saito, Mitsunori; Kikuchi, Katsuhiro; Tanaka, Chinari; Sone, Hiroshi; Morimoto, Shozo; Yamashita, Toshiharu T.; Nishii, Junji

    1999-10-01

    A coherent fiber bundle for infrared image transmission was prepared by arranging 8400 chalcogenide (As-S) glass fibers. The fiber bundle, 1 m in length, is transmissive in the infrared spectral region of 1-6 µm. A remote spectroscopic imaging system was constructed with the fiber bundle and an infrared PtSi CCD camera. The system was used for the real-time observation (frame time: 1/60 s) of gas distribution. Infrared light from a SiC heater was delivered to a gas cell through a chalcogenide fiber, and the transmitted light was observed through the fiber bundle. A band-pass filter was used for the selection of gas species. A He-Ne laser of 3.4 µm wavelength was also used for the observation of hydrocarbon gases. Gases bursting from a nozzle were observed successfully by the remote imaging system.

  2. A 2 million pixel FIT-CCD image sensor for HDTV camera system

    Microsoft Academic Search

    K. Yonemoto; T. Iizuku; S. Nakamura; K. Harada; K. Wada; M. Negishi; H. Yamada; T. Tsunakawa; K. Shinohara; T. Ishimaru; Y. Kamide; T. Yamasaki; M. Yamagishi

    1990-01-01

    The image area of the FIT (frame-interline-transfer)-CCD (charge-coupled-device) image sensor is 14.0 mm (H) × 7.9 mm (V), the effective number of pixels is 1920 (H) × 1036 (V), and the unit cell size of a pixel is 7.3 µm (H) × 7.6 µm (V). These specifications are for the high-definition-television (HDTV) format. The horizontal shift register consists of dual-channel, two-phase CCDs driven

  3. Development of Slow Scan Digital CCD Camera for Low light level Image

    Microsoft Academic Search

    YAOYU CHENG; YAN HU; YONGHONG LI

    This paper studies the development of a low-cost, high-resolving-power scientific-grade camera for low-light-level imaging whose images can be received by a computer. The main performance parameters and the readout driving signals are introduced, and the overall scheme of image acquisition is designed. Using the computer's Enhanced Parallel Port and the pipelining work method of readout,

  4. Operational test procedure 241-AZ-101 waste tank color video camera system

    SciTech Connect

    Robinson, R.S.

    1996-10-30

    The purpose of this procedure is to provide a documented means of verifying that all of the functional components of the 241-AZ- 101 Waste Tank Video Camera System operate properly before and after installation.

  5. Fused Six-Camera Video of STS-134 Launch - Duration: 79 seconds.

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  6. Station Cameras Capture New Videos of Hurricane Katia - Duration: 5 minutes, 36 seconds.

    NASA Video Gallery

    Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

  7. Computer-assisted skull identification system using video superimposition

    Microsoft Academic Search

    Mineo Yoshino; Hideaki Matsuda; Satoshi Kubota; Kazuhiko Imaizumi; Sachio Miyasaka; Sueshige Seta

    1997-01-01

    This system consists of two main units, namely a video superimposition system and a computer-assisted skull identification system. The video superimposition system is comprised of the following five parts: a skull-positioning box having a monochrome CCD camera, a photo-stand having a color CCD camera, a video image mixing device, a TV monitor and a videotape recorder. The computer-assisted skull identification

  8. Progress of the x-ray CCD camera development for the eROSITA telescope

    NASA Astrophysics Data System (ADS)

    Meidinger, Norbert; Andritschke, Robert; Aschauer, Florian; Bornemann, Walter; Emberger, Valentin; Eraerds, Tanja; Fürmetz, Maria; Hälker, Olaf; Hartner, Gisela; Kink, Walter; Müller, Siegfried; Pietschner, Daniel; Predehl, Peter; Reiffers, Jonas; Walther, Sabine; Weidenspointner, Georg

    2013-09-01

    The eROSITA space telescope is presently developed for the determination of cosmological parameters and the equation of state of dark energy via the evolution of galaxy clusters. It will in addition perform a census of the obscured black hole growth in the Universe. The instrument development was also strongly motivated by the intention of a first imaging X-ray all-sky survey above an energy of 2 keV. eROSITA is a scientific payload on the Russian research satellite SRG, and the mission duration is scheduled for 7.5 years. The instrument comprises an array of seven identical, parallel-aligned telescopes. The mirror system is of Wolter-I type and the focal plane is equipped with a PNCCD camera for each of the telescopes. This instrumentation permits spectroscopy and imaging of X-rays in the energy band from 0.3 keV to 10 keV with a field of view of 1.0 degree. The camera development is done at the Max-Planck-Institute for Extraterrestrial Physics and in particular the key component, the PNCCD sensor, has been designed and fabricated at the semiconductor laboratory of the Max-Planck Society. All produced devices have been tested and the best selected for the eROSITA project. Based on calculations, simulations, and experimental testing of prototype systems, the flight cameras have been configured. We describe the detector and its performance, the camera design and electronics, the thermal system, and report on the latest estimates of the expected radiation damage taking into account the generation of secondary neutrons. The most recent test results will be presented as well as the status of the instrument development.

  9. Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura

    E-print Network

    North Carolina at Chapel Hill, University of

    a method for finding the position, orientation, and field of view of a video camera mounted on a tracked head-mounted display. The problem of finding the camera's position and orientation is thus one of finding its fixed position and orientation relative to the reported position of the tracking system

  10. Capturing and analyzing stability of human body motions using video cameras

    Microsoft Academic Search

    Yoshihisa Shinagawa; Jun-ichi Nakajinia; Tosiyasu L. Kunii; Kazuhiro Hara

    1997-01-01

    The need for capturing human body motions has been increasing recently for making movies, sports instruction systems and robots that can simulate human motions. The paper proposes a method to facilitate motion capturing using inexpensive video cameras. In our system, a few cameras are used to obtain multiple views of a human body and a three dimensional (3D) volume consistent

  11. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction

    Microsoft Academic Search

    Amanda P. Silvatti; Pietro Cerveri; Thiago Telles; Fábio A. S. Dias; Guido Baroni; Ricardo M. L. Barros

    2012-01-01

    In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of
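    The classical DLT solution mentioned above can be sketched as a linear least-squares fit of the 11 camera parameters (synthetic control points; not the authors' implementation):

```python
import numpy as np

def dlt_project(L, X, Y, Z):
    """Project a 3-D point through an 11-parameter DLT camera."""
    d = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / d
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / d
    return u, v

def dlt_calibrate(xyz, uv):
    """Recover the 11 DLT parameters from >= 6 known 3-D control points
    and their 2-D image coordinates, by linear least squares."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L
```

    Bundle adjustment and Zhang's method, compared in the study, refine or replace this linear solution.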

  12. Detecting Abandoned Packages in a Multi-Camera Video Surveillance System

    Microsoft Academic Search

    Michael D. Beynon; Daniel J. Van Hook; Michael Seibert; Alen Peacock; Dan E. Dudgeon

    2003-01-01

    We describe a video surveillance system that detects abandoned packages automatically. In this system, multiple cameras locate objects in space and time despite occlusions and distracting lighting effects observed by subsets of the cameras. A multiple-state model of an abandoned package provides the ability to detect realistic abandoned package events. The paper outlines the system by describing the modules for

  13. Low cost referenced luminescent imaging of oxygen and pH with a 2-CCD colour near infrared camera.

    PubMed

    Ehgartner, Josef; Wiltsche, Helmar; Borisov, Sergey M; Mayr, Torsten

    2014-10-01

    A low cost imaging set-up for optical chemical sensors based on NIR-emitting dyes is presented. It is based on a commercially available 2-CCD colour near infrared camera, LEDs and tailor-made optical sensing materials for oxygen and pH. The set-up extends common ratiometric RGB imaging based on the red, green and blue channels of colour cameras by an additional NIR channel. The hardware and software of the camera were adapted to perform ratiometric imaging. A series of new planar sensing foils were introduced to image oxygen, pH and both parameters simultaneously. The used NIR-emitting indicators are based on benzoporphyrins and aza-BODIPYs for oxygen and pH, respectively. Moreover, a wide dynamic range oxygen sensor is presented. It allows accurate imaging of oxygen from trace levels up to ambient air concentrations. The imaging set-up in combination with the normal range ratiometric oxygen sensor showed a resolution of 4-5 hPa at low oxygen concentrations (<50 hPa) and 10-15 hPa at ambient air oxygen concentrations; the trace range oxygen sensor (<20 hPa) revealed a resolution of about 0.5-1.8 hPa. The working range of the pH-sensor was in the physiological region from pH 6.0 up to pH 8.0 and showed an apparent pKa-value of 7.3 with a resolution of about 0.1 pH units. The performance of the dual parameter oxygen/pH sensor was comparable to the single analyte pH and normal range oxygen sensors. PMID:25096329
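    A sketch of the ratiometric oxygen read-out: the Stern-Volmer relation is the usual calibration model for such luminescence-quenching sensors (R0 and Ksv below are illustrative values, not the paper's calibration):

```python
def oxygen_from_ratio(I_nir, I_ref, R0, Ksv):
    """Ratiometric oxygen imaging: R = NIR indicator channel / reference
    channel, then the Stern-Volmer relation R0/R = 1 + Ksv * pO2 is
    inverted to give pO2. R0 is the ratio at zero oxygen; Ksv is the
    quenching constant in 1/hPa. Works per pixel on numpy arrays too."""
    R = I_nir / I_ref
    return (R0 / R - 1.0) / Ksv
```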

  14. Design and Implementation of a Wireless Video Camera Network for Coastal Erosion Monitoring

    E-print Network

    Little, Thomas

    dynamic range in both time and space. We describe recent work in the development of a wireless video camera network supported by solar energy harvesting. The cameras are Internet-enabled and thus live video can be accessed. The monitored coastline (shown in Figure 1) experienced bluff retreat of 18 meters. The highest rates have been observed along

  15. Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for Surveillance System

    E-print Network

    Evans, Brian L.

    Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for Surveillance Systems. This paper describes the design and implementation of a hybrid intelligent surveillance system … in 89 ms, which is within three frame-cycle periods for a 30 Hz video system. In addition, the real

  16. Rayleigh Laser Guide Star Systems: UnISIS Bow Tie Shutter and CCD39 Wavefront Camera

    E-print Network

    Laird A. Thompson; Scott W. Teare; Samuel L. Crawford; Robert W. Leach

    2002-07-10

    Laser guide star systems based on Rayleigh scattering require some means to deal with the flash of low altitude laser light that follows immediately after each laser pulse. These systems also need a fast shutter to isolate the high altitude portion of the focused laser beam to make it appear star-like to the wavefront sensor. We describe how these tasks are accomplished with UnISIS, the Rayleigh laser guided adaptive optics system at the Mt. Wilson Observatory 2.5-m telescope. We use several methods: a 10,000 RPM rotating disk, dichroics, a fast sweep and clear mode of the CCD readout electronics on a 10 $\mu$s timescale, and a Pockels cell shutter system. The Pockels cell shutter would be conventional in design if the laser light were naturally polarized, but the UnISIS 351 nm laser is unpolarized. So we have designed and put into operation a dual Pockels cell shutter in a unique bow tie arrangement.

  17. A digital color CCD imaging system using custom VLSI circuits

    Microsoft Academic Search

    K. A. Parulski; L. J. D'Luna; R. H. Hibbard

    1989-01-01

    The authors describe a prototype digital imaging system that can be configured as a single-sensor video camera or a film-to-video converter. The system includes a CCD (charge-coupled device) image sensor with a 3G color filter pattern, two full-custom CMOS digital video signal processing chips, and a custom electronically programmable sequencer chip. The CMOS VLSI digital circuits offer real-time operation while

  18. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  19. Digital video technology and production 101: lights, camera, action.

    PubMed

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion. PMID:24335238

  20. Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis.

    PubMed

    Kharfi, F; Denden, O; Bourenane, A; Bitam, T; Ali, A

    2012-01-01

    The spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objective of this work is the determination of a neutron imaging system's response in terms of spatial resolution. The proposed procedure is based on establishment of the Modulation Transfer Function (MTF). The imaging system being studied is based on a high-sensitivity CCD neutron camera (2×10⁻⁵ lx at f/1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution to the MTF determination is an accurate edge identification method and a procedure for resolving the line spread function undersampling problem. These methods and procedures are integrated into a MATLAB code. The methods, procedures and approaches proposed in this work are applicable to any other neutron imaging system and allow one to judge the ability of a neutron imaging system to reproduce the spatial properties (internal details) of any object under examination. PMID:22014891
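    The MTF determination described in this abstract starts from an edge image. A minimal sketch of the standard edge-to-MTF pipeline (differentiate the edge spread function to get the line spread function, then Fourier-transform and normalise); the paper's own edge-identification and undersampling-correction steps are not reproduced here:

```python
import numpy as np

def mtf_from_edge(esf):
    """Differentiate the edge spread function (ESF) to get the line
    spread function (LSF), then take the normalised magnitude of its
    Fourier transform to obtain the MTF."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# An ideal step edge has a delta-function LSF, hence a flat MTF;
# a real (blurred) edge would show the MTF rolling off with frequency.
edge = np.concatenate([np.zeros(32), np.ones(32)])
mtf = mtf_from_edge(edge)
```

    The spatial resolution limit is then typically read off as the frequency where the MTF drops below some threshold (e.g. 10%).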

  1. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  2. Temperature monitoring of Nd:YAG laser cladding (CW and PP) by advanced pyrometry and CCD-camera-based diagnostic tool

    NASA Astrophysics Data System (ADS)

    Doubenskaia, M.; Bertrand, Ph.; Smurov, Igor Y.

    2004-04-01

    A set of original pyrometers and a special diagnostic CCD camera were applied to the monitoring of Nd:YAG laser cladding (Pulsed-Periodic and Continuous Wave) with coaxial powder injection and to on-line measurement of the cladded layer temperature. The experiments were carried out in the course of developing wear-resistant coatings using various powder blends (WC-Co, CuSn, Mo, Stellite grade 12, etc.) with variation of different process parameters: laser power, cladding velocity, powder feeding rate, etc. The surface temperature distribution of the cladding seam and the overall temperature mapping were registered. The CCD-camera-based diagnostic tool was applied for: (1) monitoring of the flux of hot particles and its instability; (2) measurement of in-flight particle size and velocity; (3) monitoring of particle collision with the clad in the interaction zone.

  3. High-density 3-D packaging technology based on the sidewall interconnection method and its application for CCD micro-camera visual inspection system

    Microsoft Academic Search

    Hiroshi Yamada; Takashi Togasaki; M. Kimura; H. Sudo

    2003-01-01

    High-density three-dimensional (3-D) packaging technology for a charge coupled device (CCD) micro-camera visual inspection system module has been developed by applying high-density interconnection stacked unit modules. The stacked unit modules have fine-pitch flip-chip interconnections with Cu-column-based solder bumps and high-aspect-ratio Cu sidewall footprints for vertical interconnections. Cu-column-based solder bump design and underfill encapsulation resin characteristics were optimized to reduce the

  4. Temperature monitoring of Nd:YAG laser cladding (CW and PP) by advanced pyrometry and CCD-camera-based diagnostic tool

    Microsoft Academic Search

    M. Doubenskaia; Ph. Bertrand; Igor Y. Smurov

    2004-01-01

    A set of original pyrometers and a special diagnostic CCD camera were applied to the monitoring of Nd:YAG laser cladding (Pulsed-Periodic and Continuous Wave) with coaxial powder injection and to on-line measurement of the cladded layer temperature. The experiments were carried out in the course of developing wear-resistant coatings using various powder blends (WC-Co, CuSn, Mo, Stellite grade 12, etc.) with variation of

  5. Using Stationary-Dynamic Camera Assemblies for Wide-area Video Surveillance and Selective Attention

    Microsoft Academic Search

    Ankur Jain; Dan Kopell; Kyle Kakligian; Yuan-fang Wang

    2006-01-01

    In this paper, we present a prototype video surveillance system that uses stationary-dynamic (or master-slave) camera assemblies to achieve wide-area surveillance and selective focus-of-attention. We address two critical issues in deploying such camera assemblies in real-world applications: off-line camera calibration and on-line selective focus-of-attention. Our contributions over existing techniques are twofold: (1) in terms of

  6. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  7. Using hand-held point and shoot video cameras in clinical education.

    PubMed

    Stoten, Sharon

    2011-02-01

    Clinical educators are challenged to design and implement creative instructional strategies to provide employees with optimal clinical practice learning opportunities. Using hand-held video cameras to capture patient encounters or skills demonstrations involves employees in active learning and can increase dialogue between employees and clinical educators. The video that is created also can be used for evaluation and feedback. Hands-on experiences may energize employees with different talents and styles of learning. PMID:21323214

  8. The Evolution of Digital Imaging: From CCD to CMOS

    E-print Network

    La Rosa, Andres H.

    The Evolution of Digital Imaging: From CCD to CMOS (A Micron White Paper). Digital imaging began with CCDs, the grandfathers of the digital imaging revolution, which has all but converted cameras and video recorders from analog to digital. The processes for transforming optical signals into electrical charges have become increasingly efficient.

  9. Kids behind the Camera: Education for the Video Age.

    ERIC Educational Resources Information Center

    Berwick, Beverly

    1994-01-01

    Some San Diego teachers created the Montgomery Media Institute to tap the varied talents of young people attending area high schools and junior high schools. Featuring courses in video programming and production, photography, and journalism, this program engages students' interest while introducing them to fields with current employment…

  10. Spectral-based colorimetric calibration of a 3CCD color camera for fast and accurate characterization and calibration of LCD displays

    NASA Astrophysics Data System (ADS)

    Safaee-Rad, Reza; Aleksic, Milivoje

    2011-03-01

    LCD displays exhibit a significant amount of variability in their tone responses, color responses and backlight-modulation responses. LCD display characterization and calibration using a spectrometer or a color meter, however, has two basic deficiencies: (a) it can only generate calibration data based on a single spot on the display (usually at panel center); and (b) it generally takes a significant amount of time to do the required measurement. As a result, a fast and efficient system for full LCD display characterization and calibration is required. Herein, a system based on a colorimetrically calibrated 3CCD camera is presented which can be used for full characterization and calibration of LCD displays. The camera can provide full tristimulus measurements in real time. To achieve a high degree of accuracy, colorimetric calibration of the camera is carried out using a spectral method.
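    A common way to colorimetrically calibrate a camera, as described above, is to fit a 3×3 matrix mapping camera RGB to CIE XYZ from patches measured with a spectrometer. A hedged sketch with entirely synthetic patch data (the matrix values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical training data: camera RGB responses and the XYZ
# tristimulus values a spectrometer measured for the same patches.
rgb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])
true_M = np.array([[0.41, 0.36, 0.18],   # illustrative sRGB-like matrix
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = rgb @ true_M.T

# Least-squares fit of the 3x3 calibration matrix M with xyz ≈ rgb @ M.T.
M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M_T.T
```

    With more (noisy) patches the same least-squares fit averages out measurement error; spectral methods refine this by modelling the camera's spectral sensitivities rather than just its patch responses.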

  11. Large area x-ray sensitive video camera: overall feasibility

    Microsoft Academic Search

    Randy Luhta; John A. Rowlands

    1997-01-01

    A large area x-ray sensitive vidicon is an alternative to the x-ray image intensifier and television camera combination. The proposed x-ray vidicon utilizes an amorphous selenium photoconductive layer which has a higher intrinsic resolution in comparison to the input phosphor of an XRII. This higher resolution could benefit diagnostic cardiac angiography as well as interventional cardiac procedures, which now frequently

  12. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  13. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera

    Microsoft Academic Search

    1990-01-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed

  14. Velocity measurement of compressible air flows utilizing a high-speed video camera

    Microsoft Academic Search

    M. Raffel; J. Kompenhans; B. Stasicki; B. Bretthauer; G. E. A. Meier

    1995-01-01

    PIV-measurements of the flow above a pitching airfoil were conducted in a transonic wind tunnel. An ultra high-speed video camera was used for separate recording of two exposures. The data was analysed using the cross-correlation method. The results show the applicability of the technique in high speed flows.

  15. Improvement of olfactory video camera: gas/odor flow visualization system

    Microsoft Academic Search

    Hiroshi Ishida; Takafumi Tokuhiro; Takamichi Nakamoto; Toyosaka Moriizumi

    2002-01-01

    The “olfactory video camera” is a sensing system that helps locate a source of gas or odor. It consists of a portable homogeneous array of QCM gas sensors, and presents visualized images of the gas/odor flow reaching the array from the source location. In this paper, an improved version of the system is reported. A multichannel reciprocal frequency counter is

  16. Onboard video cameras and instruments to measure the flight behavior of birds

    Microsoft Academic Search

    J. A. Gillies; M. Bacic; A. L. R. Thomas; G. K. Taylor

    2008-01-01

    We have recently developed several novel techniques to measure flight kinematic parameters on free-flying birds of prey using onboard wireless video cameras and inertial measurement systems (1). Work to date has involved captive trained raptors including a Steppe Eagle (Aquila nipalensis), Peregrine Falcon (Falco peregrinus) and Gyrfalcon (Falco rusticolus). We aim to describe mathematically the dynamics of the relationship

  17. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  18. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    Microsoft Academic Search

    Robert Kilpatrick; Graeme Ewing; Tim Lamb; Dirk Welsford; Andrew Constable

    2011-01-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for

  19. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Evaluating hydrothermal discharge and its diffusion along the ocean ridge is necessary for understanding the balance of mass and flux in the ocean, the ecosystem around hydrothermal fields, and so on. However, it has been difficult to measure hydrothermal activity without disturbance from the observation platform (submersible, ROV, AUV). We therefore sought an observational method that captures hydrothermal discharge behavior as it is. DIDSON (Dual-Frequency IDentification SONar) is an acoustic lens-based sonar. It has sufficiently high resolution and a rapid refresh rate, so it can substitute for optical systems in turbid or dark water where optical systems fail. DIDSON operates at two frequencies, 1.8 MHz or 1.1 MHz, and forms 96 beams spaced 0.3° apart or 48 beams spaced 0.6° apart, respectively. It images out to 12 m at 1.8 MHz and 40 m at 1.1 MHz. The transmit and receive beams are formed with acoustic lenses with rectangular apertures, made of polymethylpentene plastic and FC-70 liquid. This physical beam forming allows DIDSON to consume only 30 W of power. DIDSON updates its image at between 20 and 1 frames/s depending on the operating frequency and the maximum range imaged. It communicates with its host over Ethernet. The Institute of Industrial Science, University of Tokyo (IIS) has recognized DIDSON's superior performance and has sought new ways to utilize it. The DIDSON-based observation systems that IIS has developed include a waterside surveillance system, an automatic measurement system for fish length, an automatic system for fish counting, a diagnosis system for deterioration of underwater structures, and so on. The next challenge is to develop a DIDSON-based method for observing hydrothermal discharge from seafloor vents. We expected DIDSON to reveal the whole image of a hydrothermal plume as well as detail inside the plume.
In October 2009, we conducted seafloor reconnaissance using the manned deep-sea submersible Shinkai6500 in the Central Indian Ridge at 18-20° S, where hydrothermal plume signatures had previously been detected. DIDSON was mounted on top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai6500 were conducted, and acoustic video images of hydrothermal plumes were captured on three of the seven dives. These are among the very few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.
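    As a quick consistency check on the beam geometry quoted in this abstract, both DIDSON operating modes cover the same angular swath:

```python
# Angular coverage implied by the DIDSON beam geometry quoted above:
# 96 beams at 0.3 deg spacing and 48 beams at 0.6 deg spacing
# sweep the same total field of view.
beams_hi, spacing_hi = 96, 0.3   # 1.8 MHz mode, degrees between beams
beams_lo, spacing_lo = 48, 0.6   # 1.1 MHz mode
fov_hi = beams_hi * spacing_hi   # total swath, degrees (~28.8)
fov_lo = beams_lo * spacing_lo
```

    The lower frequency trades angular sampling density for range (40 m vs. 12 m) while keeping the same overall swath.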

  20. Development of a compact fast CCD camera and resonant soft x-ray scattering endstation for time-resolved pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Doering, D.; Chuang, Y.-D.; Andresen, N.; Chow, K.; Contarato, D.; Cummings, C.; Domning, E.; Joseph, J.; Pepper, J. S.; Smith, B.; Zizka, G.; Ford, C.; Lee, W. S.; Weaver, M.; Patthey, L.; Weizeorick, J.; Hussain, Z.; Denes, P.

    2011-07-01

    The designs of a compact, fast CCD (cFCCD) camera, together with a resonant soft x-ray scattering endstation, are presented. The cFCCD camera consists of a highly parallel, custom, thick, high-resistivity CCD, readout by a custom 16-channel application specific integrated circuit to reach the maximum readout rate of 200 frames per second. The camera is mounted on a virtual-axis flip stage inside the RSXS chamber. When this flip stage is coupled to a differentially pumped rotary seal, the detector assembly can rotate about 100°/360° in the vertical/horizontal scattering planes. With a six-degrees-of-freedom cryogenic sample goniometer, this endstation has the capability to detect the superlattice reflections from the electronic orderings showing up in the lower hemisphere. The complete system has been tested at the Advanced Light Source, Lawrence Berkeley National Laboratory, and has been used in multiple experiments at the Linac Coherent Light Source, SLAC National Accelerator Laboratory.

  1. Development of a compact fast CCD camera and resonant soft x-ray scattering endstation for time-resolved pump-probe experiments.

    PubMed

    Doering, D; Chuang, Y-D; Andresen, N; Chow, K; Contarato, D; Cummings, C; Domning, E; Joseph, J; Pepper, J S; Smith, B; Zizka, G; Ford, C; Lee, W S; Weaver, M; Patthey, L; Weizeorick, J; Hussain, Z; Denes, P

    2011-07-01

    The designs of a compact, fast CCD (cFCCD) camera, together with a resonant soft x-ray scattering endstation, are presented. The cFCCD camera consists of a highly parallel, custom, thick, high-resistivity CCD, readout by a custom 16-channel application specific integrated circuit to reach the maximum readout rate of 200 frames per second. The camera is mounted on a virtual-axis flip stage inside the RSXS chamber. When this flip stage is coupled to a differentially pumped rotary seal, the detector assembly can rotate about 100°/360° in the vertical/horizontal scattering planes. With a six-degrees-of-freedom cryogenic sample goniometer, this endstation has the capability to detect the superlattice reflections from the electronic orderings showing up in the lower hemisphere. The complete system has been tested at the Advanced Light Source, Lawrence Berkeley National Laboratory, and has been used in multiple experiments at the Linac Coherent Light Source, SLAC National Accelerator Laboratory. PMID:21806178

  2. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. We define an objective function based on expected capture conditions such as camera-subject distance, pan and tilt angles of capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
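    The abstract defines an objective function over capture conditions without giving its form. A toy illustration (the weights and terms below are invented for this sketch, not taken from the paper) of scoring candidate PTZ cameras for a subject and picking the best:

```python
import math

def capture_score(distance_m, pan_deg, tilt_deg, face_visible,
                  w_dist=1.0, w_angle=0.5, w_face=2.0):
    """Toy objective combining camera-subject distance, pan/tilt
    deviation and face visibility; higher means a better capture."""
    angle_penalty = abs(pan_deg) + abs(tilt_deg)
    return (w_face * (1.0 if face_visible else 0.0)
            - w_dist * math.log1p(distance_m)
            - w_angle * math.radians(angle_penalty))

# Hypothetical candidate cameras for one tracked subject.
candidates = {
    "cam1": capture_score(5.0, 10.0, 5.0, True),   # close, face visible
    "cam2": capture_score(20.0, 40.0, 10.0, True), # far, large pan
    "cam3": capture_score(3.0, 5.0, 2.0, False),   # close, no face
}
best = max(candidates, key=candidates.get)
```

    A real scheduler would additionally penalize hand-off time and spread captures across subjects, as the paper describes.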

  3. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of a X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  4. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
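    A minimal sketch of merging multiple exposures into an HDR radiance map, in the spirit of the capture technique described above (the triangular weighting and linear sensor response are common simplifying assumptions, not the camera's documented pipeline):

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge several LDR exposures (values in [0,1]) into one HDR
    radiance map, using a triangular weighting that trusts mid-range
    pixels and down-weights under/over-exposed ones."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peak weight at mid-gray
        num += w * img / t                  # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)

# Hypothetical pixel seen at three exposure times: a linear sensor
# records radiance * t, clipped to the sensor's full-scale value.
radiance = 0.4
times = [0.5, 1.0, 2.0]
frames = [np.clip(np.array([[radiance * t]]), 0.0, 1.0) for t in times]
hdr = merge_hdr(frames, times)
```

    A hardware implementation like the one described streams this accumulation per pixel, so only the running numerator and denominator need to be stored between exposures.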

  5. 3D model generation using unconstrained motion of a hand-held video camera

    NASA Astrophysics Data System (ADS)

    Baker, C.; Debrunner, C.; Whitehorn, M.

    2006-02-01

    We have developed a shape and structure capture system which constructs accurate, realistic 3D models from video imagery taken with a single freely moving hand-held camera. Using an inexpensive off-the-shelf acquisition system such as a hand-held video camera, we demonstrate the feasibility of fast and accurate generation of these 3D models at very low cost. In our approach the operator freely moves the camera within some very simple constraints. Our process identifies and tracks high-interest image features and computes the relative pose of the camera based on those tracks. Using a RANSAC-like approach we solve for the camera pose and 3D structure based on a homography or essential matrix. Once we have the pose for many frames in the sequence, we perform correlation-based stereo to obtain dense point clouds. After these point clouds are computed, we integrate them into an octree. By replacing the points in a particular cell with statistics representing the point distribution, we can efficiently store the computed model. While being efficient, the integration technique also enables filtering based on occupancy counts, which eliminates many stereo outliers and results in an aesthetically pleasing, viewable 3D model. In this paper we describe our approach in detail as well as show reconstructed results of a synthetic room, an empty room, a lightly furnished room, and an experimental vehicle.
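    The RANSAC-like pose step described above can be illustrated with a minimal RANSAC homography estimator (a generic sketch of the standard DLT-plus-RANSAC recipe, not the authors' implementation):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT): solve for the 3x3 H with dst ~ H(src)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)        # null-space vector of A
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=1.0, seed=0):
    """Minimal RANSAC loop: sample 4 correspondences, fit H, count
    inliers by reprojection error, keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        pts = np.c_[src, np.ones(len(src))] @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best

# Synthetic correspondences under a known homography, plus one outlier.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(20, 2))
H_true = np.array([[1.0, 0.1, 5.0],
                   [0.0, 1.2, -3.0],
                   [5e-4, 0.0, 1.0]])
p = np.c_[src, np.ones(20)] @ H_true.T
dst = p[:, :2] / p[:, 2:3]
dst[0] += 50.0                      # corrupt one correspondence
H_est, inliers = ransac_homography(src, dst)
```

    The essential-matrix branch mentioned in the abstract follows the same sample-fit-score pattern, with a 5- or 8-point solver in place of the 4-point DLT.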

  6. A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Georgieva, E. M.; Huang, W.; Heaps, W. S.

    2012-01-01

    A portable remote sensing system for precision column measurements of methane has been developed, built and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera, and has the potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared Solutions, which incorporates an uncooled InGaAs detector array sensitive up to 1.7 micrometers. Custom software was developed in addition to the basic graphical user interface X-Control provided by the company to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes to the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.

  7. A Kalman Filtering Approach to 3-D IR Scene Prediction using Single-Camera Range Video

    Microsoft Academic Search

    Mehmet Celenk; James Graham; Don Venable; Mark Smearcheck

    2007-01-01

    This paper presents a Kalman filtering approach to predicting 3-D video infrared (IR) scenes as a CMOS multi-coordinate-axis sensory camera mounted on a mobile vehicle moves forward in a controlled environment. Potential applications of this research can be found in indoor/outdoor heat-change-based range measurement, synthetic IR scene generation, rescue missions, and autonomous navigation. Experimental results reported herein dictate
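    A constant-velocity Kalman filter is the standard building block behind prediction schemes like the one described above. A sketch for a single tracked coordinate (the noise covariances are assumed for illustration):

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a tracked
# scene point: state x = [position, velocity]^T.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.25]])                  # measurement noise (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

for z in [1.0, 2.0, 3.0, 4.0]:          # target moving 1 unit per frame
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # update state
    P = (np.eye(2) - K @ H) @ P         # update covariance
```

    After a few frames the velocity estimate converges toward the true motion, and the predict step alone then extrapolates the scene point to the next frame.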

  8. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    Microsoft Academic Search

    Robert P. Mueller; Richard S. Brown; Haakon H. Hop; Larry Moulton

    2006-01-01

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams

  9. Reconstruction of the Pose of Uncalibrated Cameras via User-Generated Videos

    E-print Network

    Bennett, Stuart; Lasenby, Joan; Kokaram, Anil; Inguva, Sasi; Birkbeck, Neil

    2014-01-01

    on camera positions, the depth estimate of which is particularly challenging. Ballan et al. further use audio to synchronize the videos, which vastly simplifies feature matching; unfortunately in sports event scenarios the recorded audio is highly localized…

  10. High-sensitive thermal video camera with self-scanned 128 InSb linear array

    Microsoft Academic Search

    Hiroyuki Fujisada

    1991-01-01

    A compact thermal video camera with very high sensitivity has been developed using a self-scanned 128-element InSb linear photodiode array. Two-dimensional images are formed by the self-scanning function of the linear array focal plane assembly in the horizontal direction and by a vibrating mirror in the vertical direction. Images of 128 × 128 pixels are obtained every

  11. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    SciTech Connect

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate.

  12. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
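The abstract lists white balance among the camera DSP functions but gives no implementation details. As a minimal illustrative sketch (not the authors' design), a gray-world white balance scales each channel so that all channel means match a reference channel; the function name and the choice of green as the reference are assumptions:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale channels so every channel mean matches the green-channel mean.

    img: float array of shape (H, W, 3), RGB order, values in [0, 1].
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = means[1] / means                  # green channel as reference
    return np.clip(img * gains, 0.0, 1.0)
```

A hardware DSP would typically apply the same per-channel gains, but computed over a decimated frame to keep memory low.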

  13. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    SciTech Connect

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  14. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyance. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
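The error metric described above, the distance from each grid point's known coordinates to its digitized coordinates, can be sketched in a few lines. The helper name and the normalization by grid extent are assumptions for illustration, not the study's code:

```python
import numpy as np

def grid_errors(known_xy, measured_xy, grid_extent):
    """Per-point position error, absolute and as a fraction of the grid size.

    known_xy, measured_xy: (N, 2) arrays of grid-point coordinates.
    grid_extent: characteristic length used to normalize (e.g. grid width).
    """
    err = np.linalg.norm(measured_xy - known_xy, axis=1)
    return err, err / grid_extent

# Hypothetical three-point example in the same units as the grid:
known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
meas  = np.array([[0.3, 0.4], [10.0, 0.0], [0.0, 10.5]])
abs_err, rel_err = grid_errors(known, meas, grid_extent=10.0)
# worst-case relative error, analogous to the "8 percent" figure in the text
worst = rel_err.max()
```

In the study, comparing the worst-case relative error between central and peripheral grid points is what motivates avoiding the outer regions of the wide-angle lens.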

  15. Laser applications and other topics in quantum electronics: Laser Doppler visualisation of the fields of three-dimensional velocity vectors with the help of a minimal number of CCD cameras

    NASA Astrophysics Data System (ADS)

    Dubnishchev, Yu N.

    2010-08-01

    We discuss the possibility of laser Doppler visualisation and measurement of the field of three-dimensional velocity vectors by suppressing the multiparticle scattering influence on the measurement results, when using one CCD camera. The coordinate measuring basis is formed due to switching of the directions and the frequency of spatially combined laser sheets, the frequency being synchronised with the CCD-camera operation. The field of the velocity vectors without the contribution from the multiparticle scattering is produced from the linear combinations of normalised laser sheet images detected with a CCD camera in a frequency-demodulated scattered light. The method can find applications not only in laser diagnostics of gas and condensed media but also in the Doppler spectroscopy of light fields scattered by multiparticle dynamic structures.

  16. Video camera observation for assessing overland flow patterns during rainfall events

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Oismüller, Markus; Blöschl, Günter

    2015-04-01

    Physically based hydrological models have been widely used in various studies to model overland flow propagation in cases such as flood inundation and dam break flow. The capability of such models to simulate the formation of overland flow by spatial and temporal discretization of the empirical equations makes it possible for hydrologists to trace the overland flow generation both spatially and temporally across surface and subsurface domains. As the upscaling methods transforming hydrological process spatial patterns from the small obrseved scale to the larger catchment scale are still being progressively developed, the physically based hydrological models become a convenient tool to assess the patterns and their behaviors crucial in determining the upscaling process. Related studies in the past had successfully used these models as well as utilizing field observation data for model verification. The common observation data used for this verification are overland flow discharge during natural rainfall events and camera observations during synthetic events (staged field experiments) while the use of camera observations during natural events are hardly discussed in publications. This study advances in exploring the potential of video camera observations of overland flow generation during natural rainfall events to support the physically based hydrological model verification and the assessment of overland flow spatial patterns. The study is conducted within a 64ha catchment located at Petzenkirchen, Lower Austria, known as HOAL (Hydrological Open Air Laboratory). The catchment land covers are dominated by arable land (87%) with small portions (13%) of forest, pasture and paved surfaces. A 600m stream is running at southeast of the catchment flowing southward and equipped with flumes and pressure transducers measuring water level in minutely basis from various inlets along the stream (i.e. drainages, surface runoffs, springs) to be calculated into flow discharge. 
A video camera with 10x optical zoom is installed 7m above the ground at the middle of the catchment overlooking the west hillslope area of the stream. Minutely images are taken daily during daylight while video recording is triggered by raindrop movements. The observed images and videos are analyzed in accordance to the overland flow signals captured by the assigned pressure transducers and the rainfall intensities measured by four rain gauges across the catchment. The results show that the video camera observations enable us to assess the spatial and temporal development of the overland flow generation during natural events, thus showing potentials to be used in model verification as well as in spatial patterns analysis.

  17. Flat Field Anomalies in an X-Ray CCD Camera Measured Using a Manson X-Ray Source

    SciTech Connect

    Michael Haugh

    2008-03-01

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. It determines how accurately NIF can point the laser beams and is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 2 mA, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 12. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1.5% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. The efficiency pattern follows the properties of Si. The maximum quantum efficiency is 0.71. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation was >8% at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was less than the measurement uncertainty below 4 keV. We were also able to observe debris on the CCD chip. The debris showed maximum contrast at the lowest energy used, 930 eV, and disappeared by 4 keV. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  18. Gain, Level, And Exposure Control For A Television Camera

    NASA Technical Reports Server (NTRS)

    Major, Geoffrey J.; Hetherington, Rolfe W.

    1992-01-01

    Automatic-level-control/automatic-gain-control (ALC/AGC) system for charge-coupled-device (CCD) color television camera prevents overloading in bright scenes by measuring the brightness of the scene from the red, green, and blue output signals and processing these into adjustments of the video amplifiers and the iris on the camera lens. The system is faster, does not distort video brightness signals, and is built with smaller components.

  19. Developments of engineering model of the X-ray CCD camera of the MAXI experiment onboard the International Space Station

    Microsoft Academic Search

    Emi Miyata; Chikara Natsukari; Tomoyuki Kamazuka; Daisuke Akutsu; Hirohiko Kouno; Hiroshi Tsunemi; Masaru Matsuoka; Hiroshi Tomida; Shiro Ueno; Kenji Hamaguchi; Isao Tanaka

    2002-01-01

    MAXI, Monitor of All-sky X-ray Image, is an X-ray observatory on the Japanese Experimental Module (JEM) Exposed Facility (EF) on the International Space Station (ISS). MAXI is a slit scanning camera which consists of two kinds of X-ray detectors: one is a one-dimensional position-sensitive proportional counter with a total area of ~5000 cm2, the Gas Slit Camera (GSC), and the other

  20. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    PubMed

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If rescue of a falling elder who may be fainting is delayed, more serious injury can result. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which wastes human effort or is inconvenient for elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored. It then uses a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to designated contacts. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. PMID:22154761
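The abstract does not specify the falling-pattern recognition step. A common, much-simplified proxy, offered here only as an illustration and not as the paper's method, flags a fall when a tracked person's bounding box stays wider than it is tall for several consecutive frames; all names and thresholds below are assumptions:

```python
def looks_fallen(bbox_history, ratio_thresh=1.2, frames=5):
    """Flag a fall when the tracked person's bounding box has been wider
    than tall (width/height above ratio_thresh) for `frames` frames in a row.

    bbox_history: list of (width, height) tuples, most recent last.
    """
    if len(bbox_history) < frames:
        return False
    recent = bbox_history[-frames:]
    return all(w / h > ratio_thresh for w, h in recent)
```

Requiring several consecutive wide frames is what distinguishes a fall from transient segmentation noise or a person bending over briefly.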

  1. Applications of high-resolution still video cameras to ballistic imaging

    NASA Astrophysics Data System (ADS)

    Snyder, Donald R.; Kosel, Frank M.

    1991-01-01

    The Aeroballistic Research Facility is one of several free-flight aerodynamic research facilities in the world and has gained a reputation as one of the most accurately instrumented. This facility was developed for the exterior ballistic testing of gyroscopically stabilized, fin stabilized, or mass stabilized projectiles. Such testing includes bullets, missiles, and subscale aircraft configurations. The primary source of data for this type of facility is the trajectory information derived from orthogonal pairs of shadowgraph film cameras. The loading, unloading, processing, digitizing, and off-line analysis of the film data is extremely costly and time consuming. The unavailability of even unreduced images for subjective evaluation often means delays of days between experiments. In this paper we describe the evaluation of an advanced still video system as the baseline for development of an integrated real-time electronic shadowgraph to replace the film cameras for normal range operations.

  2. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP) which provides a feed through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  3. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  4. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  5. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    Microsoft Academic Search

    M. B. Schneider; M. J. Haugh

    2008-01-01

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation.

  6. Development and calibration of acoustic video camera system for moving vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Lian, Xiaomin

    2011-05-01

    In this paper, a new acoustic video camera system is developed and its calibration method is established. The system is built on binocular vision and acoustical holography technology. With the binocular vision method, the spatial distance between the microphone array and the moving vehicle is obtained, and the sound reconstruction plane can be established close to the moving vehicle surface automatically. The sound video is then regenerated close to the moving vehicle accurately by the acoustic holography method. With this system, moving and stationary sound sources are treated differently and automatically, which makes sound visualization of moving vehicles much quicker, more intuitive, and more accurate. To verify the system, experiments with a stationary speaker and a non-stationary speaker were carried out. Further verification experiments on an outdoor moving vehicle were also conducted. Successful video visualization results not only confirm the validity of the system but also suggest that it can be a useful tool in vehicle noise identification, because it allows users to locate noise sources easily from the videos. We believe the newly developed system has great potential in moving vehicle noise identification and control.

  7. Spectral function of an optical filter for the pn-CCD camera on board the German astronomy satellite ABRIXAS

    Microsoft Academic Search

    Karl-Heinz Stephan; Frank Haberl; Jan Friedrich

    1999-01-01

    We have provided optical filters developed at the Max-Planck Institut fuer extraterrestrische Physik to the German x-ray astronomy observatory ABRIXAS. Specific Si-PN CCDs will serve as the focal plane camera. Since this detector is sensitive to radiation from the x-ray to the near-IR spectral range, it must be protected from visible and UV radiation for observations in x-ray astronomy.

  8. Spatial correlations of spontaneously down-converted photon pairs detected with a single-photon-sensitive CCD camera

    Microsoft Academic Search

    Bradley M. Jost; Alexander V. Sergienko; Ayman F. Abouraddy; Bahaa E. A. Saleh; Malvin C. Teich

    1998-01-01

    A single-photon-sensitive intensified charge-coupled-device (ICCD) camera has been used to simultaneously detect, over a broad area, degenerate and nondegenerate photon pairs generated by the quantum-optical process of spontaneous parametric down-conversion. We have developed a new method for determining the quantum fourth-order correlations in spatially extended detection systems such as this one. Our technique reveals the expected phase-matching-induced spatial correlations in a 2-f...

  9. Spatial correlations of spontaneously down-converted photon pairs detected with a single-photon-sensitive CCD camera

    Microsoft Academic Search

    Bradley Jost; Alexander V. Sergienko; Ayman F. Abouraddy; Bahaa A. Saleh; Malvin C. Teich

    1998-01-01

    A single-photon-sensitive intensified charge-coupled-device (ICCD) camera has been used to simultaneously detect, over a broad area, degenerate and nondegenerate photon pairs generated by the quantum-optical process of spontaneous parametric down-conversion. We have developed a new method for determining the quantum fourth-order correlations in spatially extended detection systems such as this one. Our technique reveals the expected phase-matching-induced spatial

  10. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  11. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  12. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  13. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed, and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near real time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
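The spike criterion used in the study, radiance elevated more than two standard deviations above the mean of the time series, is straightforward to sketch; the function name is an assumption:

```python
import numpy as np

def radiance_spikes(radiance, n_sigma=2.0):
    """Indices where a radiance time series exceeds mean + n_sigma * std,
    mirroring the two-standard-deviation threshold used in the study."""
    r = np.asarray(radiance, dtype=float)
    threshold = r.mean() + n_sigma * r.std()
    return np.flatnonzero(r > threshold)
```

Flagged indices would then be cross-checked against the video record and ground-observer reports, as the paper describes, since roughly a fifth of the spikes had no corresponding effusive event.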

  14. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
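The adaptive background model is not specified in detail in the abstract. A standard running-average formulation, given here as a sketch under that assumption (names, learning rate, and threshold are ours), updates the background toward each new frame and thresholds the absolute difference to obtain a foreground mask:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: B <- (1 - alpha) * B + alpha * F."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, thresh=0.1):
    """Pixels whose absolute difference from the background exceeds thresh."""
    return np.abs(frame - background) > thresh
```

A motion-aware variant, closer to what the paper hints at, would set alpha to zero at pixels currently classified as moving so that foreground objects are not absorbed into the background.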

  15. Head-Free, Remote Eye-Gaze Detection System with Easy Calibration Using Stereo-Calibrated Two Video Cameras

    Microsoft Academic Search

    Yoshinobu Ebisawa; Kazuki Abo; Kiyotaka Fukumoto

    \\u000a The video-based, head-free, remote eye-gaze detection system based on detection of the pupil and the corneal reflection was\\u000a developed using stereocalibrated two cameras. The gaze detection theory assumed the linear relationship; ??=?k|r?|. Here, ? is the angle between the line of sight and the line connecting between the pupil and the camera, and |r’| indicates the size\\u000a of the corneal

  16. Deep imaging survey of the environment of Alpha Centauri - II. CCD imaging with the NTT-SUSI2 camera

    E-print Network

    Pierre Kervella; Frédéric Thévenin

    2006-12-08

    Context: The nearby pair of solar-type stars Alpha Centauri is a favorable target for an imaging search for extrasolar planets. Indications exist that the gravitational mass of Alpha Cen B could be higher than its modeled mass, the difference being consistent with a substellar companion of a few tens of Jupiter masses. However, Alpha Centauri usually appears in star catalogues surrounded by a large void area, due to the strong diffuse light. Aims: We searched for faint comoving companions to Alpha Cen located at angular distances of the order of a few tens of arcseconds, up to 2-3 arcmin. As a secondary objective, we built a catalogue of the detected background sources. Methods: In order to complement our adaptive optics search at small angular distances (Paper I), we used atmosphere-limited CCD imaging from the NTT-SUSI2 instrument in the Bessel V, R, I, and Z bands. Results: We present the results of our search in the form of a catalogue of the detected objects inside a 5.5 arcmin box around this star. A total of 4313 sources down to mV~24 and mI~22 were detected from this wide-field survey. We extracted the infrared photometry of part of the detected sources from archive images of the 2MASS survey (JHK bands). We briefly investigate the nature of the detected sources, many of them presenting extremely red color indices (V-K > 14). Conclusions: We did not detect any companion to Alpha Centauri between 100 and 300 AU, down to a maximum mass of ~15 times Jupiter. We also mostly exclude the presence of a companion more massive than 30 MJup between 50 and 100 AU.

  17. CCD TV focal plane guider development and comparison to SIRTF applications

    NASA Technical Reports Server (NTRS)

    Rank, David M.

    1989-01-01

    It is expected that the SIRTF payload will use a CCD TV focal plane fine guidance sensor to provide acquisition of sources and tracking stability of the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory, designed to characterize present CCD autoguiding technology and relate it to SIRTF applications, are presented. Two different design types of CCD cameras were constructed, using virtual phase and buried channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon, and Mt. Hamilton telescopes. A video image processing system was also constructed in order to characterize the performance of the autoguider and CCD cameras.

  18. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    SciTech Connect

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs.

  19. Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera

    NASA Astrophysics Data System (ADS)

    Hahne, Christopher; Aggoun, Amar

    2014-03-01

    A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed design synthesizes refocusing slices directly from micro images, omitting the commonly used sub-aperture extraction step. To this end, intellectual property cores containing switch-controlled Finite Impulse Response (FIR) filters are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. To make the hardware design economical, the FIR filters combine stored products with upsampling and interpolation techniques in order to achieve a good trade-off between image resolution, delay time, power consumption and the demand for logic gates. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps, conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.
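    In software terms, refocusing from plenoptic data amounts to a shift-and-sum over the angular samples. The sketch below is a simplified software analogue of the paper's FPGA FIR pipeline, not its actual implementation; the (u, v, y, x) array layout and the integer per-lens shift rule are assumptions for illustration.

```python
import numpy as np

def refocus_slice(micro_images, shift):
    """Shift-and-sum refocusing over a stack of angular views.

    micro_images: 4-D array (u, v, y, x) -- one small image per lens position.
    shift: integer pixel shift per unit lens offset (selects the focal plane).
    """
    U, V, H, W = micro_images.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the array centre,
            # then accumulate; averaging yields the refocused slice.
            dy = shift * (u - U // 2)
            dx = shift * (v - V // 2)
            out += np.roll(np.roll(micro_images[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)
```

Varying `shift` sweeps the synthetic focal plane, which is what the hardware produces as successive refocusing slices.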

  20. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

    This paper describes the development of a two-camera, video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
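    The basic operation behind such a stereo ranging system, recovering a 3-D point from its two image projections, is linear triangulation. The sketch below uses the standard direct linear transform (DLT); this is a generic method and not necessarily the exact formulation used in the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each image.
    Returns the 3-D point in the common world frame.
    """
    # Each image measurement contributes two linear constraints on the
    # homogeneous point X; stack them and take the SVD null vector.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With calibrated projection matrices for both cameras, points marked on the parafoil ribs can be triangulated and angles of attack derived from the recovered coordinates.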

  1. A versatile digital video engine for safeguards and security applications

    SciTech Connect

    Hale, W.R.; Johnson, C.S. [Sandia National Labs., Albuquerque, NM (United States); DeKeyser, P. [Fast Forward Video, Irvine, CA (United States)

    1996-08-01

    The capture and storage of video images have been major engineering challenges for safeguards and security applications ever since the video camera provided a method to observe remote operations. The problems of designing reliable video cameras were solved in the early 1980s with the introduction of the CCD (charge-coupled device) camera. The first CCD cameras cost thousands of dollars but have now been replaced by cameras costing in the hundreds. The remaining problem of storing and viewing video images in both attended and unattended video surveillance systems and remote monitoring systems is being solved by sophisticated digital compression systems. One such system is the PC-104 three-card set, which is literally a "video engine" that can power video storage systems. The use of digital images in surveillance systems makes it possible to develop remote monitoring systems, portable video surveillance units, image review stations, and authenticated camera modules. This paper discusses the video card set and how it can be used in many applications.

  2. A cooled CCD camera-based protocol provides an effective solution for in vitro monitoring of luciferase.

    PubMed

    Afshari, Amirali; Uhde-Stone, Claudia; Lu, Biao

    2015-03-13

    The luciferase assay has become an increasingly important technique to monitor a wide range of biological processes. However, the mainstay protocols require a luminometer to acquire and process the data, limiting their application to specialized research labs. To overcome this limitation, we have developed an alternative protocol that utilizes a commonly available cooled charge-coupled device (CCCD) camera, instead of a luminometer, for data acquisition and processing. By measuring the activities of different luciferases, we characterized their substrate specificity, assay linearity, signal-to-noise levels, and fold-changes via CCCD. Next, we defined the assay parameters that are critical for appropriate use of the CCCD for different luciferases. To demonstrate the usefulness in cultured mammalian cells, we conducted a case study to examine NF-κB gene activation in response to inflammatory signals in human embryonic kidney cells (HEK293 cells). We found that data collected by the CCCD camera were equivalent to those acquired by a luminometer, thus validating the assay protocol. In comparison, the CCCD-based protocol is readily amenable to live-cell and high-throughput applications, offering fast simultaneous data acquisition and visual and quantitative data presentation. In conclusion, the CCCD-based protocol provides a useful alternative for monitoring luciferase reporters. The wide availability of CCCDs will enable more researchers to use luciferases to monitor and quantify biological processes. PMID:25677617

  3. Search for Trans-Neptunian Objects: a new MIDAS context confronted with some results obtained with the UH 8k CCD Mosaic Camera

    NASA Astrophysics Data System (ADS)

    Rousselot, P.; Lombard, F.; Moreels, G.

    1998-09-01

    We present the results obtained with a new program dedicated to the automatic detection of trans-Neptunian objects (TNOs) in standard sets of images obtained in the same field of view. This program has the key advantage, when compared to other similar software packages, of being designed for use with one of the main astronomical data-processing packages: the Munich Image Data Analysis System (MIDAS) developed by the European Southern Observatory (ESO). It is freely available from the World Wide Web server of the Observatory of Besançon (http://www.obs-besancon.fr/www/publi/philippe/tno.html). The program has been tested with observational data collected with the UH 8k CCD Mosaic Camera during two nights, on October 25 and 26, 1997, at the prime focus of the CFH telescope (Mauna Kea, Hawaii). The purpose of these observations was to detect new TNOs, and a previous analysis conducted by the classical method of blinking had led to a first detection of a new TNO. This object appears close to the detection limit of the images (i.e. 24th magnitude) and presents an unusual orbital inclination (i =~ 33 deg). It has allowed efficient and successful testing of the program's ability to detect faint moving objects close to the sky background noise with a very limited number of false detections.

  4. Search for Trans-Neptunian objects: an automated technique applied to images obtained with the UH 8k CCD Mosaic Camera

    NASA Astrophysics Data System (ADS)

    Rousselot, P.; Lombard, F.; Moreels, G.

    1999-08-01

    In this paper we present the results obtained with a new program dedicated to the automatic detection of trans-Neptunian objects (TNOs) in standard sets of images obtained in the same field of view. This program is freely available from the World Wide Web server of the Observatory of Besançon (http://www.obs-besancon.fr/www/publi/philippe/tno.html) and is designed to be used with the Munich Image Data Analysis System (MIDAS) developed by the European Southern Observatory (ESO). It has been tested with observational data collected with the UH 8k CCD Mosaic Camera on October 27, 1997, at the prime focus of the CFH telescope (Mauna Kea, Hawaii). These observational data had led, by the classical method of blinking, to a first detection of a new TNO with a magnitude estimated at 23.6 and an unusually high orbital inclination (i =~ 33 deg). The program managed to detect this object, as well as another TNO (m_R =~ 23.9), confirming its ability to detect faint moving objects.
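    The core of such an automatic search, rejecting stationary stars while keeping detections consistent with slow linear motion across equally spaced exposures, can be sketched as follows. The function name, the tolerances, and the three-frame midpoint test are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def find_movers(f1, f2, f3, tol=1.0, min_step=0.5):
    """Match source detections across three equally spaced exposures.

    f1, f2, f3: lists of (x, y) source positions, one list per frame.
    A slowly moving object shows up as three roughly collinear, equally
    spaced detections; a star barely moves and is rejected by min_step.
    """
    movers = []
    for p1 in f1:
        for p3 in f3:
            p1a, p3a = np.asarray(p1, float), np.asarray(p3, float)
            step = np.linalg.norm(p3a - p1a) / 2   # per-frame displacement
            if step < min_step:                    # stationary: a field star
                continue
            mid = (p1a + p3a) / 2                  # predicted frame-2 position
            for p2 in f2:
                if np.linalg.norm(mid - np.asarray(p2, float)) <= tol:
                    movers.append((p1, p2, p3))
    return movers
```

Tightening `tol` trades completeness for fewer false detections, which is the balance the abstract reports near the sky background noise.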

  5. Camera Alignment using Trajectory Intersections in Unsynchronized Videos Thomas Kuo, Santhoshkumar Sunderrajan, and B.S. Manjunath

    E-print Network

    California at Santa Barbara, University of

    Camera Alignment using Trajectory Intersections in Unsynchronized Videos Thomas Kuo, Santhoshkumar that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory the intersections of corresponding object trajectories to match views. To find these intersections, we introduce

  6. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Robert; Ewing, Graeme; Lamb, Tim; Welsford, Dirk; Constable, Andrew

    2011-04-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for deployment on commercial fishing gear to observe interactions with benthos in the Southern Ocean finfish fisheries (targeting toothfish, Dissostichus spp). The Benthic Impacts Camera System (BICS) is capable of withstanding depths to 2500 m, has been successfully fitted to both longline and demersal trawl fishing gear, and is suitable for routine deployment by non-experts such as fisheries observers or crew. The system is entirely autonomous, robust, compact, easy to operate, and has minimal effect on the performance of the fishing gear it is attached to. To date, the system has successfully captured footage that demonstrates the interactions between demersal fishing gear and the benthos during routine commercial operations. It provides the first footage demonstrating the nature of the interaction between demersal longlines and benthic habitats in the Southern Ocean, as well as showing potential as a tool for rapidly assessing habitat types and presence of mobile biota such as krill ( Euphausia superba).

  7. Hand contour detection in wearable camera video using an adaptive histogram region of interest

    PubMed Central

    2013-01-01

    Background Monitoring hand function at home is needed to better evaluate the effectiveness of rehabilitation interventions. Our objective is to develop wearable computer vision systems for hand function monitoring. The specific aim of this study is to develop an algorithm that can identify hand contours in video from a wearable camera that records the user’s point of view, without the need for markers. Methods The two-step image processing approach for each frame consists of: (1) Detecting a hand in the image, and choosing one seed point that lies within the hand. This step is based on a priori models of skin colour. (2) Identifying the contour of the region containing the seed point. This is accomplished by adaptively determining, for each frame, the region within a colour histogram that corresponds to hand colours, and backprojecting the image using the reduced histogram. Results In four test videos relevant to activities of daily living, the hand detector classification accuracy was 88.3%. The contour detection results were compared to manually traced contours in 97 test frames, and the median F-score was 0.86. Conclusion This algorithm will form the basis for a wearable computer-vision system that can monitor and log the interactions of the hand with its environment. PMID:24354542
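    The second step above, back-projecting a reduced colour histogram, can be sketched in a few lines. This is a generic hue-histogram backprojection, not the authors' exact adaptive-region method; the bin count and the normalization are assumptions.

```python
import numpy as np

def backproject(image_hue, seed_mask, bins=32):
    """Back-project a hue histogram built from the pixels under seed_mask.

    image_hue: 2-D array of hue values in [0, 1).
    seed_mask: boolean mask marking pixels assumed to be hand-coloured.
    Returns a likelihood map: each pixel receives the (normalized) histogram
    count of its own hue bin -- high where colours match the seed region.
    """
    idx = np.minimum((image_hue * bins).astype(int), bins - 1)
    hist = np.bincount(idx[seed_mask].ravel(), minlength=bins).astype(float)
    hist /= max(hist.max(), 1.0)   # normalize so the dominant hue maps to 1
    return hist[idx]
```

Thresholding the returned map and tracing the boundary of the region containing the seed point then yields the hand contour.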

  8. Handwashing before entering the intensive care unit: what we learned from continuous video-camera surveillance.

    PubMed

    Nishimura, S; Kagehira, M; Kono, F; Nishimura, M; Taenaka, N

    1999-08-01

    Handwashing is one of the most important factors in controlling the spread of bacteria and in preventing the development of infections. This simple procedure does not have a high compliance rate. The Association for Professionals in Infection Control and Epidemiology, Inc, guideline recommends that hands must be washed before and after patient contact. In our intensive care unit (ICU), we have made it a rule that everyone should wash their hands before entering the ICU. The purpose of this study was to ascertain the handwashing compliance of all personnel and visitors to the ICU. A ceiling-mounted video camera connected to a time-lapse video cassette recorder recorded each person's actions when they entered the ICU during a 7-day period. Handwashing compliance was assessed for 3 different categories: ICU personnel, non-ICU personnel, and visitors to patients. There were 1030 entries to the ICU during the observation period. ICU personnel complied with handwashing in 71% of entries, non-ICU personnel in 74% of entries, and visitors to patients in 94% of entries. Handwashing compliance by visitors to patients was significantly higher than among personnel (P <.001). Handwashing compliance among personnel before entering the ICU was low. Continuous effort is needed to raise awareness of the handwashing issue, not only to ensure compliance with APIC recommendations but also in our facility, to ensure that health care personnel wash their hands on entry to the ICU. PMID:10433677

  9. Single event effect characterization of the mixed-signal ASIC developed for CCD camera in space use

    NASA Astrophysics Data System (ADS)

    Nakajima, Hiroshi; Fujikawa, Mari; Mori, Hideki; Kan, Hiroaki; Ueda, Shutaro; Kosugi, Hiroko; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Kitamura, Hisashi; Uchihori, Yukio

    2013-12-01

    We present the single event effect (SEE) tolerance of a mixed-signal application-specific integrated circuit (ASIC) developed for a charge-coupled device camera onboard a future X-ray astronomical mission. We used proton and heavy ion beams at HIMAC/NIRS in Japan. Particles with a high linear energy transfer (LET) of 57.9 MeV cm^2/mg were used to measure the single event latch-up (SEL) tolerance, which results in a sufficiently low cross-section of sigma_SEL < 4.2×10^-11 cm^2/(ion×ASIC). The single event upset (SEU) tolerance was estimated with various species over a wide range of energies. Taking into account that some of the protons create recoil heavy ions with higher LET than the incident protons, we derived the probability of an SEU event as a function of LET. The SEE event rate in a low-earth orbit was then estimated using a simulated LET spectrum. The SEL rate is below once per 49 years, which satisfies the required latch-up tolerance. The upper limit of the SEU rate is derived to be 1.3×10^-3 events/s. Although SEU events cannot be distinguished from the signals of X-ray photons from astronomical objects, the derived SEU rate is below 1.3% of the expected non-X-ray background rate of the detector, and hence these events should not be a major component of the instrumental background.

  10. A novel video technique for visualizing flow structures in cardiovascular models

    Microsoft Academic Search

    A. P. Shortland; R. A. Black; J. C. Jarvis; S. Salmons

    1996-01-01

    We describe a video system that produces good quality images of particle trajectories in seeded fluid flows. The operation of a liquid crystal optical shutter and a modified charge-coupled device (CCD) camera were synchronized to generate images of particle trajectories which were stored in a framegrabber before being transferred to S-VHS tape. The camera system is particularly appropriate for visualizing

  11. A six-camera digital video imaging system sensitive to visible, red edge, near-infrared, and mid-infrared wavelengths

    Microsoft Academic Search

    R. S. Fletcher; J. H. Everitt

    2007-01-01

    This paper describes a six-camera multispectral digital video imaging system designed for natural resource assessment and shows its potential as a research tool. It has five visible to near-infrared light sensitive cameras, one near-infrared to mid-infrared light sensitive camera, a monitor, a computer with a multichannel digitizing board, a keyboard, a power distributor, an amplifier, and a mouse. Each camera

  12. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    NASA Astrophysics Data System (ADS)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing images that are neither under- nor overexposed. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators, based on the image histogram, that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications, so the whole image is monitored in this approach. To control the camera settings, we defined a parameters function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested in a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm ran alternately with a representative auto-exposure algorithm from the recent literature. Besides the sunrises and nightfalls, multiple weather conditions produced light changes in the scene: sunny hours with sharp shadows and highlights; cloud cover that softened the shadows; and cloudy and rainy hours that dimmed the scene. Several indicators were used to measure the performance of the algorithms, providing objective measures of the time the algorithms take to recover from under- or overexposure, the brightness stability, and the deviation from optimal exposure. The results demonstrated that our algorithm reacts faster to all the light changes than the selected state-of-the-art algorithm, and that it acquires well-exposed images and keeps the brightness stable for longer. Summing up, we concluded that the proposed algorithm provides a fast and stable auto-exposure method that maintains an optimal exposure for video surveillance applications. Future work will involve the evaluation of this algorithm in robotics.
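    The control law described above can be sketched in a few lines. The plain-product form of Ef below is an assumption (the abstract only states the proportionalities: linear in shutter speed and gain, inverse-square in aperture diameter), and the scaling rule assumes image brightness responds roughly linearly to Ef.

```python
def exposure_factor(shutter_s, gain, aperture_diameter_mm):
    """Parameters function Ef: linear in shutter speed and electronic gain,
    inversely proportional to the squared lens aperture diameter."""
    return shutter_s * gain / aperture_diameter_mm ** 2

def target_ef(current_ef, hist_peak, target_peak):
    """Scale Ef so the histogram peak moves toward the target grey level,
    assuming an approximately linear brightness response to Ef."""
    return current_ef * target_peak / hist_peak
```

For example, if the histogram peak sits at twice the desired level, the rule halves Ef, which the camera can realize by shortening the shutter, lowering the gain, or stopping down the aperture.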

  13. Camera Animation

    NSDL National Science Digital Library

    A general discussion of the use of cameras in computer animation. This section includes principles of traditional film techniques and suggestions for the use of a camera during an architectural walkthrough. This section includes html pages, images and one video.

  14. Efficient Geometric, Photometric, and Temporal Calibration of an Array of Unsynchronized Video Cameras

    Microsoft Academic Search

    Cheng Lei; Yee-hong Yang

    2009-01-01

    Camera-arrays have become popular in many computer vision and computer graphics applications. Among all preprocessing steps, an efficient method to calibrate a large number of cameras is very much desired. The required calibration includes both the geometric and photometric calibration, which are the most common and also well studied for single camera. However, few existing efforts are devoted to camera

  15. The MMT all-sky camera

    NASA Astrophysics Data System (ADS)

    Pickering, T. E.

    2006-06-01

    The MMT all-sky camera is a low-cost, wide-angle camera system that takes images of the sky every 10 seconds, day and night. It is based on an Adirondack Video Astronomy StellaCam II video camera and utilizes an auto-iris fish-eye lens to allow safe operation under all lighting conditions, even direct sunlight. This, combined with the anti-blooming characteristics of the StellaCam's detector, allows useful images to be obtained during sunny days as well as brightly moonlit nights. Under dark skies the system can detect stars as faint as 6th magnitude as well as very thin cirrus and low-surface-brightness zodiacal features such as gegenschein. The total hardware cost of the system was less than $3500 including computer and framegrabber card, a fraction of the cost of comparable systems utilizing traditional CCD cameras.

  16. Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment

    NASA Astrophysics Data System (ADS)

    Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

    2014-05-01

    Long-term soil erosion studies imply substantial effort, particularly when continuous measurements need to be maintained. High costs are associated with maintaining field equipment and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, in many cases reducing their usefulness. In this work, a system of three video-cameras, a recorder and a transmission modem (3G technology) has been set up in a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of an olive orchard catchment of 6.4 ha. Rainfall is measured with one automatic raingauge that records intensity at one-minute intervals. The discharge is measured by a flume of critical flow depth, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined level, the automatic sampler turns on and fills a bottle at intervals set by a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video-camera system is to improve the quality of the dataset by i) visual analysis of the flow conditions in the flume; ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and the gorge of the flume. In order to check the values from the ultrasonic sensor, a third camera records the flow level against a measuring tape. The system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric intensity level; thus, the video-cameras record only when there is enough flow. This simplifies post-processing and reduces the cost of downloading recordings. The preliminary contrast analysis will be presented, as well as the main improvements in the sampling program.

  17. Evaluation of a 0.9- to 2.2-microns sensitive video camera with a mid-infrared filter (1.45- to 2.0-microns)

    Microsoft Academic Search

    J. H. Everitt; D. E. Escobar; P. R. Nixon; C. H. Blazquez; M. A. Hussey

    1986-01-01

    The application of 0.9- to 2.2-microns sensitive black and white IR video cameras to remote sensing is examined. Field and laboratory recordings of the upper and lower surface of peperomia leaves, succulent prickly pear, and buffelgrass are evaluated; the reflectance, phytomass, green weight, and water content for the samples were measured. The data reveal that 0.9- to 2.2-microns video cameras

  18. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, Bojan T. (Moraga, CA); Yates, George J. (Santa Fe, NM)

    1992-01-01

    An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is being integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites (36) and (37).

  19. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, B.T.; Yates, G.J.

    1992-06-09

    An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal read out and includes a first dump cycle period for transferring radiation generated charge into the horizontal register while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites. 3 figs.

  20. An IEEE 1394-FireWire-Based Embedded Video System for Surveillance Applications

    Microsoft Academic Search

    Ashraf Saad; Donnie Smith

    2003-01-01

    In order to address the need for portable inexpensive systems for video surveillance, we built a computer vision system that provides digital video data transfer from a CCD camera using embedded software/hardware via the IEEE 1394 protocol, also known as FireWire or i.Link, and Ethernet TCP/IP interfaces. Controlled by an extended version of the IEEE 1394-based digital camera specification (DCAM),

  1. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only the raw images are available. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; for room-temperature cameras, however, the technique needs adjustment. This article describes the adjustment made to the equation, and a test of this method.
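    A hedged sketch of the adjusted calculation: Janesick's two-image photon-transfer method estimates gain as the sum of the flat-field means divided by the variance of the flat difference; subtracting the dark means and the dark-difference variance is one way to remove the dark-current contribution, in the spirit of the adjustment the article describes (the article's exact equation may differ).

```python
import numpy as np

def camera_gain(flat1, flat2, dark1, dark2):
    """Estimate camera gain K (electrons/ADU) from two flats and two darks.

    Differencing the two flats cancels fixed-pattern noise, leaving shot
    noise; at room temperature the dark current adds both signal and shot
    noise, so its mean and its difference-variance are subtracted out.
    """
    signal = flat1.mean() + flat2.mean() - dark1.mean() - dark2.mean()
    noise = np.var(flat1 - flat2) - np.var(dark1 - dark2)
    return signal / noise
```

On simulated Poisson data the estimator recovers the true gain, since both the photon and dark contributions obey mean = variance in electron units.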

  2. Lights, Camera, Action! A Guide to Using Video Production and Instruction in the Classroom.

    ERIC Educational Resources Information Center

    Limpus, Bruce

    This instructional guide offers practical ideas for incorporating video production in the classroom. Aspects of video production are presented sequentially. Strategies and suggestions are given for using video production to reinforce traditional subject content and provide interdisciplinary connections. The book is organized in two parts. After…

  3. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged in outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cable complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  4. A 1/3-inch 360 K pixel progressive scan CCD image sensor

    Microsoft Academic Search

    Y. Naito; A. Kobayashi; T. Ishigami; S. Nakagawa; Y. Shimohida; A. Izumi; H. Endo; H. Mizoguchi; Y. Hirotani; S. Horiuchi

    1995-01-01

    A 1/3-inch progressive scan CCD image sensor has been developed for image capture in computers and multi-functional digital video cameras. This device shows a high vertical resolution of 480 TV lines and 60 frames/sec image capture. The number of effective pixels is suitable for the digital TV standard and the horizontal driving frequency is 13.5 MHz

  5. Hand-gesture extraction and recognition from the video sequence acquired by a dynamic camera using condensation algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Ohya, Jun

    2009-01-01

    To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from video sequences acquired by a dynamic camera could be useful for human-to-robot interface systems. Most conventional hand gesture technologies deal only with still camera images. This paper proposes a very simple and stable method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC System), which is obtained from the located human face and both hands. Then, we apply the Condensation Algorithm to the extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed method by conducting experiments on 35 kinds of sign-language-based hand gestures.
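    One iteration of the Condensation algorithm (factored sampling, i.e. a particle filter) on a 1-D state can be sketched as below. The Gaussian motion model and the toy observation likelihood are illustrative assumptions; the paper applies Condensation to extracted hand trajectories rather than this toy state.

```python
import numpy as np

def condensation_step(particles, weights, observe, motion_std=1.0, rng=None):
    """One CONDENSATION iteration: resample, diffuse, re-weight.

    particles: 1-D array of state hypotheses.
    weights:   normalized importance weights.
    observe:   function mapping states to observation likelihoods.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # 1. Factored sampling: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    # 2. Predict with a Gaussian random-walk motion model (an assumption).
    particles = particles[idx] + rng.normal(0.0, motion_std, len(particles))
    # 3. Re-weight by the observation likelihood and renormalize.
    weights = observe(particles)
    weights = weights / weights.sum()
    return particles, weights
```

Iterating this step concentrates the particle set around the state best supported by the observations, which is what makes the tracking stable under clutter.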

  6. Interactive 3-D Modeling System Using a Hand-Held Video Camera

    Microsoft Academic Search

    Kenji Fudono; Tomokazu Sato; Naokazu Yokoya

    2005-01-01

    Recently, a number of methods for 3-D modeling from images have been developed. However, the accuracy of a reconstructed model depends on camera positions and postures with which the images are obtained. In most of conventional methods, some skills for adequately controlling the camera movement are needed for users to obtain a good 3-D model. In this study, we propose

  7. CCD imaging systems for DEIMOS

    NASA Astrophysics Data System (ADS)

    Wright, Christopher A.; Kibrick, Robert I.; Alcott, Barry; Gilmore, David K.; Pfister, Terry; Cowley, David J.

    2003-03-01

    The DEep Imaging Multi-Object Spectrograph (DEIMOS) images with an 8K x 8K science mosaic composed of eight 2K x 4K MIT/Lincoln Lab (MIT/LL) CCDs. It also incorporates two 1200 x 600 Orbit Semiconductor CCDs for active, closed-loop flexure compensation. The science mosaic CCD controller system reads out all eight science CCDs in 40 seconds while maintaining the low noise floor of the MIT/Lincoln Lab CCDs. The flexure compensation (FC) CCD controller reads out the FC CCDs several times per minute during science mosaic exposures. The science mosaic CCD controller and the FC CCD controller are located on the electronics ring of DEIMOS. Both the MIT/Lincoln Lab CCDs and the Orbit flexure compensation CCDs and their associated cabling and printed circuit boards are housed together in the same detector vessel that is approximately 10 feet away from the electronics ring. Each CCD controller has a modular hardware design and is based on the San Diego State University (SDSU) Generation 2 (SDSU-2) CCD controller. Provisions have been made to the SDSU-2 video board to accommodate external CCD preamplifiers that are located at the detector vessel. Additional circuitry has been incorporated in the CCD controllers to allow the readback of all clocks and bias voltages for up to eight CCDs, to allow up to 10 temperature monitor and control points of the mosaic, and to allow full-time monitoring of power supplies and proper power supply sequencing. Software control features of the CCD controllers include: software selection between multiple mosaic readout modes, readout speeds, selectable gains, ramped parallel clocks to eliminate spurious charge on the CCDs, constant temperature monitoring and control of each CCD within the mosaic, proper sequencing of the bias voltages of the CCD output MOSFETs, and anti-blooming operation of the science mosaic. We cover the hardware and software highlights of both CCD controller systems as well as their respective performance.

  8. VIDEO COMPRESSIVE SENSING FOR SPATIAL MULTIPLEXING CAMERAS USING MOTION-FLOW MODELS

    E-print Network

    Sankaranarayanan, Aswin C.

    a spatial light modulator (e.g., a digital micro-mirror device) and a few optical sensors. This approach poor quality. In this paper, we propose the CS multi-scale video (CS-MUVI) sensing and recovery. Key words. Video compressive sensing, optical flow, multi-scale sensing matrices, spatial mul

  9. Application: Surveillance Data-Stream Compression ? Need: Continuous monitoring of scene with video camera

    E-print Network

    Kepner, Jeremy

    at the source (embedded compression) ? Solution: Shrink data storage requirements ? Reduce size of each video only these "interesting" video frames Introduction #12;2 Embedded System Strategy: Model-Based Design MATLAB Link for TI Real-Time Workshop (RTW) Embedded Target for TI C6000 Comms, Fixed-point, Stateflow

  10. Dynamics of Pulsating and Cellular Flames Using a High-Speed, High Sensitivity Camera

    Microsoft Academic Search

    Michael Gorman

    2002-01-01

    A high-speed, high sensitivity camera has been assembled to record the spatiotemporal dynamics of pulsating and cellular flames at frequencies above 15 Hz, the Nyquist frequency of standard videotape. A high-speed CCD camera has been equipped with an image intensifier to record the dynamics of low-intensity, high frequency pulsating flames and to combine both electronic and video data on each

  11. Laboratory Test of CCD #1 in BOAO

    NASA Astrophysics Data System (ADS)

    Park, Byeong-Gon; Chun, Moo Young; Kim, Seung-Lee

    1995-12-01

    An introduction to the first CCD camera system in Bohyunsan Optical Astronomy Observatory (CCD#1) is presented. The CCD camera adopts the modular dewar design of IfA (Institute for Astronomy at Hawaii University) and the SDSU (San Diego State University) general purpose CCD controller. The user interface is based on the IfA design of an easy-to-use GUI program running on a NeXT workstation. The characteristics of CCD#1, including gain, charge transfer efficiency, rms read-out noise, linearity and dynamic range, are tested and discussed. CCD#1 shows a read-out noise of 6.4 electrons and a gain of 3.49 electrons per ADU, and the optimization resulted in about 27 seconds readout time guaranteeing charge transfer efficiency of 0.99999 for both directions. The linearity test shows that the non-linear coefficient is 6e-7 in the range of 0 to 30,000 ADU.
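
    The gain and read-out noise figures quoted above are the kind of numbers a standard photon-transfer test yields. A minimal sketch of that test on synthetic frames (the gain, noise level, exposure and frame sizes below are illustrative, not BOAO data):

```python
import numpy as np

rng = np.random.default_rng(0)
GAIN = 3.49   # "true" gain (e-/ADU) used to synthesize frames
RON = 6.4     # "true" rms read-out noise (e-)
BIAS = 500.0  # bias offset (ADU)

def make_frame(mean_e, shape=(256, 256)):
    """Synthesize one frame in ADU: Poisson shot noise plus Gaussian read noise."""
    signal_e = rng.poisson(mean_e, shape).astype(float)
    read_e = rng.normal(0.0, RON, shape)
    return BIAS + (signal_e + read_e) / GAIN

def photon_transfer(flat1, flat2, bias1, bias2):
    """Estimate gain (e-/ADU) and read noise (e-) from a flat pair and a bias pair.

    Differencing two frames of the same exposure removes fixed-pattern structure;
    dividing the difference variance by 2 undoes the doubling from subtraction.
    """
    mean_signal = 0.5 * (flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean())
    var_flat = np.var(flat1 - flat2) / 2.0
    var_bias = np.var(bias1 - bias2) / 2.0
    gain = mean_signal / (var_flat - var_bias)   # shot-noise variance isolates the gain
    ron = gain * np.sqrt(var_bias)               # bias variance (ADU^2) -> e- rms
    return gain, ron

flat1, flat2 = make_frame(20000.0), make_frame(20000.0)
bias1, bias2 = make_frame(0.0), make_frame(0.0)
gain_est, ron_est = photon_transfer(flat1, flat2, bias1, bias2)
```

    With a 256 x 256 frame the variance estimates are good to well under a percent, so the recovered gain and noise land close to the synthesized values.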

  12. Technical Note: Determining regions of interest for CCD camera-based fiber optic luminescence dosimetry by examining signal-to-noise ratio

    PubMed Central

    Klein, David M.; Therriault-Proulx, Francois; Archambault, Louis; Briere, Tina M.; Beaulieu, Luc; Beddar, A. Sam

    2011-01-01

    Purpose: The goal of this work was to develop a method for determining regions of interest (ROIs) based on signal-to-noise ratio (SNR) for the analysis of charge-coupled device (CCD) images used in luminescence-based radiation dosimetry. Methods: The ROI determination method was developed using images containing high- and low-intensity signals taken with a CCD-based, fiber optic plastic scintillation detector system. A series of threshold intensity values was defined for each signal, and ROIs were fitted around the pixels that exceeded each threshold. The SNR for each ROI was calculated and the relationship between SNR and ROI area was examined. Results: The SNR was found to increase rapidly over small ROIs for both signal levels. After reaching a maximum, the SNR of the low-intensity signal decreased steadily over larger ROIs, but the high-intensity SNR did not decrease appreciably over the ROI sizes studied. The spatial extent of the normalized images showed intensity independence, suggesting that a fixed ROI is useful for varying signal levels. Conclusions: The method described here constitutes a simple yet effective method for defining ROIs based on SNR that could enhance the low-level detection capabilities of CCD-based luminescence dosimetry systems. PMID:21520848
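
    The threshold-sweep procedure described in the abstract can be sketched as follows, using a synthetic Gaussian spot as a stand-in for the scintillation-detector images (spot shape, background noise level and threshold range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA_BG = 5.0   # assumed background noise (ADU rms)

# Synthetic stand-in for a dosimeter image: a Gaussian spot on a noisy background.
yy, xx = np.mgrid[0:128, 0:128]
spot = 200.0 * np.exp(-((xx - 64.0) ** 2 + (yy - 64.0) ** 2) / (2.0 * 6.0 ** 2))
image = spot + rng.normal(0.0, SIGMA_BG, spot.shape)

def roi_snr(image, threshold):
    """ROI = pixels above threshold; SNR = summed signal over background-limited noise."""
    mask = image > threshold
    n = int(mask.sum())
    if n == 0:
        return 0.0, 0
    signal = image[mask].sum()
    noise = SIGMA_BG * np.sqrt(n)   # background noise grows as sqrt(ROI area)
    return signal / noise, n

# Sweep thresholds from tight to loose: SNR rises while the growing ROI gathers
# signal, then falls once the extra pixels contribute mostly noise.
curve = [roi_snr(image, t) for t in range(100, 0, -5)]
snrs = [s for s, _ in curve]
areas = [n for _, n in curve]
```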

  13. Video acquisition between USB 2.0 CMOS camera and embedded FPGA system

    Microsoft Academic Search

    A. Abdaoui; K. Gurram; M. Singh; A. Errandani; E. Chatelet; A. Doumar; T. Elfouly

    2011-01-01

    In this paper, we introduce the hardware implementation of video acquisition in a sensor node of wireless sensor network with the help of USB 2.0 interface. The USB 2.0 video acquisition is based on the CY7C67300 controller and the DCC1545M image sensor. In this paper, we detail the hardware architecture and the application program design in a sensor node using

  14. Development of observation method for hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Kinoshita, M.; Tamura, H.; Tamaki, K.

    2011-12-01

    DIDSON (Dual-Frequency IDentification SONar) is acoustic lens-based sonar. It has sufficiently high resolution and rapid refresh rate that it can substitute for optical system in turbid or dark water where optical systems fail. Institute of Industrial Science, University of Tokyo (IIS) has understood DIDSON's superior performance and tried to develop a new observation method based on DIDSON for hydrothermal discharging from seafloor vent. We expected DIDSON to reveal whole image of hydrothermal plume as well as detail inside the plume. In October 2009, we conducted seafloor reconnaissance using a manned deep-sea submersible Shinkai6500 in Central Indian Ridge 18-20deg.S, where hydrothermal plume signatures were previously perceived. DIDSON was equipped on the top of Shinkai6500 in order to get acoustic video images of hydrothermal plumes. The acoustic video images of the hydrothermal plumes had been captured in three of seven dives. These are only a few acoustic video images of the hydrothermal plumes. We could identify shadings inside the acoustic video images of the hydrothermal plumes. Silhouettes of the hydrothermal plumes varied from second to second, and the shadings inside them varied their shapes, too. These variations corresponded to internal structures and flows of the plumes. We are analyzing the acoustic video images in order to deduce information of their internal structures and flows in plumes. On the other hand, we are preparing a tank experiment so that we will have acoustic video images of water flow under the control of flow rate. The purpose of the experiment is to understand relation between flow rate and acoustic video image quantitatively. Results from this experiment will support the aforementioned image analysis of the hydrothermal plume data from Central Indian Ridge. We will report the overview of the image analysis and the tank experiments, and discuss possibility of DIDSON as an observation tool for seafloor hydrothermal activity.

  15. CCD and IR array controllers

    NASA Astrophysics Data System (ADS)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation at low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one Megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder with 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at 12.5 Megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.
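
    The dual slope integrator mentioned above performs correlated double sampling: the pixel's reset level is sampled, then its signal level, and the difference cancels the slowly varying reset (kTC) offset. A minimal digital sketch with synthetic samples (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10_000
reset_offset = rng.normal(0.0, 30.0, N)    # per-read reset (kTC) offset, cancelled by CDS
true_signal = 100.0                        # pixel signal (ADU)
read_noise = rng.normal(0.0, 2.0, (2, N))  # independent noise on each of the two samples

reset_samples = reset_offset + read_noise[0]
signal_samples = reset_offset + true_signal + read_noise[1]
cds = signal_samples - reset_samples       # offset cancels; signal plus sqrt(2)x noise remains
```

    The residual noise is only the quadrature sum of the two sample noises, far below the reset offset that was removed.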

  16. Aligning windows of live video from an imprecise pan-tilt-zoom camera into a remote panoramic display for remote nature observation

    Microsoft Academic Search

    Dezhen Song; Yiliang Xu; Ni Qin

    2010-01-01

    A pan-tilt-zoom (PTZ) robotic camera can provide a detailed live video of selected areas of interest within a large potential viewing field. The selective coverage is ideal for nature observation applications where power and bandwidth are often limited. To provide the spatial context for human observers, it is desirable to insert the live video into a large spherical panoramic display

  17. Action Recognition in Videos Acquired by a Moving Camera Using Motion Decomposition of Lagrangian Particle Trajectories

    E-print Network

    Central Florida, University of

    Shandong Wu, Computer Vision Lab, University of Central Florida (sdwu@eecs.ucf.edu). The approach uses Lagrangian particle trajectories, a set of dense trajectories obtained by advecting optical flow over time; prior methods, however, mostly tackle stationary camera scenarios.

  18. Performance of compact ICU (intensified camera unit) with autogating based on video signal

    NASA Astrophysics Data System (ADS)

    de Groot, Arjan; Linotte, Peter; van Veen, Django; de Witte, Martijn; Laurent, Nicolas; Hiddema, Arend; Lalkens, Fred; van Spijker, Jan

    2007-10-01

    High quality night vision digital video is nowadays required for many observation, surveillance and targeting applications, including several of the current soldier modernization programs. We present the performance increase that is obtained when combining a state-of-the-art image intensifier with a low power consumption CMOS image sensor. Based on the content of the video signal, the gating and gain of the image intensifier are optimized for best SNR. The options of the interface with a separate laser in the application for range gated imaging are discussed.

  19. MULTIPLE BACKGROUND SPRITE GENERATION USING CAMERA MOTION CHARACTERIZATION FOR OBJECT-BASED VIDEO CODING

    E-print Network

    Wichmann, Felix

    -based video coding can provide higher coding gain than common H.264/AVC for single-view video and than the MVC standard based on H.264 for multi-view (MVC). The use of background sprites outperforms AVC/MVC especially

  20. A Human-Machine Collaborative Approach to Tracking Human Movement in Multi-Camera Video

    E-print Network

    Roy, Deb

    Camp, MIT Media Lab, 20 Ames Street, E15-441, Cambridge, Massachusetts 02139; Deb Roy, MIT Media Lab, 20 Ames Street … their contents. Automatic video analyses produce low to medium accuracy for all but the simplest analysis tasks, while manual approaches are prohibitively expensive. In the tradeoff between accuracy and cost, human

  1. "Lights, Camera, Reflection": Using Peer Video to Promote Reflective Dialogue among Student Teachers

    ERIC Educational Resources Information Center

    Harford, Judith; MacRuairc, Gerry; McCartan, Dermot

    2010-01-01

    This paper examines the use of peer-videoing in the classroom as a means of promoting reflection among student teachers. Ten pre-service teachers participating in a teacher education programme in a university in the Republic of Ireland and ten pre-service teachers participating in a teacher education programme in a university in the North of…

  2. Lights! Camera! Action! Producing Library Instruction Video Tutorials Using Camtasia Studio

    ERIC Educational Resources Information Center

    Charnigo, Laurie

    2009-01-01

    From Web guides to online tutorials, academic librarians are increasingly experimenting with many different technologies in order to meet the needs of today's growing distance education populations. In this article, the author discusses one librarian's experience using Camtasia Studio to create subject specific video tutorials. Benefits, as well…

  3. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  4. First demonstration of neutron resonance absorption imaging using a high-speed video camera in J-PARC

    NASA Astrophysics Data System (ADS)

    Kai, T.; Segawa, M.; Ooi, M.; Hashimoto, E.; Shinohara, T.; Harada, M.; Maekawa, F.; Oikawa, K.; Sakai, T.; Matsubayashi, M.; Kureta, M.

    2011-09-01

    The neutron resonance absorption imaging technique with a high-speed video camera was successfully demonstrated at the beam line NOBORU, J-PARC. Pulsed neutrons were observed through several kinds of metal foils as a function of neutron time-of-flight by utilizing a high-speed neutron radiography system. A set of time-dependent images was obtained for each neutron pulse, and more than a thousand sets of images were recorded in total. The images with the same time frame were summed after the measurement. Then the authors obtained a set of images having enhanced contrast of sample foils around the resonance absorption energies of cobalt (132 eV), cadmium (28 eV), tantalum (4.3 and 10 eV), gold (4.9 eV) and indium (1.5 eV).
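
    The frame-summation step described above, accumulating the frame with the same time-of-flight index over many pulses to build contrast at each resonance energy, can be sketched as follows (array sizes, count rates and the absorbing region are illustrative, not the NOBORU values):

```python
import numpy as np

rng = np.random.default_rng(2)
N_PULSES, N_FRAMES, H, W = 1000, 8, 32, 32   # illustrative sizes

def acquire_pulse():
    """One pulse: a stack of time-of-flight frames (Poisson counts). Frame 3
    mimics strong resonance absorption by a foil covering the centre region."""
    frames = rng.poisson(5.0, (N_FRAMES, H, W)).astype(float)
    frames[3, 8:24, 8:24] *= 0.2
    return frames

# Frame k of every pulse samples the same neutron-energy window, so summing
# frame k over all pulses enhances the contrast at that energy.
summed = np.zeros((N_FRAMES, H, W))
for _ in range(N_PULSES):
    summed += acquire_pulse()
```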

  5. The use of a radiation sensitive CCD camera system to measure bone mineral content in the neonatal forearm: a feasibility study

    Microsoft Academic Search

    J. G. Truscott; R. Milner; S. Metcalfe; M. A. Smith

    1992-01-01

    Some preterm babies are too sick to be moved from an incubator to a measuring instrument for single photon absorptiometry. A portable hand-held instrument is needed to measure bone mineral content in the incubator. The authors describe the use of an isotope transmission device with a radiation sensitive charge coupled device camera for this purpose.

  6. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source (HTPD 08 paper)

    Microsoft Academic Search

    M Haugh; M B Schneider

    2008-01-01

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation.

  7. Application of neural network to analyses of CCD colour TV-camera image for the detection of car fires in expressway tunnels

    Microsoft Academic Search

    T. Ono; H. Ishii; K. Kawamura; H. Miura; E. Momma; T. Fujisawa; J. Hozumi

    2006-01-01

    This study aims to investigate the effectiveness of the early detection of a car fire occurring in an expressway tunnel using surveillance cameras. Road and rail tunnels need to be built in mountainous areas, in order to provide the effective distribution of goods. A fire in a tunnel can endanger human life as the heat flow and smoke increase rapidly

  8. Point Counts Underestimate the Importance of Arctic Foxes as Avian Nest Predators: Evidence from Remote Video Cameras in Arctic Alaskan Oil Fields

    Microsoft Academic Search

    JOSEPH R. LIEBEZEIT; STEVE ZACK

    2008-01-01

    We used video cameras to identify nest predators at active shorebird and passerine nests and conducted point count surveys separately to determine species richness and detection frequency of potential nest predators in the Prudhoe Bay region of Alaska. From the surveys, we identified 16 potential nest predators, with glaucous gulls (Larus hyperboreus) and parasitic jaegers (Stercorarius parasiticus) making up more

  9. Combined video and laser camera for inspection of old mine shafts L. Cauvin (INERIS, Institut National de l'Environnement industriel et des RISques)

    E-print Network

    Boyer, Edmond

    Combined video and laser camera for inspection of old mine shafts. L. Cauvin (INERIS, Institut National de l'Environnement industriel et des RISques). Abstract: For the cases where the location of a shaft … to face with problems of characterization of the body of shafts or underground cavities. The system is able

  10. Camera Committee 

    E-print Network

    Unknown

    2011-08-17

    objects on stereoscopic still video images. Digital picture elements (pixels) were used as units of measurement. A scale model was devised to emulate low altitude videography. Camera distance was set at 1500 cm to simulate a flight altitude of 1500 feet... above ground level. Accordingly, the model was designed for rods 40 to 100 cm long to represent poles measuring 40 to 100 feet in height. Absolute orientation of each stereoscopic image was obtained by surveying each nadir, camera location and camera...

  11. Characterization and calibration of a CCD detector for light engineering

    Microsoft Academic Search

    Pietro Fiorentin; Paola Iacomussi; Giuseppe Rossi

    2005-01-01

    This paper describes the methodology developed for characterizing a commercial charge-coupled device (CCD) camera as a luminance meter for analyzing lighting systems and especially for measurements in road light plants. Today, several luminance meters based on commercial CCD cameras are on the market. They are very attractive for the lighting engineer: The availability of a high number of closely spaced

  12. Crosswind sensing from optical-turbulence-induced fluctuations measured by a video camera.

    PubMed

    Porat, Omer; Shapira, Joseph

    2010-10-01

    We present a novel method for remote sensing of crosswind using a passive imaging device, such as a video recorder. The method is based on spatial and temporal correlations of the intensity fluctuations of a naturally illuminated scene induced by atmospheric turbulence. Adaptable spatial filtering, taking into account variations of the dominant spatial scales of the turbulence (due to changes in meteorological conditions, such as turbulence strength, or imaging device performance, such as frame rate or spatial resolution), is incorporated into this method. Experimental comparison with independent wind measurement using anemometers shows good agreement. PMID:20885458
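
    The core of the method, estimating wind speed from the time lag at which intensity fluctuations at two cross-wind points correlate best, can be sketched as follows (frame rate, pixel scale and the frozen-turbulence toy model are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
FPS = 100.0      # assumed frame rate (frames/s)
PIX_M = 0.01     # assumed metres per pixel at the target range
TRUE_WIND = 2.0  # m/s crosswind used to synthesize the traces
N, SEP_PIX = 4096, 40

# Frozen-turbulence toy model: the same fluctuation pattern drifts past two
# pixels separated across the wind, so the downwind trace is a delayed copy.
lag_true = int(round(SEP_PIX * PIX_M / TRUE_WIND * FPS))
base = np.convolve(rng.normal(0.0, 1.0, N + lag_true), np.ones(5) / 5.0, mode="same")
trace_up = base[lag_true:][:N]
trace_down = base[:N]

def crosswind(upwind, downwind, sep_m, fps):
    """Wind speed from the lag maximizing the cross-correlation of the two traces."""
    a = upwind - upwind.mean()
    b = downwind - downwind.mean()
    lag = int(np.argmax(np.correlate(b, a, mode="full"))) - (len(a) - 1)
    return sep_m / (lag / fps) if lag > 0 else float("nan")

wind_est = crosswind(trace_up, trace_down, SEP_PIX * PIX_M, FPS)
```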

  13. Jellyfish Support High Energy Intake of Leatherback Sea Turtles (Dermochelys coriacea): Video Evidence from Animal-Borne Cameras

    PubMed Central

    Heaslip, Susan G.; Iverson, Sara J.; Bowen, W. Don; James, Michael C.

    2012-01-01

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08–3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83–100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ·d⁻¹ but were as high as 167,797 kJ·d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass·d⁻¹ (up to 840 kg·d⁻¹), or approximately 261 (up to 664) jellyfish·d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass·d⁻¹, equating to an average energy intake of 3–7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to southward migration. PMID:22438906
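
    The reported figures hang together arithmetically, as a back-of-envelope check shows (the jellyfish energy density below is an assumed value, since the paper's exact figure is not quoted here):

```python
# Assumed jellyfish energy density (~0.2 kJ per g wet mass); illustrative only.
KJ_PER_KG_WET = 200.0

mean_mass = 330.0                         # kg wet mass per day (reported mean)
mean_energy = mean_mass * KJ_PER_KG_WET   # kJ/day, near the reported 66,018
body_mass = 455.0                         # kg, the assumed average body mass
fraction = mean_mass / body_mass          # fraction of body mass eaten per day (~0.73)
```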

  14. Improvement in the light sensitivity of the ultrahigh-speed high-sensitivity CCD with a microlens array

    NASA Astrophysics Data System (ADS)

    Hayashida, T.,; Yonai, J.; Kitamura, K.; Arai, T.; Kurita, T.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Kitagawa, S.; Hatade, K.; Yamaguchi, T.; Takeuchi, H.; Iida, K.

    2008-02-01

    We are advancing the development of ultrahigh-speed, high-sensitivity CCDs for broadcast use that are capable of capturing smooth slow-motion videos in vivid colors even where lighting is limited, such as at professional baseball games played at night. We have already developed a 300,000 pixel, ultrahigh-speed CCD, and a single CCD color camera that has been used for sports broadcasts and science programs using this CCD. However, there are cases where even higher sensitivity is required, such as when using a telephoto lens during a baseball broadcast or a high-magnification microscope during science programs. This paper provides a summary of our experimental development aimed at further increasing the sensitivity of CCDs using the light-collecting effects of a microlens array.

  15. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4-Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable and modular in comparison with the first-generation solution, due to the application of open software solutions and an FPGA circuit, the Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following electronic circuits: a Cypress CY7C68013A microcontroller (8051 core), an Analog Devices AD9826 image processor, a Realtek RTL8169S GigE interface, Atmel AT45DB642 memory, and an ARM926EJ-S AT91SAM9260 microprocessor by ARM and Atmel. Software solutions for the camera, its remote control, and image data acquisition are based entirely on open-source platforms, using the ISI and V4L2 image interfaces, the AMBA/AHB data bus, and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  16. Aerial video surveillance and exploitation

    Microsoft Academic Search

    RAKESH KUMAR; HARPREET SAWHNEY; SUPUN SAMARASEKERA; STEVE HSU; Hai Tao; Yanlin Guo; KEITH HANNA; ARTHUR POPE; RICHARD WILDES; DAVID HIRVONEN; MICHAEL HANSEN; PETER BURT

    2001-01-01

    There is growing interest in performing aerial surveillance using video cameras. Compared to traditional framing cameras, video cameras provide the capability to observe ongoing activity within a scene and to automatically control the camera to track the activity. However, the high data rates and relatively small field of view of video cameras present new technical challenges that must be overcome

  17. Bird-Borne Video-Cameras Show That Seabird Movement Patterns Relate to Previously Unrevealed Proximate Environment, Not Prey

    PubMed Central

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    The study of ecological and behavioral processes has been revolutionized in the last two decades with the rapid development of biologging-science. Recently, using image-capturing devices, some pilot studies demonstrated the potential of understanding marine vertebrate movement patterns in relation to their proximate, as opposed to remote sensed environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape gannet, is fundamental in understanding the origins of its movement patterns. We found that movement patterns were related to specific stimuli which were mostly other predators such as gannets, dolphins or fishing boats. Contrary to a widely accepted idea, our data suggest that foraging seabirds are not directly looking for prey. Instead, they search for indicators of the presence of prey, the latter being targeted at the very last moment and at a very small scale. We demonstrate that movement patterns of foraging seabirds can be heavily driven by processes unobservable with conventional methodology. Except perhaps for large scale processes, local-enhancement seems to be the only ruling mechanism; this has profound implications for ecosystem-based management of marine areas. PMID:24523892

  18. Observation of the dynamic movement of fragmentations by high-speed camera and high-speed video

    NASA Astrophysics Data System (ADS)

    Suk, Chul-Gi; Ogata, Yuji; Wada, Yuji; Katsuyama, Kunihisa

    1995-05-01

    Blasting experiments using mortar concrete blocks and model concrete columns were carried out in order to obtain technical information on fragmentation caused by blasting demolition. The dimensions of the mortar concrete blocks were 1,000 X 1,000 X 1,000 mm. Six kinds of experimental blastings were carried out using the mortar concrete blocks. In these experiments, precision detonators and No. 6 electric detonators with 10 cm detonating fuse were used, and the control of fragmentation was discussed. The results made clear that the flying distance of fragmentation can be controlled using a precise blasting system. Reinforced concrete model columns representative of typical apartment houses in Japan were used in the experiments. Each test column measured 800 X 800 X 2400 mm and was buried 400 mm in the ground. The specified design strength of the concrete was 210 kgf/cm2. The columns were demolished by blasting with internal loading of dynamite. The fragmentation was observed by two high-speed cameras at 500 and 2000 FPS and a high-speed video camera at 400 FPS. In one experiment, with 330 g of explosive and a minimum resisting length of 0.32 m, fragment velocities of about 40 m/s were measured.
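
    Measuring fragment velocity from high-speed footage amounts to tracking a fragment's position across frames and converting pixel displacement to metres per second; a sketch with hypothetical tracked positions (image scale and the position list are assumed, the 500 FPS figure is from the abstract):

```python
FPS = 500.0        # one of the frame rates quoted in the abstract
M_PER_PIX = 0.02   # assumed image scale at the fragment's distance
positions_pix = [100, 104, 108, 112, 116]  # hypothetical tracked x-positions, one per frame

displacement_m = (positions_pix[-1] - positions_pix[0]) * M_PER_PIX
elapsed_s = (len(positions_pix) - 1) / FPS
velocity = displacement_m / elapsed_s      # m/s
```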

  19. Bird-borne video-cameras show that seabird movement patterns relate to previously unrevealed proximate environment, not prey.

    PubMed

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    The study of ecological and behavioral processes has been revolutionized in the last two decades with the rapid development of biologging-science. Recently, using image-capturing devices, some pilot studies demonstrated the potential of understanding marine vertebrate movement patterns in relation to their proximate, as opposed to remote sensed environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape gannet, is fundamental in understanding the origins of its movement patterns. We found that movement patterns were related to specific stimuli which were mostly other predators such as gannets, dolphins or fishing boats. Contrary to a widely accepted idea, our data suggest that foraging seabirds are not directly looking for prey. Instead, they search for indicators of the presence of prey, the latter being targeted at the very last moment and at a very small scale. We demonstrate that movement patterns of foraging seabirds can be heavily driven by processes unobservable with conventional methodology. Except perhaps for large scale processes, local-enhancement seems to be the only ruling mechanism; this has profound implications for ecosystem-based management of marine areas. PMID:24523892

  20. Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries

    SciTech Connect

    Dierberg, F.E. [DB Environmental Labs., Inc., Rockledge, FL (United States); Zaitzeff, J. [National Oceanographic and Atmospheric Adminstration, Washington, DC (United States)

    1997-08-01

    After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R{sup 2}) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R{sup 2}=0.94, 0.82 and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.
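The bivariate model described above (an upwelling band ratio plus true color as joint predictors of Chl a) can be sketched with ordinary least squares. The variable names and data below are illustrative, not the study's:

```python
import numpy as np

def fit_bivariate(band_ratio, color, chl_a):
    """Ordinary least squares for chl_a ~ b0 + b1*band_ratio + b2*color.

    band_ratio: ratioed reflectance in the 625-710 nm range (illustrative).
    color:      true-color measurement used as the second predictor.
    Returns (coefficients, R^2).
    """
    X = np.column_stack([np.ones(len(band_ratio)), band_ratio, color])
    y = np.asarray(chl_a, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot
```

With real data, the R^2 returned here corresponds to the coefficients of determination (0.94, 0.82, 0.74) reported per estuary.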

  1. Timothy York, timothy.york@gmail.com (A paper written under the guidance of Prof. Raj Jain). Image sensors are everywhere. They are present in single shot digital cameras, digital video cameras, embedded in cellular phones, and many more places. W

    E-print Network

    Jain, Raj

    Contents (fragment): … of Silicon Image Sensors; 1.2 Measuring Light Intensity; 1.3 CCD Image Sensors; 1.4 CMOS Image Sensors. Abstract: Image sensors are everywhere. They are present in single shot digital cameras and digital video cameras. The paper covers the fundamentals of how a digital image sensor works, focusing on how photons are converted into electrical signals

  2. An Explanation for Camera Perspective Bias in Voluntariness Judgment for Video-Recorded Confession: Suggestion of Cognitive Frame

    Microsoft Academic Search

    Kwangbai Park; Jimin Pyo

    Three experiments were conducted to test the hypothesis that differences in voluntariness judgment for a custodial confession filmed with different camera focuses ("camera perspective bias") could occur because a particular camera focus conveys a suggestion of a particular cognitive frame. In Experiment 1, 146 juror-eligible adults in Korea showed a camera perspective bias in voluntariness judgment with a simulated

  3. The Dark Energy Survey CCD imager design

    SciTech Connect

    Cease, H.; DePoy, D.; Diehl, H.T.; Estrada, J.; Flaugher, B.; Guarino, V.; Kuk, K.; Kuhlmann, S.; Schultz, K.; Schmitt, R.L.; Stefanik, A.; /Fermilab /Ohio State U. /Argonne

    2008-06-01

    The Dark Energy Survey is planning to use a 3 sq. deg. camera that houses a {approx} 0.5m diameter focal plane of 62 2kx4k CCDs. The camera vessel including the optical window cell, focal plate, focal plate mounts, cooling system and thermal controls is described. As part of the development of the mechanical and cooling design, a full scale prototype camera vessel has been constructed and is now being used for multi-CCD readout tests. Results from this prototype camera are described.
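A back-of-envelope check on the figures quoted above (62 CCDs of 2k x 4k format) can be written out directly; the 16-bit sample size is an assumption, not stated in the abstract:

```python
def focal_plane_pixels(n_ccds=62, cols=2048, rows=4096):
    """Total pixel count for a mosaic of identical CCDs."""
    return n_ccds * cols * rows

def raw_frame_bytes(n_pixels, bytes_per_pixel=2):
    """Uncompressed frame size, assuming 16-bit (2-byte) samples."""
    return n_pixels * bytes_per_pixel
```

This gives roughly 5.2 x 10^8 pixels, i.e. on the order of 1 GB per uncompressed 16-bit frame.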

  4. Evaluating the Effects of Camera Perspective in Video Modeling for Children with Autism: Point of View versus Scene Modeling

    ERIC Educational Resources Information Center

    Cotter, Courtney

    2010-01-01

    Video modeling has been used effectively to teach a variety of skills to children with autism. This body of literature is characterized by a variety of procedural variations including the characteristics of the video model (e.g., self vs. other, adult vs. peer). Traditionally, most video models have been filmed using third person perspective…

  5. EVENT-DRIVEN VIDEO CODING FOR OUTDOOR WIRELESS MONITORING CAMERAS Zichong Chen, Guillermo Barrenetxea and Martin Vetterli

    E-print Network

    Vetterli, Martin

    Running on batteries and a solar panel, the system needs to minimize the transmitted data size because of the radio transceiver. Conventional video coding standards such as H.264 are inefficient here, as they ignore the "meaning" of video content and thus waste many bits.

  6. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.
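The heat path in such a cooler is plain conduction through the copper tape and plate into the circulating coolant. A one-dimensional estimate (illustrative values, not from the report; copper has thermal conductivity k of roughly 400 W/m.K):

```python
def conduction_watts(k, area_m2, dT_kelvin, length_m):
    """Steady-state one-dimensional conduction: Q = k * A * dT / L.

    k:         thermal conductivity of the path (W/m.K), ~400 for copper.
    area_m2:   cross-sectional area of the copper tape/strap.
    dT_kelvin: temperature difference camera-to-cooled-enclosure.
    length_m:  length of the conduction path.
    """
    return k * area_m2 * dT_kelvin / length_m
```

With a 1 cm^2 strap, a 5 cm path, and a 20 K drop, this carries about 16 W, comfortably more than a small camera's dissipation.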

  7. CCD Micrometer of the Mykolayiv Axial Meridian Circle

    Microsoft Academic Search

    A. N. Kovalchuk; Yu. I. Protsyuk; A. V. Shulga

    1997-01-01

    Observations of faint stars are very effective with a CCD eyepiece micrometer. Such a micrometer was designed for the Axial Meridian Circle (AMC) of the Mykolayiv Astronomical Observatory. It includes a 256 × 288 CCD matrix with a pixel size of 32 × 24 µm. The matrix is mounted in a vacuum camera and cooled down to -40° C with thermoelectric

  8. Large area CCD image sensors for space astronomy

    NASA Technical Reports Server (NTRS)

    Schwarzschild, M.

    1979-01-01

    The Defense Advanced Research Projects Agency (DARPA) has a substantial program to develop a 2200 x 2200 pixel CCD (Charge Coupled Device) mosaic array made up of 400 individual CCD's, 110 x 110 pixels square. This type of image sensor appeared to have application in space and ground-based astronomy. Under this grant a CCD television camera system was built which was capable of operating an array of 4 CCD's, to explore the suitability of the CCD for astronomical applications. Two individually packaged CCD's were received and evaluated. Evaluation of the basic characteristics of the best individual chips was encouraging, but the manufacturer found that their yield in manufacturing this design is too low to supply sufficient CCD's for the DARPA mosaic array. The potential utility of large mosaic arrays in astronomy is still substantial and continued monitoring of the manufacturer's progress in the coming year is recommended.

  9. Statistical Calibration of the CCD Imaging Process

    Microsoft Academic Search

    Yanghai Tsin; Visvanathan Ramesh; Takeo Kanade

    2001-01-01

    Charge-Coupled Device (CCD) cameras are widely used imaging sensors in computer vision systems. Many pho- tometric algorithms, such as shape from shading, color constancy, and photometric stereo, implicitly assume that the image intensity is proportional to scene radiance. The actual image measurements deviate significantly from this assumption since the transformation from scene radiance to image intensity is non-linear and is
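Correcting for the nonlinearity noted above is often done by inverting a parametric response model. The power-law (gamma) model below is a common illustrative stand-in, not the statistical calibration the paper itself develops:

```python
def linearize(intensity, gamma=2.2):
    """Invert an assumed power-law camera response I = E**(1/gamma),
    recovering a value proportional to scene radiance E.

    intensity: pixel value normalized to [0, 1].
    gamma:     assumed response exponent (2.2 is a typical default).
    """
    return intensity ** gamma
```

After this inversion, photometric algorithms that assume intensity proportional to radiance (shape from shading, photometric stereo) can be applied to the linearized values.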

  10. Are traditional methods of determining nest predators and nest fates reliable? An experiment with Wood Thrushes (Hylocichla mustelina) using miniature video cameras

    USGS Publications Warehouse

    Williams, G.E.; Wood, P.B.

    2002-01-01

    We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998-2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.

  11. Upgrading a CCD camera for astronomical use

    E-print Network

    Lamecker, James Frank

    1993-01-01

    [Columns of double-star magnitude and separation data from Table 2, with entries including 95 Herculis and Mizar, omitted.] The data in Table 2 was from Menzel. … overlapped by the primary. An example of how big the brighter… At the small separation of 2.4 arc seconds, cz Lyrae was an ovular blob. See Figure 16. When seen on the monitor or film, ( Coronae Borealis had an ovular shape but its individual components were not distinguishable. One half of the blob was fainter than...

  12. Reliability of imaging CCD's

    NASA Technical Reports Server (NTRS)

    Beal, J. R.; Borenstein, M. D.; Homan, R. A.; Johnson, D. L.; Wilson, D. D.; Young, V. F.

    1979-01-01

    Report on reliability of imaging charge-coupled devices (CCD's) is intended to augment rather meager existing information on CCD reliability. Study focuses on electrical and optical performance tests, packaging constraints, and failure modes of one commercially available device (Fairchild CCD121H).

  13. Digital autoradiography using room temperature CCD and CMOS imaging technology

    Microsoft Academic Search

    Jorge Cabello; Alexis Bailey; Ian Kitchen; Mark Prydderch; Andy Clark; Renato Turchetta; Kevin Wells

    2007-01-01

    CCD (charge-coupled device) and CMOS imaging technologies can be applied to thin tissue autoradiography as potential imaging alternatives to conventional film. In this work, we compare two particular devices: a CCD operating in slow scan mode and a CMOS-based active pixel sensor operating at near video rates. Both imaging sensors have been operated at room temperature using direct

  14. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. L.

    1992-01-01

    A small camera system is described for remote viewing applications that employs fisheye optics and electronic processing to provide pan, tilt, zoom, and rotational movements. The fisheye lens is designed to give a complete hemispherical FOV with significant peripheral distortion that is corrected with high-speed electronic circuitry. Flexible control of the viewing requirements is provided by a programmable transformation processor so that pan/tilt/rotation/zoom functions can be accomplished without mechanical movements. Images are presented that were taken with a prototype system using a CCD camera, and 5 frames/sec can be acquired from a 180-deg FOV. The image-transformation device can provide multiple images with different magnifications and pan/tilt/rotation sequences at frame rates compatible with conventional video devices. The system is of interest for object tracking, surveillance, and viewing in constrained environments that would otherwise require the use of several cameras.
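The electronic pan/tilt works by remapping pixels rather than moving optics. Under an equidistant fisheye model (r = f·θ; an assumption, since the abstract does not name the lens model), the source pixel for a desired viewing direction can be computed as:

```python
import math

def fisheye_pixel(direction, f_pixels, cx, cy):
    """Map a 3-D view direction (x, y, z), z along the optical axis,
    to fisheye image coordinates under the equidistant model r = f * theta.

    f_pixels: focal length expressed in pixels (assumed calibration).
    cx, cy:   image coordinates of the optical center.
    """
    x, y, z = direction
    theta = math.atan2(math.hypot(x, y), z)  # angle off the optical axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    r = f_pixels * theta                     # radial distance in pixels
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

A dewarping processor evaluates this mapping (or a look-up table built from it) for every output pixel of the synthetic pan/tilt/zoom view.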

  15. Video Visualization Gareth Daniel Min Chen

    E-print Network

    Grant, P. W.

    Video Visualization. Gareth Daniel, Min Chen, University of Wales Swansea, UK. Abstract: Video data is generated by the entertainment industry, security and traffic cameras, and video conferencing systems. … a novel methodology for "summarizing" video sequences using volume visualization techniques. We outline

  16. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable, so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in a severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. It was confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  17. Mapping herbage biomass and nitrogen status in an Italian ryegrass (Lolium multiflorum L.) field using a digital video camera with balloon system

    NASA Astrophysics Data System (ADS)

    Kawamura, Kensuke; Sakuno, Yuji; Tanaka, Yoshikazu; Lee, Hyo-Jin; Lim, Jihyun; Kurokawa, Yuzo; Watanabe, Nariyasu

    2011-01-01

    Improving current precision nutrient management requires practical tools to aid the collection of site specific data. Recent technological developments in commercial digital video cameras and the miniaturization of systems on board low-altitude platforms offer cost effective, real time applications for efficient nutrient management. We tested the potential use of commercial digital video camera imagery acquired by a balloon system for mapping herbage biomass (BM), nitrogen (N) concentration, and herbage mass of N (Nmass) in an Italian ryegrass (Lolium multiflorum L.) meadow. The field measurements were made at the Setouchi Field Science Center, Hiroshima University, Japan on June 5 and 6, 2009. The field consists of two 1.0 ha Italian ryegrass meadows, which are located in an east-facing slope area (230 to 240 m above sea level). Plant samples were obtained at 20 sites in the field. A captive balloon was used for obtaining digital video data from a height of approximately 50 m (approximately 15 cm spatial resolution). We tested several statistical methods, including simple and multivariate regressions, using forage parameters (BM, N, and Nmass) and three visible color bands or color indices based on ratio vegetation index and normalized difference vegetation index. Of the various investigations, a multiple linear regression (MLR) model showed the best cross validated coefficients of determination (R2) and minimum root-mean-squared error (RMSECV) values between observed and predicted herbage BM (R2 = 0.56, RMSECV = 51.54), Nmass (R2 = 0.65, RMSECV = 0.93), and N concentration (R2 = 0.33, RMSECV = 0.24). Applying these MLR models on mosaic images, the spatial distributions of the herbage BM and N status within the Italian ryegrass field were successfully displayed at a high resolution. Such fine-scale maps showed higher values of BM and N status at the bottom area of the slope, with lower values at the top of the slope.
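The color indices used as predictors above follow the usual ratio and normalized-difference forms; the helpers below sketch them for arbitrary visible bands (the band choices are illustrative, not the study's exact definitions):

```python
def ratio_index(band_a, band_b):
    """Ratio vegetation-style index between two visible bands,
    e.g. green over red from the digital video camera."""
    return band_a / band_b

def normalized_difference(band_a, band_b):
    """Normalized-difference-style index, bounded in (-1, 1)."""
    return (band_a - band_b) / (band_a + band_b)
```

Indices computed per pixel this way, together with the raw bands, feed the multiple linear regression (MLR) model that produced the R2 and RMSECV values reported above.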

  18. Identification of Prey Captures in Australian Fur Seals (Arctocephalus pusillus doriferus) Using Head-Mounted Accelerometers: Field Validation with Animal-Borne Video Cameras

    PubMed Central

    Volpov, Beth L.; Hoskins, Andrew J.; Battaile, Brian C.; Viviant, Morgane; Wheatley, Kathryn E.; Marshall, Greg; Abernathy, Kyler; Arnould, John P. Y.

    2015-01-01

    This study investigated prey captures in free-ranging adult female Australian fur seals (Arctocephalus pusillus doriferus) using head-mounted 3-axis accelerometers and animal-borne video cameras. Acceleration data were used to identify individual attempted prey captures (APC), and video data were used to independently verify APC and prey types. Results demonstrated that head-mounted accelerometers could detect individual APC but were unable to distinguish among prey types (fish, cephalopod, stingray) or between successful captures and unsuccessful capture attempts. Mean detection rate (true positive rate) on individual animals in the testing subset ranged from 67-100%, and mean detection on the testing subset averaged across 4 animals ranged from 82-97%. Mean false-positive (FP) rate ranged from 15-67% individually in the testing subset, and 26-59% averaged across 4 animals. Surge and sway had significantly greater detection rates, but conversely also greater FP rates, compared to heave. Video data also indicated that some head movements recorded by the accelerometers were unrelated to APC and that a peak in acceleration variance did not always equate to an individual prey item. The results of the present study indicate that head-mounted accelerometers provide a complementary tool for investigating foraging behaviour in pinnipeds, but that detection and FP correction factors need to be applied for reliable field application. PMID:26107647
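The detection and false-positive figures above are simple ratios over event counts. One plausible formulation is sketched below (the paper's exact definitions may differ):

```python
def detection_rate(true_positives, video_verified_events):
    """True-positive rate: accelerometer-detected captures divided by
    captures confirmed on the animal-borne video."""
    return true_positives / video_verified_events

def false_positive_rate(false_positives, total_detections):
    """Share of accelerometer detections that did not match any
    video-verified capture (one plausible FP definition)."""
    return false_positives / total_detections
```

For example, 82 detections out of 100 video-verified captures gives a detection rate of 0.82, within the 82-97% range reported above.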

  19. Wide-Field CCD Imaging at CFHT: The MOCAM Example

    Microsoft Academic Search

    J.-C. Cuillandre; Y. Mellier; J.-P. Dupin; P. Tilloles; R. Murowinski; D. Crampton; R. Wooff; G. A. Luppino

    1996-01-01

    We describe a new 4096X4096 pixel CCD mosaic camera (MOCAM) available at the prime focus of the Canada-France-Hawaii Telescope (CFHT). The camera is a mosaic of four 2048X2048 Loral frontside-illuminated CCDs with 15 micron pixels, providing a field of view of 14' X 14' at a scale of 0.21"

  20. AN ACTIVE CAMERA SYSTEM FOR ACQUIRING MULTI-VIEW VIDEO Robert T. Collins, Omead Amidi, and Takeo Kanade

    E-print Network

    Collins, Robert

    For applications such as recognition, 3D reconstruction, entertainment and sports, it is often desirable to capture a set of views. However, in surveillance or sports applications it is not possible to predict beforehand the precise … is useful in practical situations since all cameras may not necessarily be installed at the same time

  1. Sensitivity analysis of camera calibration

    Microsoft Academic Search

    Jan Heikkila

    1992-01-01

    To utilize the full potential of CCD cameras, a careful design must be performed. The main contribution to the final precision and reliability comes from the camera calibration. Both the precision of the estimated parameters or any functions of them (e.g., object coordinates) and the sensitivity of the system with respect to undetected model errors are of

  2. Omnidirectional video

    Microsoft Academic Search

    Christopher Geyer; Konstantinos Daniilidis

    2003-01-01

    Omnidirectional video enables direct surround immersive viewing of a scene by warping the original image into the correct perspective given a viewing direction. However, novel views from viewpoints off the camera path can only be obtained if we solve the 3D motion and calibration problem. In this paper we address the case of a parabolic catadioptric camera - a paraboloidal

  3. A geometric comparison of video camera-captured raster data to vector-parented raster data generated by the X-Y digitizing table

    NASA Technical Reports Server (NTRS)

    Swalm, C.; Pelletier, R.; Rickman, D.; Gilmore, K.

    1989-01-01

    The relative accuracy of a georeferenced raster data set captured by the Megavision 1024XM system using the Videk Megaplus CCD cameras is compared to a georeferenced raster data set generated from vector lines manually digitized through the ELAS software package on a Summagraphics X-Y digitizer table. The study also investigates the amount of time necessary to fully complete the rasterization of the two data sets, evaluating individual areas such as time necessary to generate raw data, time necessary to edit raw data, time necessary to georeference raw data, and accuracy of georeferencing against a norm. Preliminary results exhibit a high level of agreement between areas of the vector-parented data and areas of the captured file data where sufficient control points were chosen. Maps of 1:20,000 scale were digitized into raster files of 5 meter resolution per pixel and overall error in RMS was estimated at less than eight meters. Such approaches offer time and labor-saving advantages as well as increasing the efficiency of project scheduling and enabling the digitization of new types of data.
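The RMS figure quoted above (under eight meters) is the root-mean-square of control-point displacements between the georeferenced raster and ground truth. A minimal sketch:

```python
import math

def rms_displacement(pts_a, pts_b):
    """RMS positional error between matched (x, y) control points,
    in the same units as the input coordinates (e.g. meters)."""
    sq = [(ax - bx) ** 2 + (ay - by) ** 2
          for (ax, ay), (bx, by) in zip(pts_a, pts_b)]
    return math.sqrt(sum(sq) / len(sq))
```

At 5 m per pixel, an RMS under 8 m corresponds to sub-two-pixel agreement between the two data sets.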

  4. Evaluating intensified camera systems

    SciTech Connect

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components including microchannel plate or proximity focused diode image intensifiers, electro-static image tubes, or electron-bombarded CCDs affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.

  5. Video Quality for Face Detection, Recognition, and PAVEL KORSHUNOV and WEI TSANG OOI

    E-print Network

    Ooi, Wei Tsang

    Keywords: Algorithm, Video Quality, Blockiness, Mutual Information, Video Surveillance. … a video surveillance system can automatically analyze video, without alerting the human guard, until … these automated systems have more than one remote video sensor unit (surveillance camera, mobile camera

  6. A MULTI-CAMERA SURVEILLANCE SYSTEM THAT ESTIMATES QUALITY-OF-VIEW MEASUREMENT

    E-print Network

    British Columbia, University of

    We present a multi-camera video surveillance system with automatic camera selection, based on a new confidence measure. Keywords: Quality-Of-View, multi-camera, camera selection. … In a video surveillance system that employs multiple cameras, one key problem is selecting the most

  7. Digital video.

    PubMed

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed. PMID:15123920

  8. Acquisition cameras and wavefront sensors for the GTC 10-m telescope

    NASA Astrophysics Data System (ADS)

    Kohley, Ralf; Suárez Valles, Marcos; Burley, Gregory S.; Cavaller Marqués, Lluis; Vilela, Rafael; Justribó, Tomás

    2004-09-01

    The GTC Acquisition Cameras and Wavefront Sensors are based on a modular design with remote, low-profile and lightweight CCD heads and a compact CCD controller. The cameras employ E2V Technologies Peltier-cooled CCD47-20 and CCD39-01 detectors, which achieve 1 Hz and 200 Hz full frame readouts, respectively. The CCD controller is a modified version of the Magellan CCD controller (Greg Burley - OCIW), which is linked to the GTC control system. We present the detailed design and first performance results of the cameras.

  9. A video authentication technique

    SciTech Connect

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system.
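The comparison step described above can be sketched as follows. This is an illustrative reconstruction, not the published circuit's logic: both ends derive the same pseudo-random sample points from a shared seed, and the recorder end declares the frame authentic if the sampled gray values agree within a tolerance:

```python
import random

def authenticate(reference, received, n_points=64, tolerance=8,
                 max_mismatches=3, seed=0):
    """Compare gray-scale values (0-255) at pseudo-random sample points
    of two frames given as 2-D lists. The frame is authenticated if at
    most `max_mismatches` sampled points differ by more than `tolerance`.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)  # camera and recorder share this seed
    rows, cols = len(reference), len(reference[0])
    mismatches = 0
    for _ in range(n_points):
        r, c = rng.randrange(rows), rng.randrange(cols)
        if abs(reference[r][c] - received[r][c]) > tolerance:
            mismatches += 1
    return mismatches <= max_mismatches
```

An identical frame passes; a substituted frame fails at many sample points and is rejected, mirroring the behavior described in the abstract.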

  10. A video authentication technique

    SciTech Connect

    Johnson, C.S.

    1987-07-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system.

  11. Closed circuit color video system on the TFTR machine

    SciTech Connect

    Kolinchak, G.; Wertenbaker, J. [Princeton Plasma Physics Lab., NJ (United States)

    1995-12-31

    This paper describes the Closed Circuit Color Video System used on the TFTR machine and several of its systems. The system was installed for DT Operations to enhance surveillance and to diagnose problems such as frost, leaks, blowoffs, arcing and many other day-to-day operational problems. The system consists of 23 digital color cameras with pan, tilt, and zoom capability. Three portable camera carts with optical transceivers are also provided for special cases when it is necessary to have close machine views with electrical safety breaks. Primary controlling and monitoring is provided at the Shift Supervisor Station, which plays an essential role in each day's operations. Secondary control and monitor stations are located in the Laboratory Auditorium, for large screen projection, and the Visitor Gallery; in addition these two stations can operate a camera in the TFTR Control Room. The system has two types of switchers, a passive type for switching between control stations and a sequential type for switching between cameras. Modifications were also incorporated in the control drive circuits to double the acceptable neutron fluence. Degradation in camera and control performance due to neutron fluence from DD/DT operations, activation levels of the Test Cell cameras, and the cool-down profile are discussed in this paper. The system maintenance, repair, camera replacement frequency, and the possibility of improving camera longevity by selection of parts, shielding and CCD cooling are also discussed.

  12. Video Surveillance Unit

    SciTech Connect

    Martinez, R.L.; Johnson, C.S.

    1990-01-01

    The Video Surveillance Unit (VSU) has been designed to provide a flexible, easy to operate video surveillance and recording capability for permanent rack-mounted installations. The system consists of a single rack-mountable chassis and a camera enclosure. The chassis contains two 8 mm video recorders, a color monitor, system controller board, a video authentication verifier module (VAVM) and a universal power supply. A separate camera housing contains a solid state camera and a video authentication processor module (VAPM). Through changes in the firmware in the system, the recorders can be commanded to record at the same time, on alternate time cycles, or sequentially. Each recorder is capable of storing up to 26,000 scenes consisting of 6 to 8 video frames. The firmware can be changed to provide fewer recordings with more frames per scene. The modular video authentication system provides verification of the integrity of the video transmission line between the camera and the recording chassis. 5 figs.

  13. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  14. A 9 Å single particle reconstruction from CCD captured images on a 200 kV electron cryomicroscope

    E-print Network

    Jiang, Wen

    A 9 Å single particle reconstruction from CCD captured images on a 200 kV electron cryomicroscope, from images acquired from a 4 k × 4 k Gatan CCD on a 200 kV electron cryomicroscope (Zhou and Chiu, 2003). Using a 1 k × 1 k charge coupled device (CCD) camera, images of protein crystals

  15. Lights, camera…citizen science: assessing the effectiveness of smartphone-based video training in invasive plant identification.

    PubMed

    Starr, Jared; Schweik, Charles M; Bush, Nathan; Fletcher, Lena; Finn, Jack; Fish, Jennifer; Bargeron, Charles T

    2014-01-01

    The rapid growth and increasing popularity of smartphone technology is putting sophisticated data-collection tools in the hands of more and more citizens. This has exciting implications for the expanding field of citizen science. With smartphone-based applications (apps), it is now increasingly practical to remotely acquire high quality citizen-submitted data at a fraction of the cost of a traditional study. Yet, one impediment to citizen science projects is the question of how to train participants. The traditional "in-person" training model, while effective, can be cost prohibitive as the spatial scale of a project increases. To explore possible solutions, we analyze three training models: 1) in-person, 2) app-based video, and 3) app-based text/images in the context of invasive plant identification in Massachusetts. Encouragingly, we find that participants who received video training were as successful at invasive plant identification as those trained in-person, while those receiving just text/images were less successful. This finding has implications for a variety of citizen science projects that need alternative methods to effectively train participants when in-person training is impractical. PMID:25372597

  16. Lights, Camera…Citizen Science: Assessing the Effectiveness of Smartphone-Based Video Training in Invasive Plant Identification

    PubMed Central

    Starr, Jared; Schweik, Charles M.; Bush, Nathan; Fletcher, Lena; Finn, Jack; Fish, Jennifer; Bargeron, Charles T.

    2014-01-01

    The rapid growth and increasing popularity of smartphone technology is putting sophisticated data-collection tools in the hands of more and more citizens. This has exciting implications for the expanding field of citizen science. With smartphone-based applications (apps), it is now increasingly practical to remotely acquire high quality citizen-submitted data at a fraction of the cost of a traditional study. Yet, one impediment to citizen science projects is the question of how to train participants. The traditional “in-person” training model, while effective, can be cost prohibitive as the spatial scale of a project increases. To explore possible solutions, we analyze three training models: 1) in-person, 2) app-based video, and 3) app-based text/images in the context of invasive plant identification in Massachusetts. Encouragingly, we find that participants who received video training were as successful at invasive plant identification as those trained in-person, while those receiving just text/images were less successful. This finding has implications for a variety of citizen science projects that need alternative methods to effectively train participants when in-person training is impractical. PMID:25372597

  17. Television applications of interline-transfer CCD arrays

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.

    1976-01-01

    The design features and characteristics of interline transfer (ILT) CCD arrays with 190 x 244 and 380 x 488 image elements are reviewed, with emphasis on optional operating modes and system application considerations. It was shown that the observed horizontal resolution for a TV system using an ILT image sensor can approach the aperture response limit determined by photosensor site width, resulting in enhanced resolution for moving images. Preferred camera configurations and readout clocking modes for maximum resolution and low-light sensitivity are discussed, including a very low light level intensifier CCD concept. Several camera designs utilizing ILT-CCD arrays are described. These cameras demonstrate feasibility in applications where small size, low-power/low-voltage operation, high sensitivity and extreme ruggedness are either desired or mandatory system requirements.

  18. Digital camera self-calibration

    NASA Astrophysics Data System (ADS)

    Fraser, C.

    1997-08-01

    Over the 25 years since the introduction of analytical camera self-calibration there has been a revolution in close-range photogrammetric image acquisition systems. High-resolution, large-area "digital" CCD sensors have all but replaced film cameras. Throughout the period of this transition, self-calibration models have remained essentially unchanged. This paper reviews the application of analytical self-calibration to digital cameras. Computer vision perspectives are touched upon, the quality of self-calibration is discussed, and an overview is given of each of the four main sources of departures from collinearity in CCD cameras. Practical issues are also addressed and experimental results are used to highlight important characteristics of digital camera self-calibration.
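    The departures from collinearity that self-calibration models correct are typically expressed as additive corrections to the image coordinates. As an illustration only (not the paper's own formulation), the radial term of the widely used Brown distortion model can be sketched as follows; the coefficients k1 and k2 below are hypothetical values, not calibration results:

    ```python
    # Sketch of the radial component of the Brown distortion model, one of the
    # common departures from collinearity handled in camera self-calibration.
    # k1 and k2 are illustrative coefficients, not values from the paper.

    def undistort_radial(x, y, k1, k2):
        """Correct image coordinates (relative to the principal point)
        for radial lens distortion: scale factor 1 + k1*r^2 + k2*r^4."""
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * factor, y * factor

    # A point at the principal point is unaffected by radial distortion:
    print(undistort_radial(0.0, 0.0, k1=-1e-3, k2=1e-6))  # -> (0.0, 0.0)
    ```

    Points farther from the principal point receive a larger correction, which is why self-calibration networks benefit from targets spread across the full image format.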

  19. Digital camera self-calibration

    NASA Astrophysics Data System (ADS)

    Fraser, Clive S.

    Over the 25 years since the introduction of analytical camera self-calibration there has been a revolution in close-range photogrammetric image acquisition systems. High-resolution, large-area 'digital' CCD sensors have all but replaced film cameras. Throughout the period of this transition, self-calibration models have remained essentially unchanged. This paper reviews the application of analytical self-calibration to digital cameras. Computer vision perspectives are touched upon, the quality of self-calibration is discussed, and an overview is given of each of the four main sources of departures from collinearity in CCD cameras. Practical issues are also addressed and experimental results are used to highlight important characteristics of digital camera self-calibration.

  20. CCD and IR array controllers

    Microsoft Academic Search

    Robert W. Leach; Frank J. Low

    2000-01-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation

  1. Camera Calibration with Known Rotation

    Microsoft Academic Search

    Jan-michael Frahm; Reinhard Koch

    2003-01-01

    We address the problem of using external rotation information with uncalibrated video sequences. The main problem addressed is: what is the benefit of the orientation information for camera calibration? It is shown that in the case of a rotating camera the camera calibration problem is linear even in the case that all intrinsic parameters vary. For arbitrarily moving

  2. Mirror pendulum pose measurement by camera calibration

    NASA Astrophysics Data System (ADS)

    Li, Lulu; Zhao, Wenchuan; Wu, Fan; Liu, Yong

    2014-09-01

    A simple method for planar mirror pendulum pose measurement is proposed. The method needs only an LCD screen and a CCD camera. The LCD screen displays calibration patterns, and the virtual images (VIs) reflected by the mirror are captured by the CCD camera. By camera calibration, the pose relationships between the camera and VI coordinate systems can be determined. The pendulum poses of the mirror are then obtained according to coordinate transformation and the reflection principle. This method is simple and convenient, and has significant application potential in mirror pendulum pose measurement.

  3. NEW APPROACH FOR CALIBRATING OFF-THE-SHELF DIGITAL CAMERAS

    Microsoft Academic Search

    A. F. Habib; S. W. Shin; M. F. Morgan; Wg Iii

    ABSTRACT: Recent developments of digital cameras in terms of the size of Charge Coupled Device (CCD) arrays and reduced costs are leading to their application to traditional as well as new photogrammetric surveying and mapping functions. Digital cameras, intended to replace conventional film-based mapping cameras, are becoming available along with many smaller-format digital cameras capable of precise measurement applications. All

  4. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  5. Advisory Surveillance Cameras Page 1 of 2

    E-print Network

    Liebling, Michael

    Advisory - Surveillance Cameras, May 2008, Page 1 of 2. ADVISORY -- USE OF CAMERAS/VIDEO SURVEILLANCE: balancing the employer's right to video surveillance and the employee's privacy rights. When considering ... ON CAMPUS: Surveillance cameras or devices may be acquired, installed, modified, replaced or removed only

  6. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H. [Los Alamos National Lab., NM (United States); McCurnin, T.W.; Sanchez, P.G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States). Los Alamos Operations

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog-to-digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.
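    The double-correlated sampling mentioned in the abstract removes reset switching noise because the same reset offset appears in both the reference and signal samples of a pixel, so it cancels in their difference. A minimal sketch of the principle (the numbers are illustrative, not values from this camera):

    ```python
    # Toy illustration of (double-)correlated sampling: the CCD output is
    # sampled once at the reset level and once at the signal level, and the
    # common kTC reset offset cancels in the difference.

    def correlated_double_sample(reset_level, signal_level):
        """CDS output: signal sample minus reset sample."""
        return signal_level - reset_level

    reset_offset = 47.25    # illustrative reset offset, ADU (exact binary value)
    true_signal = 1000.0    # illustrative photo-charge signal, ADU

    reset_sample = reset_offset
    signal_sample = reset_offset + true_signal

    print(correlated_double_sample(reset_sample, signal_sample))  # -> 1000.0
    ```

    In a real processor chain the two samples are separated by a fixed delay, which is why the abstract's signal processor uses delay elements.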

  7. Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1993-01-01

    A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be ±2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
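    The epipolar constraint that resolves the stereo correspondence problem can be sketched briefly: given a fundamental matrix F relating the two cameras, a point x in image 1 maps to the epipolar line l' = F x in image 2, and any true correspondence x' satisfies x'ᵀ F x = 0, so the search for a match is restricted to that line. The F below is a toy skew-symmetric matrix (a pure-translation camera pair), not data from the wind-tunnel system:

    ```python
    # Sketch of the epipolar constraint: candidate matches in image 2 are
    # tested against the epipolar line l' = F @ x induced by a point x in
    # image 1. F here is an illustrative toy matrix.
    import numpy as np

    F = np.array([[ 0.0, -1.0,  2.0],
                  [ 1.0,  0.0, -3.0],
                  [-2.0,  3.0,  0.0]])

    x = np.array([1.0, 0.0, 1.0])          # homogeneous point in image 1
    line = F @ x                            # epipolar line in image 2: [2, -2, -2]

    x_match = np.array([2.0, 1.0, 1.0])     # candidate lying on the line
    x_off = np.array([2.0, 2.0, 1.0])       # candidate off the line

    print(float(x_match @ line), float(x_off @ line))  # -> 0.0 -2.0
    ```

    A residual of zero (up to noise) marks a geometrically consistent correspondence; non-zero residuals are rejected, which removes the ambiguity for featureless objects such as smoke-marked vortex cores.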

  8. Towards fish-eye camera based in-home activity assessment.

    PubMed

    Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha

    2008-01-01

    Indoor localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for monitoring people in indoor environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions to solve these issues. PMID:19163225

  9. The Video Guide. Second Edition.

    ERIC Educational Resources Information Center

    Bensinger, Charles

    Intended for both novice and experienced users, this guide is designed to inform and entertain the reader in unravelling the jargon surrounding video equipment and in following carefully delineated procedures for its use. Chapters include "Exploring the Video Universe,""A Grand Tour of Video Technology,""The Video System,""The Video Camera,""The…

  10. CCD Build a Table

    NSDL National Science Digital Library

    The National Center for Education Statistics has launched a new and innovative tool that allows users to create customized tables using data from the Common Core of Data (CCD). As the Department of Education's primary database on elementary and secondary US public schools, the CCD provides national statistical data in three main categories: general descriptive information on schools and school districts; data on students and staff; and fiscal data, which covers revenues and current expenditures. With the Build a Table application, users can now design their own tables of CCD public school data for states, counties, and districts, using data from multiple years. There is a comprehensive tutorial available for first time users needing step-by-step instructions on the "build a table" process.

  11. Cone penetrometer deployed in situ video microscope for characterizing sub-surface soil properties

    SciTech Connect

    Lieberman, S.H.; Knowles, D.S. (Naval Command, San Diego, CA, United States); Kertesz, J. (San Diego State Univ. Foundation, CA, United States); and others

    1997-12-31

    In this paper we report on the development and field testing of an in situ video microscope that has been integrated with a cone penetrometer probe in order to provide a real-time method for characterizing subsurface soil properties. The video microscope system consists of a miniature CCD color camera system coupled with appropriate magnification and focusing optics to provide a field of view with a coverage of approximately 20 mm. The camera/optics system is mounted in a cone penetrometer probe so that the camera views the soil that is in contact with a sapphire window mounted on the side of the probe. The soil outside the window is illuminated by diffuse light provided through the window by an optical fiber illumination system connected to a white light source at the surface. The video signal from the camera is returned to the surface where it can be displayed in real-time on a video monitor, recorded on a video cassette recorder (VCR), and/or captured digitally with a frame grabber installed in a microcomputer system. In its highest resolution configuration, the in situ camera system has demonstrated a capability to resolve particle sizes as small as 10 μm. By using other lens systems to increase the magnification factor, smaller particles could be resolved; however, the field of view would be reduced. Initial field tests have demonstrated the ability of the camera system to provide real-time qualitative characterization of soil particle sizes. In situ video images also reveal information on the porosity of the soil matrix and the presence of water in the saturated zone. Current efforts are focused on the development of automated image processing techniques as a means of extracting quantitative information on soil particle size distributions. Data will be presented that compare data derived from digital images with conventional sieve/hydrometer analyses.

  12. Cameras Would Withstand High Accelerations

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Macenka, Steven A.; Puerta, Antonio M.

    1992-01-01

    Very rugged cameras with all-reflective optics proposed for use in presence of high accelerations. Optics consist of four coaxial focusing mirrors in Cassegrain configuration. Mirrors are conics or aspherics. Optics are achromatic, and imaging system overall passes light from extreme ultraviolet to far infrared. Charge-coupled-device video camera, film camera, or array of photodetectors placed at focal plane. Useful as portable imagers subject to rough handling, or instrumentation cameras mounted on severely vibrating or accelerating vehicles.

  13. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU) for the commercial digital camera Canon 5D MarkIII are presented. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. A comparison of the test results for the Canon 5D MarkIII and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
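    The electronic gain (e⁻/ADU) listed among the camera characteristics is commonly estimated with the mean-variance (photon transfer) method, though the abstract does not state which method the authors used: for shot-noise-limited flat fields, the variance in ADU² equals the mean signal in ADU divided by the gain. A minimal sketch with illustrative numbers:

    ```python
    # Sketch of the photon transfer (mean-variance) gain estimate:
    # for Poisson-dominated signal, var(ADU) = mean(ADU) / g, so
    # g = mean / var in e-/ADU. The numbers below are illustrative.

    def gain_from_photon_transfer(mean_adu, var_adu):
        """Electronic gain g (e-/ADU) from a shot-noise-limited flat field."""
        return mean_adu / var_adu

    # e.g. a flat field averaging 2000 ADU with variance 500 ADU^2:
    print(gain_from_photon_transfer(2000.0, 500.0))  # -> 4.0
    ```

    In practice the bias level and read-noise variance are subtracted first, and the slope is fitted over a series of exposure levels rather than a single point.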

  14. Major Features of the CCD Photometry Software Data Acquisition and Reduction Package Distributed by Optec, Inc.

    NASA Astrophysics Data System (ADS)

    Dickinson, Jon

    The CCD Photometry Software Data Acquisition and Reduction Package, by Optec Inc., is a three-program software package designed for collecting and analyzing images made by most commercially available CCD cameras. The software runs on any IBM AT-class computer with EGA or better graphics and a hard drive. Some of the important features of the programs are discussed.

  15. Time resolution capability of the XMM EPIC pnCCD in different readout modes.

    E-print Network

    Barnstedt, Jürgen

    Time resolution capability of the XMM EPIC pn-CCD in different readout modes. M. Kuster, 1999. One of the instruments on board of XMM will be the EPIC pn-CCD. The detector consists of four ... of the flight spare unit of the EPIC pn camera with respect to time resolution of all observation modes

  16. Degradation behavior and damage mechanisms of CCD image sensor with deep-UV laser radiation

    Microsoft Academic Search

    Flora M. Li; A. Nathan

    2004-01-01

    As the deep-ultraviolet (DUV) laser technology continues to mature, an increasing number of industrial applications is shifting to intense DUV radiation sources. This trend necessitates the development of DUV sensitive charge-coupled device (CCD) cameras to provide imaging capability for process control and inspection purposes. In this paper, we examine the effects of DUV laser radiation on CCD image sensor characteristics

  17. STIS-08 CCD Darks

    NASA Astrophysics Data System (ADS)

    Dressel, Linda

    2001-07-01

    Obtain dark frames twice daily, as in the corresponding calibration program. Construct standard calibration reference darks and updated hot pixel lists for use in pipeline reduction of SMOV3B data. Transition to the analogous calibration program when normal science operation with the CCD begins.

  18. STIS-08 CCD Darks

    Microsoft Academic Search

    Linda Dressel

    2001-01-01

    Obtain dark frames twice daily, as in the corresponding calibration program. Construct standard calibration reference darks and updated hot pixel lists for use in pipeline reduction of SMOV3B data. Transition to the analogous calibration program when normal science operation with the CCD begins.

  19. Trench CCD image sensor

    Microsoft Academic Search

    T. Yamada; A. Fukumoto

    1989-01-01

    The authors describe and present simulation data on the device structure, process flow, and operation of the Trench CCD (charge coupled device), which is being developed to increase the resolution of solid-state image sensors. The device provides larger dynamic range, higher sensitivity, and no image lag together with great packing density. A charge transfer channel formed around a trench eliminates

  20. Comparison of modern CCD and CMOS image sensor technologies and systems for low resolution imaging

    Microsoft Academic Search

    Bradley S. Carlson

    2002-01-01

    CMOS image sensors were introduced to the market in 1995 and in the past three years have taken significant market share from CCD sensors in the low-end digital camera markets (e.g., web cams). CMOS sensors boast low power dissipation, single supply operation and camera-on-a-chip integration, and CCD sensors boast high sensitivity and low noise. The sensors are based on inherently

  1. Calibration of the CCD photonic measuring system for railway inspection

    NASA Astrophysics Data System (ADS)

    Popov, D. V.; Ryabichenko, R. B.; Krivosheina, E. A.

    2005-08-01

    Increasing traffic speed is the most important task in the Moscow Metro. Requirements for traffic safety grow along with the speed increase. Currently, track inspection in the Moscow Metro is performed with a track measurement car built in 1954. The main drawbacks of this system are the absence of automated data processing and low accuracy. A non-contact photonic measurement system (KSIR) has been developed to solve this problem. A new track inspection car will be built in several months. This car will use two different track inspection systems and a car-locating subsystem based on track circuit counting. The KSIR consists of four subsystems: rail wear, height and track gauge measurement (BFSM); rail slump measurement (FIP); contact rail measurement (FKR); and speed, level and car locating (USI). Currently a new subsystem for wheel flange wear (IRK) is being developed. The KSIR carries out measurements in real-time mode. The BFSM subsystem contains 4 matrix CCD cameras and 4 infrared stripe illuminators. The FIP subsystem contains 4 line CCD cameras and 4 spot illuminators. The FKR subsystem contains 2 matrix CCD cameras and 2 stripe illuminators. The IRK subsystem contains 2 CCD cameras and 2 stripe illuminators. Calibration of each subsystem was carried out for its adjustment. In the first step, the KSIR obtains data from the photonic sensors, expressed in internal measurement units. Thanks to the calibration, in the second step the non-contact system converts the data to the metric measurement system.

  2. Optical CCD lock-in device for Raman difference spectroscopy

    Microsoft Academic Search

    Richard Heming; Helmut Herzog; Volker Deckert

    A new multi-channel lock-in technique for the detection of periodically varying Raman spectra is presented. Bacteriorhodopsin difference spectra demonstrate the capabilities of the scheme which incorporates sub-pixel precision spectra while avoiding environmentally induced drift. The modular VHDL-design enables chip-level compatibility with modern high-end liquid-cooled CCD camera devices.

  3. 2000 FPS digital airborne camera

    NASA Astrophysics Data System (ADS)

    Balch, Kris S.

    1998-11-01

    For many years 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne weapon testing, range tracking, and other hazardous environments. The companies and government agencies using these cameras are in need of replacing them with a more cost-effective solution. Film-based cameras still produce the best resolving capability. However, film development time, chemical disposal, non-optimal lighting conditions, recurring media cost, and faster digital analysis are factors influencing the desire for a 16 mm film camera replacement. This paper will describe a new imager from Kodak that has been designed to replace 16 mm high-speed film cameras. Also included is a detailed configuration, operational scenario, and cost analysis of Kodak's imager for airborne applications. The KODAK EKTAPRO HG Imager, Model 2000 is a high-resolution color or monochrome CCD camera especially designed for replacement of rugged high-speed film cameras. The HG Imager is a self-contained camera. It features a high-resolution [512x384], light-sensitive CCD sensor with an electronic shutter. This shutter provides blooming protection that prevents "smearing" of bright light sources, e.g., the camera looking into a bright sun reflection. The HG Imager is a very rugged camera packaged in a highly integrated housing. This imager operates from +22 to 42 VDC. The HG Imager has a similar interface and form factor to that of high-speed film cameras, e.g., the Photosonics 1B. However, the HG also has digital interfaces such as 100BaseT Ethernet and RS-485 that enable control and image transfer. The HG Imager is designed to replace 16 mm film cameras that support rugged testing applications.

  4. Adaptive sensitivity CCD image sensor

    Microsoft Academic Search

    Sarit Chen; Ran Ginosar

    1995-01-01

    The design of an adaptive sensitivity CCD image sensor is described. The sensitivity of each pixel is individually controlled (by changing its exposure time) to assure that it is operating in the linear range of the CCD response, and not in the cut-off or saturation regions. Thus, even though an individual CCD sensor is limited in its dynamic range, the

  5. Video-Level Monitor

    NASA Technical Reports Server (NTRS)

    Gregory, Ray W.

    1993-01-01

    Video-level monitor developed to provide full-scene monitoring of video and indicate the level of the brightest portion. Circuit design is nonspecific and can be inserted in any closed-circuit camera system utilizing RS170 or RS330 synchronization and standard CCTV video levels. System made of readily available, off-the-shelf components. Several units are in service.

  6. Video for Instructors.

    ERIC Educational Resources Information Center

    Munroe, L. Duncan; Price, J.

    This manual introduces basic video production techniques for use by instructors in developing videos for classroom presentations and for "make-up" classes. The topics covered include script writing, camera movements, lighting, sound, location vs. studio production, working with talent, video editing, and the design of the overall production.…

  7. CCD controller requirements for ground-based optical astronomy

    NASA Astrophysics Data System (ADS)

    Leach, Robert W.

    1996-03-01

    Astronomical CCD controllers are being called upon to operate a wide variety of CCDs in a range of ground-based astronomical applications. These include operation of several CCDs in the same focal plane (mosaics), simultaneous readout from two or four corners of the same CCD (multiple readout), readout of only a small region or number of regions of a single CCD (sub-image or region of interest readout), continuous readout of devices for drift scan observations, differential imaging for low contrast polarimetric or spectroscopic observations and very fast readout of small devices for wavefront sensing in adaptive optics systems. These applications all require that the controller electronics not contribute significantly to the readout noise of the CCD, that the dynamic range of the CCD be fully sampled (except for wavefront sensors), that the CCD be read out as quickly as possible from one or more readout ports, and that considerable flexibility in readout modes (binning, skipping and signal sampling) and device format exist. A further requirement imposed by some institutions is that a single controller design be used for all their CCD instruments to minimize maintenance and development efforts. A controller design recently upgraded to meet these requirements is reviewed. It uses a sequencer built with a programmable DSP to provide user flexibility combined with fast 16-bit A/D converters on a programmable video processor chain to provide either fast or slow readouts.

  8. Video Parsing and Browsing Using Compressed Data

    Microsoft Academic Search

    Hongjiang Zhang; Chien Yong Low; Stephen W. Smoliar

    1995-01-01

    Parsing video content is an important first step in the video indexing process. This paper presents algorithms to automate the video parsing task, including partitioning a source video into clips and classifying those clips according to camera operations, using compressed video data. We have developed two algorithms and a hybrid approach to partitioning video data compressed according to the JPEG

  9. STIS CCD Hot Pixel Annealing

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2013-10-01

    The purpose of this activity is to repair radiation-induced hot pixel damage to the STIS CCD by warming the CCD to the ambient instrument temperature and annealing radiation-damaged pixels. Radiation damage creates hot pixels in the STIS CCD Detector. Many of these hot pixels can be repaired by warming the CCD from its normal operating temperature near -83 C to the ambient instrument temperature (+5 C) for several hours. The number of hot pixels repaired is a function of annealing temperature. The effectiveness of the CCD hot pixel annealing process is assessed by measuring the dark current behavior before and after annealing and by searching for any window contamination effects.
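    The before/after assessment described above amounts to counting pixels whose dark rate exceeds a threshold in dark frames taken before and after the anneal. A minimal sketch of that bookkeeping, with synthetic frames and an illustrative threshold (not STIS calibration values):

    ```python
    # Sketch of hot-pixel counting before and after an anneal. The frames,
    # injected hot pixels, and 0.1 threshold are all illustrative.
    import numpy as np

    def hot_pixels(dark_frame, threshold):
        """Number of pixels whose dark rate exceeds the threshold."""
        return int(np.count_nonzero(dark_frame > threshold))

    rng = np.random.default_rng(1)
    before = rng.normal(0.01, 0.002, size=(64, 64))   # nominal dark frame
    before[5, 7] = 1.5                                 # inject two hot pixels
    before[20, 33] = 0.9
    after = before.copy()
    after[5, 7] = 0.01                                 # anneal repairs one of them

    print(hot_pixels(before, 0.1), hot_pixels(after, 0.1))  # -> 2 1
    ```

    The ratio of repaired to total hot pixels gives the anneal effectiveness that the calibration program tracks over time.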

  10. Zero Noise CCD

    NASA Astrophysics Data System (ADS)

    Gach, J.-L.; Darson, D.; Guillaume, C.; Goillandeau, M.; Boissin, O.; Boulesteix, J.; Cavadore, C.

    We present a completely new technique to read out CCDs, which can achieve much lower noise than the classical techniques used since the 1970s. This technique is based on digital analysis of the CCD's output signal instead of analog filtering, coupled to an original filtering method. Despite several attempts carried out in the past to implement digital Correlated Double Sampling (CDS), this is the first time that a radical improvement in readout noise performance is shown. Developed with this noise level improvement in mind, the zero noise CCD concept is presented. This is highly interesting for low light level conditions, where the detector works in the readout noise regime and not in the photon noise regime. This is the case particularly when carrying out medium- to high-resolution spectroscopy, or multiplex (scanning) observations.

  11. Wide-field CCD imaging at CFHT: the MOCAM example

    E-print Network

    J. -C. Cuillandre; Y. Mellier; J. -P. Dupin; P. Tilloles; R. Murowinski; D. Crampton; R. Wooff; G. A. Luppino

    1996-09-18

    We describe a new 4096x4096 pixel CCD mosaic camera (MOCAM) available at the prime focus of the Canada-France-Hawaii Telescope (CFHT). The camera is a mosaic of four 2048x2048 Loral frontside-illuminated CCDs with 15 µm pixels, providing a field of view of 14'x14' at a scale of 0.21''/pixel. MOCAM is equipped with B, V, R and I filters and has demonstrated image quality of 0.5''-0.6'' FWHM over the entire field. MOCAM will also be used with the CFHT adaptive optics bonnette and will provide a field of view of 90'' at a scale of 0.02''/pixel. MOCAM works within the CFHT Pegasus software environment and observers familiar with this system require no additional training to use this camera effectively. The technical details, the performance and the first images obtained on the telescope with MOCAM are presented. In particular, we discuss some important improvements with respect to the standard single-CCD FOCAM camera, such as multi-output parallel readout and dynamic anti-blooming. We also discuss critical technical issues concerning future wide-field imaging facilities at the CFHT prime focus in light of our experience with MOCAM and our recent experience with the even larger UH 8192x8192 pixel CCD mosaic camera.

  12. Formation mechanism and a universal period formula for the CCD moiré.

    PubMed

    Junfei, Li; Youqi, Zhang; Jianglong, Wang; Yang, Xiang; Zhipei, Wu; Qinwei, Ma; Shaopeng, Ma

    2014-08-25

    The moiré technique is often used to measure surface morphology and deformation fields. CCD moiré is a special kind of moiré produced when a digital camera is used to capture periodic grid structures, such as gratings. Unlike ordinary moiré setups with two gratings, CCD moiré requires only one grating, but its formation mechanism is not fully understood and a high-quality CCD moiré pattern is hard to achieve. In this paper, the formation mechanism of a CCD moiré pattern, based on the imaging principle of a digital camera, is analyzed and a way of simulating the pattern is proposed. A universal period formula is also proposed, and the validity of the simulation and formula is verified by experiments. The proposed model is shown to be an efficient guide for obtaining high-quality CCD moiré patterns. PMID:25321292
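    The paper's universal period formula is not reproduced in this record, but the classical beat relation between the grating pitch and the sensor's effective sampling pitch conveys the flavor of such a formula. The sketch below is an assumption based on that classical two-pitch relation, not the authors' model:

    ```python
    def moire_period(p_grating, p_sensor):
        """Beat period of two superposed pitches (same units as the inputs).

        Classical relation: 1/p_moire = |1/p_grating - 1/p_sensor|.
        """
        if p_grating == p_sensor:
            return float("inf")  # pitches match: no moire fringes form
        return (p_grating * p_sensor) / abs(p_grating - p_sensor)

    # A 10.0 um grating imaged at a 9.5 um effective pixel pitch
    # produces fringes with a 190 um period.
    period = moire_period(10.0, 9.5)  # 190.0
    ```

    The closer the two pitches, the longer the fringe period, which is why CCD moiré is so sensitive to the camera's magnification.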

  13. Colorized linear CCD data acquisition system with automatic exposure control

    NASA Astrophysics Data System (ADS)

    Li, Xiaofan; Sui, Xiubao

    2014-11-01

    Colorized linear cameras deliver superb color fidelity at the fastest line rates in industrial inspection. Their RGB trilinear sensor eliminates image artifacts by placing a separate row of pixels for each color on a single sensor, and its advanced design minimizes the distance between rows to reduce artifacts due to synchronization. In this paper, a high-speed colorized linear CCD data acquisition system was designed around the linear CCD sensor µPD3728. The hardware and software design of the system, based on an FPGA, is introduced and the design of the functional modules is described. The whole system is composed of a CCD driver module, a data buffering module, a data processing module and a computer interface module. The image data are transferred to the computer over a Camera Link interface. A new method automatically adjusts the exposure time of the linear CCD: the integration time is controlled by the program, adapts to different illumination intensities under FPGA control, and responds quickly to brightness changes. The data acquisition system also offers programmable gains and offsets for each color, and image quality can be improved after calibration in the FPGA. The design has high expansibility and application value and can be used in many applications.
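    The record does not give the paper's exposure-control law, but a common approach (assumed here, not taken from the paper) exploits the linearity of CCD response in integration time: scale the integration time by the ratio of the target mean level to the measured mean level, clamped to the device limits:

    ```python
    def adjust_exposure(t_int_ms, mean_level, target=128.0,
                        t_min_ms=0.05, t_max_ms=20.0):
        """Proportional auto-exposure update for a linear sensor.

        mean_level: mean pixel value of the last line (0..255 scale).
        Returns the next integration time in milliseconds, clamped.
        """
        if mean_level <= 0:
            return t_max_ms          # too dark to estimate: open up fully
        t_new = t_int_ms * (target / mean_level)
        return max(t_min_ms, min(t_max_ms, t_new))

    # Simulated linear sensor: level = 40 counts per ms of integration.
    t = adjust_exposure(1.0, 40.0)   # 3.2 ms
    level = 40.0 * t                 # target of 128 reached in one step
    ```

    In hardware the same update would be computed by the FPGA between line transfers, with the clamps matching the sensor's minimum and maximum integration times.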

  14. Cameras for digital microscopy.

    PubMed

    Spring, Kenneth R

    2013-01-01

    This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in the technology of detectors. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements) which simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulties of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. PMID:23931507

  15. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high-cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and it has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed & binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can either be triggered directly by a GPS timeserver/frequency generator or be internally triggered. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard `FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PyRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and observations made.

  16. CCD photonic system for rail width measurement

    NASA Astrophysics Data System (ADS)

    Ryabichenko, Roman B.; Popov, Sergey V.; Smoleva, Olga S.

    1999-10-01

    At present, in the Moscow metro a track inspection vehicle, a defectoscope and portable measurement instruments are used to measure the rail profile and the condition of the track. The track inspection vehicle measures 8 parameters, such as rail height, width, lip flow, cant, gauge and rail identification. The main drawback of the existing track control devices is their contact mode of measurement, which does not provide the required accuracy during the movement of the track inspection vehicle. This drawback can be eliminated using the non-contact photonic system (NPS). The NPS consists of four special digital CCD cameras and four lasers (two cameras and two lasers on each rail), rigidly connected together and mounted underneath the rail inspection vehicle in such a manner that the viewing angles and the distances from the cameras to the railhead remain fixed during movement. A special processor is included at the output of each camera. It performs preliminary processing of the stripe image on the appropriate side of the rail and then codes (compresses) and transfers the data to the central computer. The central computer executes the rail profile restoration and its comparison with the pattern of the rail on the particular section of the track.

  17. Practical performance evaluation of a 10k × 10k CCD for electron cryo-microscopy

    PubMed Central

    Bammes, Benjamin E.; Rochat, Ryan H.; Jakana, Joanita; Chiu, Wah

    2011-01-01

    Electron cryo-microscopy (cryo-EM) images are commonly collected using either charge-coupled devices (CCD) or photographic film. Both film and the current generation of 16-megapixel (4k × 4k) CCD cameras have yielded high-resolution structures. Yet, despite the many advantages of CCD cameras, more than twice as many structures of biological macromolecules have been published in recent years using photographic film. The continued preference for film, especially for subnanometer-resolution structures, may be partially influenced by the finer sampling and larger effective specimen imaging area offered by film. Large-format digital cameras may finally allow CCDs to overtake film as the preferred detector for cryo-EM. We have evaluated a 111-megapixel (10k × 10k) CCD camera with a 9 µm pixel size. The spectral signal-to-noise ratios of low-dose images of carbon film indicate that this detector is capable of providing signal up to at least 2/5 of the Nyquist frequency that is potentially retrievable for 3-D reconstructions of biological specimens, resulting in more than double the effective specimen imaging area of existing 4k × 4k CCD cameras. We verified our estimates using frozen-hydrated ε15 bacteriophage as a biological test specimen with a previously determined structure, yielding a ~7 Å resolution single-particle reconstruction from only 80 CCD frames. Finally, we explored the limits of current CCD technology by comparing the performance of this detector to various CCD cameras used for recording data yielding subnanometer-resolution cryo-EM structures submitted to the Electron Microscopy Data Bank (http://www.emdatabank.org/). PMID:21619932

  18. Camera Obscura

    NSDL National Science Digital Library

    Mr. Engelman

    2008-10-28

    Before photography was invented there was the camera obscura, useful for studying the sun, as an aid to artists, and for general entertainment. What is a camera obscura and how does it work? Camera = Latin for room; Obscura = Latin for dark. But what is a Camera Obscura? The Magic Mirror of Life. Illustrations include a French drawing camera with supplies and drawing camera obscuras with a lens at the top. Read the first three paragraphs of this article. Under the portion Early Observations and Use in Astronomy you will find the answers to the ...

  19. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: The CCD220 detector. ESO PR Photo 22b/09: The OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. 
The new generation instruments require these corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of the imperfect operation of any physical electronic device, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence; Observatoire Astronomique de Marseille Prov

  20. Imaging sensitivity of three kind of high-sensitivity imaging cameras under short-pulsed light illumination

    Microsoft Academic Search

    Hideyuki TAKAHASHI; Kouichi SAWADA; Koki ABE; Yoshimi TAKAO; Kazutoshi WATANABE

    Abstract: As part of the development of an optical system that enables us to precisely observe negatively phototactic fish in situ, the characteristics of three different types of high-sensitivity camera were investigated under short-pulsed light illumination of different colors. The three camera types were an Image Intensifier coupled to a CCD camera, an EB-CCD camera, and a HARP camera and

  1. A Shaped Temporal Filter Camera

    Microsoft Academic Search

    Martin Fuchs; Tongbo Chen; Oliver Wang; Ramesh Raskar; Hans-Peter Seidel; Hendrik P. A. Lensch

    2009-01-01

    Digital movie cameras only perform a discrete sampling of real-world imagery. While spatial sampling effects are well studied in the literature, there has not been as much work with regard to temporal sampling. As cameras get faster and faster, the need for conventional frame-rate video that matches the abilities of human perception remains. In this article, we
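    The idea behind a shaped temporal filter can be sketched (as an assumption about the general technique, not the authors' implementation) by combining frames captured at a high rate with arbitrary weights, so the synthesized exposure's temporal response is the weight profile rather than a plain box:

    ```python
    def shaped_exposure(frames, weights):
        """Weighted sum of co-registered high-speed frames (flat pixel lists).

        A uniform weight vector reproduces an ordinary box-filter exposure;
        other profiles (tent, Gaussian, ...) shape the temporal response.
        """
        assert frames and len(frames) == len(weights)
        n = len(frames[0])
        return [sum(w * f[i] for f, w in zip(frames, weights)) for i in range(n)]

    # Three 2-pixel frames blended with a normalized tent profile.
    out = shaped_exposure([[0, 1], [4, 1], [8, 1]], [0.25, 0.5, 0.25])  # [4.0, 1.0]
    ```

    Normalizing the weights to sum to one preserves the brightness of static scene content while reshaping how motion is integrated over the exposure window.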

  2. Panoramic video in video-mediated education

    NASA Astrophysics Data System (ADS)

    Ouglov, Andrei; Hjelsvold, Rune

    2005-01-01

    This paper discusses the use of panoramic video and its benefits in video-mediated education. A panoramic view is generated by covering the blackboard with two or more cameras and then stitching the captured videos together. This paper describes the properties and advantages of multi-camera, panoramic video compared to single-camera approaches. One important difference between panoramic video and regular video is that the former has a wider field of view (FOV). As a result, the blackboard covers a larger part of the video screen and the information density is increased. Most importantly, the size of the letters written on the blackboard is enlarged, which improves the students' ability to clearly read what is written on the blackboard. The panoramic view also allows students to focus their attention on different parts of the blackboard in the same way they would be able to in the classroom. This paper also discusses the results from a study among students in which a panoramic view was tested against single-camera views. The study indicates that the students preferred the panoramic view. The students also suggested improvements that could make panoramic video even more beneficial.

  3. Panoramic video in video-mediated education

    NASA Astrophysics Data System (ADS)

    Ouglov, Andrei; Hjelsvold, Rune

    2004-12-01

    This paper discusses the use of panoramic video and its benefits in video-mediated education. A panoramic view is generated by covering the blackboard with two or more cameras and then stitching the captured videos together. This paper describes the properties and advantages of multi-camera, panoramic video compared to single-camera approaches. One important difference between panoramic video and regular video is that the former has a wider field of view (FOV). As a result, the blackboard covers a larger part of the video screen and the information density is increased. Most importantly, the size of the letters written on the blackboard is enlarged, which improves the students' ability to clearly read what is written on the blackboard. The panoramic view also allows students to focus their attention on different parts of the blackboard in the same way they would be able to in the classroom. This paper also discusses the results from a study among students in which a panoramic view was tested against single-camera views. The study indicates that the students preferred the panoramic view. The students also suggested improvements that could make panoramic video even more beneficial.

  4. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  5. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  6. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
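    The quoted pixel scales and detector format can be cross-checked with small-angle arithmetic: multiplying the 1024-pixel width by the per-pixel scale approximates the FOV. This simple product is only an estimate (it slightly overestimates the quoted 45-degree Navcam FOV, since it ignores the lens's distortion profile):

    ```python
    import math

    def approx_fov_deg(n_pixels, scale_mrad_per_px):
        """Small-angle FOV estimate: pixels x (mrad/pixel), in degrees."""
        return math.degrees(n_pixels * scale_mrad_per_px * 1e-3)

    navcam = approx_fov_deg(1024, 0.82)   # ~48 deg (quoted: 45-degree FOV)
    hazcam = approx_fov_deg(1024, 2.1)    # ~123 deg (quoted: 124-degree FOV)
    ```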

  7. CCD imaging sensors

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

    1989-01-01

    A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

  8. Design of High Resolution High Sensitivity EMCCD Camera

    Microsoft Academic Search

    Shaohua Yang; Ming'an Guo; Binkang Li; Jingtao Xia; Qunshu Wang; Fengrong Sun

    2012-01-01

    A high-resolution and high-sensitivity camera was developed using a back-illuminated, frame-transfer, on-chip electron-multiplying gain CCD (EMCCD) with 1024x1024 pixels, the CCD201 from E2V Technologies. The CCD timing, generated by a CPLD, provides the normal EMCCD driver timing via a vertical clock driver chip, the EL7156. A programmable control voltage power source is adopted for the electron multiplying voltage
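    The multiplication gain that the programmable voltage controls is commonly modeled as (1 + p)^N for N gain-register stages with per-stage impact-ionization probability p. The stage count and probability below are illustrative assumptions, not CCD201 datasheet values:

    ```python
    def em_gain(p_stage, n_stages):
        """Mean EM register gain for per-stage multiplication probability p."""
        return (1.0 + p_stage) ** n_stages

    # Roughly 1% per stage over 600 stages yields a gain of a few hundred;
    # in practice p (hence the gain) is set by the multiplication voltage.
    g = em_gain(0.01, 600)
    ```

    Because the gain is exponential in the per-stage probability, small changes in the multiplication voltage swing the overall gain strongly, which is why that supply is made programmable and tightly regulated.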

  9. Estimating ink density from colour camera RGB values by the local kernel ridge regression

    Microsoft Academic Search

    Antanas Verikas; Marija Bacauskiene

    2008-01-01

    We present an option for CCD colour camera based ink density measurements in newspaper printing. To solve the task, first a reflectance spectrum is reconstructed from the CCD colour camera RGB values, and then a well-known relation between ink density and the reflectance spectrum of the sample being measured is used to compute the density. To achieve an acceptable spectral
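    The "well-known relation" referred to is, in essence, optical density as the negative decadic logarithm of reflectance (in densitometry the reflectance is integrated against a status filter response first; the scalar form below is a simplification):

    ```python
    import math

    def optical_density(reflectance):
        """D = -log10(R) for a reflectance R in (0, 1]."""
        if not 0.0 < reflectance <= 1.0:
            raise ValueError("reflectance must be in (0, 1]")
        return -math.log10(reflectance)

    d = optical_density(0.1)   # 1.0: 10% reflectance is one density unit
    ```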

  10. ProbeSight: Video Cameras on an Ultrasound Probe for Computer Vision of the Patient's Exterior (Galeotti, et al., http://www.ncigt.org/pages/IGT_Workshop_2011)

    E-print Network

    Stetten, George

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to that original medical imaging modality, direct human vision.

  11. Laser-dazzling effects on TV cameras: analysis of dazzling effects and experimental parameters weight assessment

    NASA Astrophysics Data System (ADS)

    Durécu, Anne; Bourdon, Pierre; Vasseur, Olivier

    2007-10-01

    Imaging systems are widespread observation tools used to fulfil various functions such as detection, recognition, identification and video tracking. These devices can be dazzled by intensive light sources, e.g. lasers. In order to avoid such a disturbance, dazzling effects in TV cameras must be better understood. In this paper we studied the influence of different parameters on laser dazzling. The experiments were performed using a black-and-white TV-CCD camera, dazzled by a nanosecond frequency-doubled Nd:YAG laser. Different dazzling conditions were studied by varying, for instance, the laser repetition rate, the pulse energy or the settings of the camera. We proceeded in two steps. First, the different dazzling effects were analyzed and classified by their underlying mechanism. Pure optical phenomena like multiple reflections, scattering and diffraction were discriminated from electronic effects related to charge transfer processes. Interactions between the laser repetition rate and the camera frequency or the camera exposure time were also observed. In a second step, experiments were carried out for different dazzling conditions. It was then possible to assess the weight of each experimental parameter on the dazzling effects. The analysis of these quantitative results contributes to a better understanding of laser dazzling, useful for designing efficient means to protect imaging systems.

  12. Performance of the Advanced Camera for Surveys CCDs after two years on orbit.

    E-print Network

    Sirianni, Marco; Johns Hopkins University, Department of Physics and Astronomy, Advanced Camera for Surveys Team

    The Advanced Camera for Surveys comprises three channels: the Wide Field Camera (WFC), designed for deep near-IR survey imaging programs; the High Resolution Camera (HRC) in the visible; and the Solar Blind Camera (SBC), a far-UV imager. The WFC and HRC employ CCD detectors.

  13. ADAPTIVE DEBLURRING OF SURVEILLANCE VIDEO SEQUENCES THAT DETERIORATE OVER TIME

    E-print Network

    Fisher, Bob

    A method is presented to achieve deblurring in video surveillance recordings. The method uses multiple consecutive frames from surveillance cameras whose quality deteriorates due to dirt or water that gathers on the camera's lens.

  14. CCD evaluation for estimating measurement precision in lateral shearing interferometry

    NASA Astrophysics Data System (ADS)

    Liu, Bingcai; Li, Bing; Tian, Ailing; Li, Baopeng

    2013-06-01

    Because of its larger measurement range for wave-front deviation and no need for a reference flat, lateral shearing interferometry based on four-step phase shifting has been widely used for wave-front measurement. After installation, shearing interferograms are captured by a CCD camera, and the actual phase data of the wave-front can be calculated by the four-step phase-shift algorithm and phase unwrapping. In this processing, the pixel resolution and gray scale of the CCD camera are vital factors for the measurement precision. In this paper, based on the structure of a phase-shifting lateral shearing interferometer, the effect of pixel resolution on measurement precision is discussed, and the measurement precision obtained with 8-bit, 12-bit and 16-bit gray scales is illustrated by simulation.
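    The four-step phase-shift algorithm mentioned here is standard: with frame intensities I_k = a + b·cos(φ + (k−1)·π/2), the wrapped phase follows from an arctangent of intensity differences, independent of the background a and modulation b. A minimal sketch:

    ```python
    import math

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four frames shifted by pi/2 each.

        I1 = a + b cos(phi), I2 = a - b sin(phi),
        I3 = a - b cos(phi), I4 = a + b sin(phi)
        => phi = atan2(I4 - I2, I1 - I3).
        """
        return math.atan2(i4 - i2, i1 - i3)

    # Synthesize the four frames for a known phase and recover it.
    a, b, phi = 100.0, 50.0, 0.7
    frames = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
    recovered = four_step_phase(*frames)  # 0.7 (up to floating-point error)
    ```

    The finite gray scale of the CCD enters exactly here: quantizing the four intensities before the arctangent is what limits the recoverable phase precision.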

  15. LSST camera readout chip ASPIC: test tools

    NASA Astrophysics Data System (ADS)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

    The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The correlated double sampling technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called dual-slope integrator method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization stage. Other requirements are that each chip should process the output of half a CCD, that is, 8 channels, and should operate at 173 K. A specific back-end board has been designed especially for lab test purposes. It manages the clock signals, digitizes the analog differential outputs of the ASPIC and stores data into a memory. It contains 8 ADCs (18 bits), 512 kwords of memory and a USB interface. An FPGA manages all signals from/to all components on the board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers permit various test parameters of the ASPIC to be defined. A LabVIEW GUI allows these registers to be loaded or updated and proper operation to be checked. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperature. At present, the ASPIC, back-end board and CCD detectors are being integrated to perform a characterization of the whole readout chain.
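    The correlated double sampling that the ASPIC implements can be sketched numerically (a toy model of the principle, not the dual-slope analog circuit itself): each pixel readout samples the reset pedestal and the signal level, and their difference cancels the reset offset and any slow drift common to both sampling windows:

    ```python
    def cds(pedestal_samples, signal_samples):
        """Correlated double sampling for one pixel readout.

        Averages the reset-level and signal-level samples and returns their
        difference; offsets common to both windows (reset noise, drift) cancel.
        """
        pedestal = sum(pedestal_samples) / len(pedestal_samples)
        level = sum(signal_samples) / len(signal_samples)
        return pedestal - level   # collected charge pulls the output below reset

    # The same pixel charge (200 counts) read with two different
    # common-mode offsets gives the same result: the offset is rejected.
    a = cds([1000.0, 1000.0], [800.0, 800.0])   # 200.0
    b = cds([1050.0, 1050.0], [850.0, 850.0])   # 200.0
    ```

    The dual-slope integrator realizes the same subtraction in the analog domain by integrating the pedestal with one polarity and the signal with the other before digitization.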

  16. Single-chip CCD waveform generator and sequencer

    NASA Astrophysics Data System (ADS)

    French, Marcus J.; Waltham, Nicholas R.; Newton, G. M.; Wade, Richard

    1998-07-01

    There are a number of application areas for CCDs in ground- and space-based astronomy and earth observation which could benefit from compact but versatile control electronics. These include visible imaging and spectroscopy, auto-guiding, star tracking and wavefront sensing. We describe here our design of an Application Specific Integrated Circuit (ASIC) which integrates the complete functionality of a dedicated programmable waveform generator and waveform sequencer to provide a CCD controller on a single chip. The ASIC can sustain waveform state changes at a clock rate of up to 40 MHz, and can implement complete sequence changes on a frame-by-frame basis. The ASIC is designed specifically to provide programmable CCD waveform generation and sequencing, and thus yields significant shrinkage in component count, circuit complexity, PCB size, and power dissipation compared to previous DSP or general-purpose microprocessor-based design solutions. More compact and lightweight cameras are thus realized without compromising the ability to program any complexity of CCD waveform patterns and waveform sequences, such as multiple-window and/or pixel-binning readout formats. A second-generation ASIC is to be fabricated on a radiation-hard silicon process for use in space-borne CCD camera systems. Applications of our prototype ASIC will be presented along with future development plans.

  17. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image keypoints from different cameras, after which the camera pose is estimated and refined. A homography matrix is employed to compute the overlapping pixels, and a boundary resampling algorithm is then applied to blend the images. Simulation results demonstrate the efficiency of our method.
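    The registration step above hinges on estimating a homography from matched keypoint pairs. A minimal sketch of that estimation (a plain direct linear transform on already-matched points; the feature matching itself, done with SURF in the paper, is assumed here to have produced the correspondences):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: solve for H (3x3, h33 = 1) such that
    dst ~ H @ src in homogeneous coordinates, from >= 4 matched points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

# Synthetic check: project a grid of points through a known homography,
# then recover it from the correspondences alone.
H_true = np.array([[1.2, 0.1, 30.0],
                   [0.05, 0.95, -10.0],
                   [1e-4, 2e-4, 1.0]])
src = [(x, y) for x in (0, 100, 200) for y in (0, 150)]
dst = [apply_h(H_true, p) for p in src]

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

    Once H is known, the overlapping pixels of one camera can be mapped into the frame of another, which is exactly what the blending step consumes.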

  19. NSTX Tangential Divertor Camera

    SciTech Connect

    A.L. Roquemore; Ted Biewer; D. Johnson; S.J. Zweben; Nobuhiro Nishino; V.A. Soukhanovskii

    2004-07-16

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in the turbulence-driven particle fluxes. To visualize the turbulence and the associated impurity line emission near the lower x-point region, a new tangential observation port has recently been installed on NSTX. A reentrant sapphire window with a movable in-vessel mirror images the divertor region from the center stack out to R ~ 80 cm and views the x-point for most plasma configurations. A coherent fiber-optic bundle transmits the image through a remotely selected filter to a fast camera, for example a Photron CCD camera operating at up to 40,500 frames/sec. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. The edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  20. CCD Monitoring of Blazar OJ287 from MIRO

    NASA Astrophysics Data System (ADS)

    Chandra, Sunil; Ganesh, Shashikiran; Baliyan, K. S.

    2012-04-01

    A sample of blazars is regularly monitored using a CCD camera, the Near Infrared Camera & Spectrograph (NICMOS/NICS), and a photo-polarimeter as back-end instruments on the 1.2 m telescope at the Mt. Abu Infrared Observatory (MIRO). An automated robotic optical telescope (0.5 m) at the same site is dedicated to the long-term monitoring of blazars and is also part of the blazar monitoring program. Following ATel #4020 by Sanatengelo et al., we carried out differential photometry of the data taken on 16 March 2012 and 26 March 2012.

  1. CCD measurements of double and multiple stars in Belgrade

    NASA Astrophysics Data System (ADS)

    Popovic, G. M.; Pavlovic, R.

    1997-06-01

    CCD measurements of 123 double stars, made with an ST-6 camera attached to the Zeiss 65/1055 cm refractor of the Belgrade Observatory, are reported. Table 1 is only available in electronic form; Table 2 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

  2. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  3. Study on CCD image compression and mass storage

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Zhai, Linpei; Xiu, Jihong; Zhao, Xiuying

    2006-01-01

    When a CCD camera captures massive images in the air, it is very important to compress and store the CCD images promptly and reliably. This paper describes a CCD image compression and mass storage system. The first part is image compression: we introduce a reduced-memory still-image compression algorithm based on Listless Zerotree Coding (LZC). Compared with SPIHT, the approach significantly reduces the memory requirement without degrading the quality of the reconstructed image. The second part is image mass storage: the system uses special hard-disk storage devices that achieve fast data transfer to SCSI (Small Computer System Interface) devices even when operating independently of a PC. In a fast data acquisition and storage system, data storage is a key technology. The usual approach is to buffer the data in large memories and then process and store it after acquisition completes. With that method, the continuous acquisition time is limited by the memory capacity, which is insufficient for CCD image storage on many occasions, and the cost rises steeply as the memory capacity is increased. The approach in this paper, writing the data stream directly to fast disks, is therefore the better choice in terms of storage capacity, read/write speed, and unit cost. Experimental results show that the system compresses and stores CCD images correctly, achieving its intended purpose.

  4. The Dark Energy Camera (DECam)

    NASA Astrophysics Data System (ADS)

    DePoy, D. L.; Abbott, T.; Annis, J.; Antonik, M.; Barceló, M.; Bernstein, R.; Bigelow, B.; Brooks, D.; Buckley-Geer, E.; Campa, J.; Cardiel, L.; Castander, F.; Castilla, J.; Cease, H.; Chappa, S.; Dede, E.; Derylo, G.; Diehl, H. T.; Doel, P.; DeVicente, J.; Estrada, J.; Finley, D.; Flaugher, B.; Gaztanaga, E.; Gerdes, D.; Gladders, M.; Guarino, V.; Gutierrez, G.; Hamilton, J.; Haney, M.; Holland, S.; Honscheid, K.; Huffman, D.; Karliner, I.; Kau, D.; Kent, S.; Kozlovsky, M.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Leger, F.; Lin, H.; Martinez, G.; Martinez, M.; Merritt, W.; Mohr, J.; Moore, P.; Moore, T.; Nord, B.; Ogando, R.; Olsen, J.; Onal, B.; Peoples, J.; Qian, T.; Roe, N.; Sanchez, E.; Scarpine, V.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Selen, M.; Shaw, T.; Simaitis, V.; Slaughter, J.; Smith, C.; Spinka, H.; Stefanik, A.; Stuermer, W.; Talaga, R.; Tarle, G.; Thaler, J.; Tucker, D.; Walker, A.; Worswick, S.; Zhao, A.

    2008-07-01

    We describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). DECam includes a large mosaic CCD focal plane, a five element optical corrector, five filters (g,r,i,z,Y), and the associated infrastructure for operation in the prime focus cage. The focal plane consists of 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view. The CCDs will be 250 micron thick fully-depleted CCDs that have been developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled for 2010.

  5. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image.
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
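    The lookup-table unwarping described above can be sketched as follows (a deliberately simplified polar unwarp assuming an ideal fisheye centered at (cx, cy); the real system's calibration-derived mapping and cylindrical-geometry correction are more involved, and all dimensions below are invented):

```python
import numpy as np

def build_unwarp_lut(out_h, out_w, cx, cy, r_min, r_max):
    """Precompute, for every pixel of the unwarped radial-view image,
    the source coordinates in the fisheye image. Rows sweep radius,
    columns sweep azimuth around the pipe wall."""
    radii = np.linspace(r_min, r_max, out_h)
    azimuths = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r, phi = np.meshgrid(radii, azimuths, indexing="ij")
    map_x = cx + r * np.cos(phi)   # source column for each output pixel
    map_y = cy + r * np.sin(phi)   # source row for each output pixel
    return map_x, map_y

def unwarp(frame, map_x, map_y):
    """Once the table exists, unwarping a frame is a cheap gather."""
    xi = np.clip(np.rint(map_x).astype(int), 0, frame.shape[1] - 1)
    yi = np.clip(np.rint(map_y).astype(int), 0, frame.shape[0] - 1)
    return frame[yi, xi]  # nearest-neighbour; real systems interpolate

map_x, map_y = build_unwarp_lut(64, 256, cx=320.0, cy=240.0,
                                r_min=50.0, r_max=200.0)
print(map_x.shape)  # (64, 256)
```

    Precomputing the table moves all the trigonometry out of the per-frame loop, which is the same motivation the article gives for its compiled lookup table.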

  6. LOSSLESS AUTHENTICATION OF MPEG-2 VIDEO Jessica Fridrich

    E-print Network

    Fridrich, Jessica

    LOSSLESS AUTHENTICATION OF MPEG-2 VIDEO. Rui Du (MTL Systems, Inc., Beavercreek), Jessica Fridrich. ... methods originally developed for the JPEG image format to digital video in the MPEG-2 format. Two new ... addressed. ... Analog surveillance video cameras, reconnaissance cameras, and industrial cameras ...

  7. Design and operational characteristics of a PV 001 image tube incorporated with EB CCD readout

    Microsoft Academic Search

    G. I. Bryukhnevich; Ilia N. Dalinenko; K. N. Ivanov; S. A. Kaidalov; G. A. Kuz'min; B. B. Moskalev; S. K. Naumov; E. V. Pischelin; Valdis E. Postovalov; Alexander M. Prokhorov; Mikhail Y. Schelev

    1991-01-01

    A luminescence screen was replaced with a thinned, backside-illuminated, electron bombarded (EB) CCD in a well-known PV 001 streak\\/shutter image converter tube. The tube was mounted into an experimental camera prototype for measurement of its main technical characteristics. Under EB CCD readout operation in a free-scanning, slow-speed mode, the overall system spatial resolution was higher than 40 lp\\/mm at 10%

  8. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent-duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm3. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. 
    A low-volume array of such penetrator cameras could be deployed from an aerobot or a spacecraft onto a comet or asteroid. A system of 20 of these penetrators could be designed and built in a 1- to 2-kg mass envelope. Possible future modifications of the camera penetrators, such as the addition of a chemical spray device, would allow the study of simple chemical reactions of reagents sprayed at the landing site by observing the resulting color changes. Zoom lenses also could be added for future use.

  9. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, increasingly, so-called action cameras mounted on sports equipment. Embedding videos in printed products, by generating QR codes and extracting representative pictures from the video stream in software, was the subject of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, their positions in the book, and the different design strategies compared to regular books.

  10. Temporally Consistent Multi-Class Video-Object Segmentation with the Video Graph-Shifts Algorithm

    E-print Network

    Corso, Jason J.

    Temporally Consistent Multi-Class Video-Object Segmentation with the Video Graph-Shifts Algorithm. ... which could benefit problems ranging from scene understanding and video surveillance to autonomous ... which only works well for videos obtained by static cameras, e.g. surveillance videos ...

  11. Experiential Sampling for Video Surveillance

    Microsoft Academic Search

    Jun Wang; M. S. Kankanhali; Weiqi Yan; Ramesh Jain

    2003-01-01

    Due to the decreasing costs and increasing miniaturization of video cameras, the use of digital-video-based surveillance as a tool for real-time monitoring is rapidly increasing. In this paper, we present a new methodology for real-time video surveillance based on experiential sampling. We use this framework to dynamically model the evolving attention in order to perform efficient monitoring. We exploit ...

  12. Forensic Video Reconstruction Larry Huston

    E-print Network

    Guestrin, Carlos

    Keywords: video retrieval, active storage, interactive search. ... As surveillance cameras ... approaches to surveillance video analysis rely on linear searches ... surveillance video recorders utilize extreme temporal and data compression to reduce storage requirements ...

  13. Pinhole Camera For Viewing Electron Beam Materials Processing

    NASA Astrophysics Data System (ADS)

    Rushford, M. C.; Kuzmenko, P. J.

    1986-10-01

    A very rugged, compact (4x4x10 inches), gas-purged "pinhole camera" has been developed for viewing electron-beam materials processing (e.g. melting or vaporizing metal). The video image is computer processed on an IBM PC, providing dimensional and temperature measurements of objects within the field of view. The pinhole-camera concept is similar to a TRW optics system for viewing into a coal combustor through a 2 mm hole. Gas is purged through the hole to repel particulates from the optical surfaces. In our system, light from the molten metal passes through the 2 mm pinhole, reflects off an aluminum-coated glass substrate, and passes through a window into a vacuum-tight container holding the camera and optics at atmospheric pressure. The mirror filters out x-rays, which pass through the Al layer and are absorbed in the glass mirror substrate. Since metallic coatings are usually reflective, the image quality is not severely degraded by the small amounts of vapor that overcome the gas purge and reach the mirror; coating thicknesses of up to 2 microns can be tolerated. The mirror is the only element needing occasional servicing. We used a telescope eyepiece as a convenient optical design, but with the traditional optical path reversed: the eyepiece images a scene through a small entrance aperture onto an image plane where a CCD camera is placed. Since the iris of the eyepiece is fixed and the scene intensity varies, it was necessary to employ a variable neutral-density filter for brightness control. Devices used for this purpose include a PLZT light valve from Motorola, mechanically rotated linear polarizer sheets, and nematic liquid-crystal light valves; these were placed after the mirror and entrance aperture but before the lens to operate as a voltage-variable neutral-density filter. The molten-metal surface temperature being viewed varies from 1200 to 4000 K. The resulting intensity change (at 488 nm with a 10 nm bandwidth) is seven orders of magnitude. 
    This intensity variation is reduced in contrast if the observation is made in a narrow band as far toward the red as high-intensity blooming will allow while still giving an observable picture. A three-eyepiece camera provides an image plane where photogray glass functions as a neutral-density filter only over the high-intensity portion of the image, thus reducing blooming. The system is enclosed in a water-cooled housing which can dissipate 15 watts/cm2, keeping the camera below 40 degrees Celsius. Single frames of video output are acquired for feature enhancement and location by a Data Translation DT2803 image-processing board housed in an IBM PC.
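    The quoted seven orders of magnitude follows directly from Planck's law at 488 nm over this temperature range, as a short check shows:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (an arbitrary scale is enough for a ratio)."""
    x = H * C / (wavelength_m * K_B * temp_k)
    return 1.0 / (wavelength_m ** 5 * (math.exp(x) - 1.0))

wavelength = 488e-9
ratio = planck_radiance(wavelength, 4000.0) / planck_radiance(wavelength, 1200.0)
print(f"{ratio:.2e}")   # about 3e7, i.e. roughly 7.5 orders of magnitude
```

    Being deep in the Wien regime at these temperatures, the ratio is dominated by the exponential term, which is why moving the observation band toward the red compresses the dynamic range.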

  14. Exposing Digital Forgeries in Interlaced and De-Interlaced Video

    E-print Network

    Farid, Hany

    Exposing Digital Forgeries in Interlaced and De-Interlaced Video. Weihong Wang, Student Member, IEEE, and Hany Farid, Member, IEEE. Abstract: With the advent of high-quality digital video cameras ... a growing number of video surveillance cameras is also giving rise to an enormous amount of video data ...

  15. Exposing Digital Forgeries in Video by Detecting Double MPEG Compression

    E-print Network

    Farid, Hany

    Keywords: Tampering, Digital Forensics. ... an ever-growing number of video surveillance cameras is giving rise to an enormous ... By some counts, the installation of video surveillance cameras ... has an estimated 4,000,000 video surveillance cameras, many of which are installed in public ...

  16. LSST Camera Electronics

    Microsoft Academic Search

    F. Mitchell Newcomer; S. Bailey; C. L. Britton; N. Felt; J. Geary; K. Hashimi; H. Lebbolo; Z. Ning; P. O'Connor; J. Oliver; V. Radeka; R. Sefri; V. Tocut; R. Van Berg

    2009-01-01

    The 3.2 Gpixel LSST camera will be read out by means of 189 highly segmented 4K x 4K CCDs. A total of 3024 video channels will be processed by a modular, in-cryostat electronics package based on two custom multichannel analog ASICs now in development. Performance goals of 5 electrons noise, 0.01% electronic crosstalk, and 80 mW power dissipation per channel

  17. Automatic fire detection system using CCD camera and Bayesian network

    Microsoft Academic Search

    Kwang-Ho Cheong; Byoung-Chul Ko; Jae-Yeal Nam

    2008-01-01

    This paper proposes a new vision-based fire detection method for real-life application. Most previous vision-based methods using color information and temporal variations of pixels produce frequent false alarms due to the use of many heuristic features. Plus, there is usually a computation delay for accurate fire detection. Thus, to overcome these problems, candidate fire regions are first detected using a

  18. Automatic fire detection system using CCD camera and Bayesian network

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Ko, Byoung-Chul; Nam, Jae-Yeal

    2008-02-01

    This paper proposes a new vision-based fire detection method for real-life application. Most previous vision-based methods using color information and temporal variations of pixels produce frequent false alarms due to the use of many heuristic features. Plus, there is usually a computation delay for accurate fire detection. Thus, to overcome these problems, candidate fire regions are first detected using a background model and color model of fire. Probabilistic models of fire are then generated based on the fact that fire pixel values in consecutive frames change constantly and these models are applied to a Bayesian Network. This paper uses a three-level Bayesian Network that contains intermediate nodes, and uses four probability density functions for evidence at each node. The probability density functions for each node are modeled using the skewness of the color red and three high frequency components obtained from a wavelet transform. The proposed system was successfully applied to various fire-detection tasks in real-world environments and effectively distinguished fire from fire-colored moving objects.
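    The evidence-combination step can be illustrated with a toy sketch (illustrative only: the Gaussian parameters, the two features, and the flat naive-Bayes combination below are invented stand-ins for the paper's four wavelet/skewness densities and three-level network):

```python
import math

def skewness(xs):
    """Sample skewness of a sequence of pixel values (e.g. the red channel)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / (m2 ** 1.5 + 1e-12)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fire_posterior(features, fire_params, nonfire_params, prior_fire=0.5):
    """Naive-Bayes combination of per-feature Gaussian likelihoods."""
    lf = prior_fire
    ln = 1.0 - prior_fire
    for x, (mf, sf), (mn, sn) in zip(features, fire_params, nonfire_params):
        lf *= gauss_pdf(x, mf, sf)
        ln *= gauss_pdf(x, mn, sn)
    return lf / (lf + ln)

# Hypothetical models: fire regions tend to show strongly skewed red values
# and large temporal variation; all numbers below are invented.
fire_params = [(1.5, 0.5), (0.8, 0.3)]     # (mean, sigma) per feature
nonfire_params = [(0.0, 0.5), (0.2, 0.3)]

flamelike = [1.4, 0.9]   # red-channel skewness, temporal-variation proxy
steady = [0.1, 0.15]

print(fire_posterior(flamelike, fire_params, nonfire_params) > 0.9)   # True
print(fire_posterior(steady, fire_params, nonfire_params) < 0.1)      # True
```

    The point of the probabilistic combination is exactly what the abstract argues: a fire-colored but temporally steady object scores well on the color evidence alone, yet its posterior collapses once the temporal evidence is multiplied in.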

  19. Toward Building a Robust and Intelligent Video Surveillance System: A Case Study

    E-print Network

    Wang, Yuan-Fang

    Toward Building a Robust and Intelligent Video Surveillance System: A Case Study. Edward Y. Chang. ... to construct and deploy multi-camera video surveillance systems. In line with this is the need for intelligent ... Video cameras are becoming a ubiquitous feature of modern life ...

  20. Data acquisition and control system for high-performance large-area CCD systems

    NASA Astrophysics Data System (ADS)

    Afanasieva, I. V.

    2015-04-01

    Astronomical CCD systems based on second-generation DINACON controllers were developed at the SAO RAS Advanced Design Laboratory more than seven years ago and since then have been in constant operation at the 6-meter and Zeiss-1000 telescopes. Such systems use monolithic large-area CCDs. We describe the software developed for the control of a family of large-area CCD systems equipped with a DINACON-II controller. The software suite serves for acquisition, primary reduction, visualization, and storage of video data, and also for the control, setup, and diagnostics of the CCD system.

  1. Spectral characterization of an ophthalmic fundus camera

    NASA Astrophysics Data System (ADS)

    Miller, Clayton T.; Bassi, Carl J.; Brodsky, Dale; Holmes, Timothy

    2010-02-01

    A fundus camera is an optical system designed to illuminate and image the retina while minimizing stray light and backreflections. Modifying such a device requires characterization of the optical path in order to meet the new design goals and avoid introducing problems. This work describes the characterization of one system, the Topcon TRC-50F, necessary for converting this camera from film photography to spectral imaging with a CCD. This conversion consists of replacing the camera's original xenon flash tube with a monochromatic light source and the film back with a CCD. A critical preliminary step of this modification is determining the spectral throughput of the system, from source to sensor, and ensuring there are sufficient photons at the sensor for imaging. This was done for our system by first measuring the transmission efficiencies of the camera's illumination and imaging optical paths with a spectrophotometer. Combining these results with existing knowledge of the eye's reflectance, a relative sensitivity profile is developed for the system. Image measurements from a volunteer were then made using a few narrowband sources of known power and a calibrated CCD. With these data, a relationship between photoelectrons/pixel collected at the CCD and narrowband illumination source power is developed.
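    The final photon-budget check described above can be sketched numerically (a rough sanity-check model; the power level, exposure time, pixel count, and quantum efficiency below are invented, not the paper's measured values):

```python
# Photoelectrons per pixel: N = (P * t / n_pixels) * QE * lambda / (h * c),
# i.e. optical energy per pixel divided by photon energy, times the QE.
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s

def photoelectrons_per_pixel(power_w, exposure_s, wavelength_m, qe, n_pixels):
    energy_per_pixel = power_w * exposure_s / n_pixels
    photon_energy = H * C / wavelength_m
    return qe * energy_per_pixel / photon_energy

# Hypothetical narrowband source: 1 microwatt reaching the sensor, spread
# over a 1-megapixel region, 100 ms exposure, QE = 0.5 at 550 nm.
n_e = photoelectrons_per_pixel(1e-6, 0.1, 550e-9, 0.5, 1_000_000)
print(round(n_e))  # roughly 1.4e5 photoelectrons per pixel
```

    A count like this, compared against the CCD's full-well capacity and read noise, is what decides whether a given monochromatic source power is sufficient for imaging.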

  2. Nighttime outdoor surveillance with mobile cameras Ferran Diego

    E-print Network

    Paris-Sud XI, Université de

    Night-time Outdoor Surveillance with Mobile Cameras. Ferran Diego, Georgios D. Evangelidis. Keywords: video surveillance, video synchronization, video alignment, change detection. Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change ...

  3. Data Acquisition & Transfer for Remote Video Surveillance

    E-print Network

    Abidi, Mongi A.

    Data Acquisition & Transfer for Remote Video Surveillance. Project in Lieu of Thesis for Master ... Video surveillance can be a potent tool for law enforcement. A very basic video surveillance ... components in the system. The PTZ camera, placed at ideal locations for surveillance, captures live video ...

  4. High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Stephenson, Owen; Clements, Reginald M.

    1992-06-01

    A fully solid state high-speed video analysis system is presented. It is based on the use of several independent charge-coupled device (CCD) imagers, each shuttered by a liquid crystal light valve. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording from exposure timing to image capture and storage to playback on a standard video monitor. A prototype with three CCD imagers and shutters has been built. It has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. For slower phenomena, recordings in continuous light are also possible by using the shutters themselves to control the exposure time. The system records full-screen black and white images with spatial resolution approaching that of standard television, at rates up to 5000 images per second.

  5. The Crimean CCD Telescope for the asteroid observations

    NASA Astrophysics Data System (ADS)

    Chernykh, N. S.; Rumyantsev, V. V.

    2002-09-01

    The old 64-cm Richter-Slefogt telescope (F = 90 cm) of the Crimean Astrophysical Observatory was reconstructed and equipped with the SBIG ST-8 CCD camera received from the Planetary Society under Eugene Shoemaker's Near Earth Object Grant, and the first observations of minor planets and comets were made with it. The CCD matrix of the ST-8 camera in the focus of our telescope covers a field of 52'.7 x 35'.1. A 120-second exposure reaches stars down to a limiting magnitude of 20.5 at S/N = 3. According to preliminary estimates, the telescope in its present state enables us to cover, during the year, a sky area of not more than 600 sq. deg. with threefold overlap. Automation of the telescope could increase the productivity to 20000 sq. deg. per year. Software for object localization, determination of image parameters, star identification, astrometric reduction, and the identification and cataloguing of asteroids has been developed. The first results obtained with the Crimean 64-cm CCD telescope are discussed.

  6. The Crimean CCD telescope for the asteroid observations

    NASA Astrophysics Data System (ADS)

    Chernykh, Nikolaj; Rumyantsev, Vasilij

    2002-11-01

    The old 64-cm Richter-Slefogt telescope (F = 90 cm) of the Crimean Astrophysical Observatory was reconstructed and equipped with the ST-8 CCD camera supplied by the Planetary Society as the Eugene Shoemaker Near Earth Object Grant. The first observations of minor planets and comets were made with the telescope in 2000. The CCD matrix of the ST-8 camera in the focus of our telescope covers a field of 52'.7 × 35'.1. With a 120-second exposure we record stars down to a limiting magnitude of 20.5 at S/N = 3. The first phase of the automation of the telescope was completed in May 2002. According to our estimates, the telescope will be able to cover a sky area of 20 square degrees per night with threefold overlap. Software for object localization, determination of image parameters, star identification, astrometric reduction, and the identification and cataloguing of asteroids has been developed. The first observational results obtained with the 64-cm CCD telescope are discussed.
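    The quoted field of view is consistent with the ST-8's KAF-1600 sensor (1530 x 1020 pixels of 9 μm; these figures come from the camera's public specifications, not from the abstracts) at the 90 cm focal length, as a quick plate-scale check shows:

```python
# Plate scale in arcsec/pixel: 206265 * pixel_size / focal_length
ARCSEC_PER_RADIAN = 206265.0

pixel_size_m = 9e-6       # ST-8 (KAF-1600) pixel pitch
focal_length_m = 0.90     # F = 90 cm Richter-Slefogt telescope
n_x, n_y = 1530, 1020     # KAF-1600 pixel format

scale = ARCSEC_PER_RADIAN * pixel_size_m / focal_length_m  # arcsec per pixel
fov_x_arcmin = n_x * scale / 60.0
fov_y_arcmin = n_y * scale / 60.0
print(round(scale, 2), round(fov_x_arcmin, 1), round(fov_y_arcmin, 1))
# 2.06 52.6 35.1 -- matching the quoted 52'.7 x 35'.1
```

    At roughly 2" per pixel the camera slightly undersamples typical seeing, a common trade-off for survey-style asteroid work where sky coverage matters more than resolution.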

  7. Candid Camera: Catch 'Em in Action.

    ERIC Educational Resources Information Center

    Raschke, Donna; And Others

    1985-01-01

    A concealed video camera can record learning disabled students' behavior and provide a nonjudgmental way for them to see how they appear to others. Such an approach can include a positive emphasis on redirecting energy as well. (CL)

  8. Digitized video subject positioning and surveillance system for PET

    SciTech Connect

    Picard, Y.; Thompson, C.J. [McGill Univ., Montreal, Quebec (Canada)]

    1995-08-01

    Head motion is a significant contribution to the degradation of image quality in Positron Emission Tomography (PET) studies. Images from different studies must also be realigned digitally to be correlated when the subject position has changed. These constraints could be eliminated if the subject's head position could be monitored accurately. The authors have developed a video camera-based surveillance system to monitor the head position and motion of subjects undergoing PET studies. The system consists of two CCD (charge-coupled device) cameras placed orthogonally such that both face and profile views of the subject's head are displayed side by side on an RGB video monitor. Digitized images overlay the live images in contrasting colors on the monitor. Such a system can be used to (1) position the subject in the field of view (FOV) by displaying the position of the scanner's slices on the monitor along with the current subject position, (2) monitor head motion and alert the operator of any motion during the study, and (3) reposition the subject accurately for subsequent studies by displaying the previous position along with the current position in a contrasting color.

  9. CCD Photometry of bright stars using objective wire mesh

    SciTech Connect

    Kamiński, Krzysztof; Zgórz, Marika [Astronomical Observatory Institute, Faculty of Physics, A. Mickiewicz University, Słoneczna 36, 60-286 Poznań (Poland)]; Schwarzenberg-Czerny, Aleksander, E-mail: chrisk@amu.edu.pl [Copernicus Astronomical Centre, ul. Bartycka 18, PL 00-716 Warsaw (Poland)]

    2014-06-01

    Obtaining accurate photometry of bright stars from the ground remains problematic due to the danger of overexposing the target and/or the lack of suitable nearby comparison stars. The century-old method of using an objective wire mesh to produce multiple stellar images seems promising for the precise CCD photometry of such stars. Our tests on ? Cep and its comparison star, differing by 5 mag, are very encouraging. Using a CCD camera and a 20-cm telescope with the objective covered by a plastic wire mesh, in poor weather conditions we obtained differential photometry with a precision of 4.5 mmag per two-minute exposure. Our technique is flexible and may be tuned to cover a brightness range as large as 6-8 mag. We discuss the possibility of installing a wire mesh directly in the filter wheel.
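
    The quoted figures can be related through the standard magnitude-flux relations; a small worked check (illustrative only, not the authors' code):

```python
import math

# A 5 mag difference between target and comparison star is a flux ratio of 100:
dm = 5.0
flux_ratio = 10 ** (dm / 2.5)                  # m1 - m2 = -2.5 log10(F1/F2)

# A 4.5 mmag photometric precision corresponds to ~0.4% relative flux error:
sigma_m = 0.0045                               # mag
sigma_rel_flux = math.log(10) / 2.5 * sigma_m  # dF/F = (ln 10 / 2.5) * dm
```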

  10. CALIBRATION OF A PMD-CAMERA USING A PLANAR CALIBRATION PATTERN TOGETHER WITH A MULTI-CAMERA SETUP

    Microsoft Academic Search

    Ingo Schiller; Christian Beder; Reinhard Koch

    We discuss the joint calibration of novel 3D range cameras based on the time-of-flight principle with the Photonic Mixing Device (PMD) and standard 2D CCD cameras. Due to the small field-of-view (fov) and low pixel resolution, PMD-cameras are difficult to calibrate with traditional calibration methods. In addition, the 3D range data contains systematic errors that need to be compensated. Therefore,

  11. Fundus Camera Guided Photoacoustic Ophthalmoscopy

    PubMed Central

    Liu, Tan; Li, Hao; Song, Wei; Jiao, Shuliang; Zhang, Hao F.

    2014-01-01

    Purpose To demonstrate the feasibility of a fundus camera guided photoacoustic ophthalmoscopy (PAOM) system and its multimodal imaging capabilities. Methods We integrated PAOM and a fundus camera consisting of a white-light illuminator and a high-sensitivity, high-speed CCD. The fundus camera captures both retinal anatomy and PAOM illumination at the same time to provide real-time feedback when we position the PAOM illuminating light. We applied the integrated system to image rat eyes in vivo and used full-spectrum, visible (VIS), and near infrared (NIR) illuminations in fundus photography. Results Both albino and pigmented rat eyes were imaged in vivo. During alignment, different trajectories of PAOM laser scanning were successfully visualized by the fundus camera, which reduced the PAOM alignment time from several minutes to 30 s. In albino eyes, in addition to retinal vessels, main choroidal vessels were observed using VIS-illumination, which is similar to PAOM images. In pigmented eyes, the radial striations of the retinal nerve fiber layer were visualized by fundus photography using full-spectrum illumination; meanwhile, PAOM imaged both retinal vessels and the retinal pigmented epithelium melanin distribution. Conclusions The results demonstrated that PAOM can be well-integrated with a fundus camera without affecting its functionality. The fundus camera guidance is faster and easier compared with our previous work. The integrated system also sets the stage for the next-step verification between oximetry methods based on PAOM and fundus photography. PMID:24131226

  12. Preventing Blooming In CCD Images

    NASA Technical Reports Server (NTRS)

    Janesick, James

    1992-01-01

    Clocking scheme for charge-coupled-device (CCD) imaging photodetector prevents smearing of bright spots and eliminates residual images. Also imposes charge-collecting electric field of optimum-full-well configuration, minimizes nonuniformities among picture elements, and keeps dark current low. Works under almost any lighting conditions.

  13. The research and development of CCD-based slab continuous casting mold copper surface imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Xingdong; Zhang, Liugang; Xie, Haihua; Long, Liaosha; Yu, Wenyong

    2011-11-01

    An imaging system for the copper surface of a slab continuous casting mold has been researched and developed to replace the on-line manual inspection method used to check copper defects such as wear, scratches, and coating loss. Method: the imaging system uses a special optical loop formed by three mirrors; the light source, CCD camera, and lens type are selected; and a mechanical transmission system and installation platform are designed. Results: the optical loop and light source allow a large-format object to be imaged in a narrow space. The CCD camera and lens determine the accuracy of the horizontal scan, while the mechanical transmission system ensures the accuracy of the vertical scan; the installation platform provides the base and mounting for the system. Conclusions: the CCD-based copper surface imaging system effectively avoids the missed detections and low efficiency of manual measurement. It can automatically and accurately capture copper surface images on-line, providing the basis for image processing, defect identification, and timely copper plate replacement.

  14. CCD Imagers at CTIO: From 0.16 to 520 Megapixels

    NASA Astrophysics Data System (ADS)

    Walker, A. R.

    2015-05-01

    The three-decade period that started in 1982 with an RCA CCD with 163840 pixels at the Blanco prime focus and today has DECam with 62 science CCDs and 520,093,696 pixels is a journey where one encounters six cameras, three optical correctors and, pre-DECam, 25 CCDs of six different types. I'll briefly describe a few highlights.
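
    The quoted totals are mutually consistent if each DECam science CCD is a 2048 × 4096 device and the 1982 RCA chip was a 512 × 320 format (both formats are assumptions the arithmetic supports):

```python
# Consistency check of the pixel counts quoted in the abstract.
pixels_total = 520_093_696        # DECam, as stated
n_ccds = 62                       # science CCDs, as stated
per_ccd = pixels_total // n_ccds
assert per_ccd == 2048 * 4096     # each science CCD is a 2k x 4k device

# The 1982 starting point: an RCA CCD with 163,840 pixels.
assert 163_840 == 512 * 320       # assumed 512 x 320 format

growth = pixels_total / 163_840   # ~3200-fold growth over three decades
```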

  15. Wide field imaging of solar system objects with an 8192 x 8192 CCD mosaic

    NASA Technical Reports Server (NTRS)

    Hall, Donald N. B.

    1995-01-01

    As part of this program, we successfully completed the construction of the world's largest CCD camera, an 8192 x 8192 CCD mosaic. The system employs eight 2K x 4K 3-edge-buttable CCDs arranged in a 2 x 4 chip mosaic. The focal plane has small gaps (less than 1 mm) between mosaic elements and measures over 120 mm x 120 mm. The initial set of frontside illuminated CCDs were developed with Loral-Fairchild in a custom foundry run. The initial lots yielded of order 20 to 25 functional devices, of which we selected the best eight for inclusion in the camera. We have designed a custom 3-edge-buttable package that ensures the CCD dies are mounted flat to plus or minus 10 microns over the entire area of the mosaic. The mosaic camera system consists of eight separate readout signal chains controlled by two independent DSP microcontrollers. These are in turn interfaced to a Sun Sparc-10 workstation through two high speed fiber optic interfaces. The system saw first-light on the Canada-France-Hawaii Telescope on Mauna Kea in March 1995. First-light on the University of Hawaii 2.2-m Telescope on Mauna Kea was in July 1995. Both runs were quite successful. A sample of some of the early science from the first light run is reported in the publication, 'Observations of Weak Lensing in Clusters with an 8192 x 8192 CCD Mosaic Camera'.
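
    The mosaic bookkeeping is easy to verify: eight 2K × 4K chips in a 2 × 4 layout tile exactly 8192 × 8192 pixels. The 16-bit digitization depth below is an assumption, not stated in the abstract:

```python
# Pixel bookkeeping for the 8192 x 8192 mosaic described above.
chip_w, chip_h = 2048, 4096             # each 3-edge-buttable CCD is 2K x 4K
n_chips = 8                             # arranged as a 2 x 4 mosaic
mosaic_pixels = n_chips * chip_w * chip_h
assert mosaic_pixels == 8192 * 8192     # 67,108,864 pixels

# At an assumed 16 bits per pixel, one raw frame is 128 MiB:
frame_mib = mosaic_pixels * 2 / 2**20
```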

  16. Making a room-sized camera obscura

    NASA Astrophysics Data System (ADS)

    Flynt, Halima; Ruiz, Michael J.

    2015-01-01

    We describe how to convert a room into a camera obscura as a project for introductory geometrical optics. The view for our camera obscura is a busy street scene set against a beautiful mountain skyline. We include a short video with project instructions, ray diagrams and delightful moving images of cars driving on the road outside.

  17. CCD sensors for interplanetary spacecraft navigation and pointing control

    NASA Astrophysics Data System (ADS)

    Eisenman, A. R.; Armstrong, R. W.

    1981-01-01

    The applications of practical optical image sensing CCDs to interplanetary spacecraft precision science payload delivery are discussed. The increasing demand for precision and flexibility in spacecraft for proposed new missions is forcing a transition to autonomous onboard navigation and pointing control. Onboard processing of video data is a crucial element in this transition and is made possible by CCD imagers and microprocessors. The pending Halley's comet close flyby is used as an example to examine details of the sensor design and the data processing which will meet the requirements. Target body centerfinding for optical navigation, target body tracking signal processing, and CCD sensor heads are examined, and flow charts and block diagrams are given for each.

  18. Stationary Camera Aims And Zooms Electronically

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steven D.

    1994-01-01

    Microprocessors select, correct, and orient portions of hemispherical field of view. Video camera pans, tilts, zooms, and provides rotations of images of objects of field of view, all without moving parts. Used for surveillance in areas where movement of camera conspicuous or constrained by obstructions. Also used for closeup tracking of multiple objects in field of view or to break image into sectors for simultaneous viewing, thereby replacing several cameras.

  19. ADDING PRIVACY CONSTRAINTS TO VIDEO-BASED APPLICATIONS

    E-print Network

    Cavallaro, Andrea

    ADDING PRIVACY CONSTRAINTS TO VIDEO-BASED APPLICATIONS. A. Cavallaro, Multimedia and Vision Lab, Queen Mary (cavallaro@elec.qmul.ac.uk). Remote accessibility to video cameras, face recognition software, and searchable image and video ... to video-based behaviour modelling, and from interactive games to video surveillance. Moreover

  20. Exposing Digital Forgeries in Video by Detecting Duplication

    E-print Network

    Farid, Hany

    Exposing Digital Forgeries in Video by Detecting Duplication. Weihong Wang, Department of Computer ... on a surveillance video, the characters in the movie Speed create a doctored video to conceal activity on the bus ... particularly in a video taken from a stationary surveillance camera. Given a video sequence f(x, y, t), t ∈ [0,

  1. Modular integrated video system

    SciTech Connect

    Gaertner, K.J.; Heaysman, B.; Holt, R.; Sonnier, C.

    1986-01-01

    The Modular Integrated Video System (MIVS) is intended to provide a simple, highly reliable closed circuit television (CCTV) system capable of replacing the IAEA Twin Minolta Film Camera Systems in those safeguards facilities where mains power is readily available, and in situations where it is desired to have the CCTV camera separated from the CCTV recording console. This paper describes the MIVS and the Program Plan which is presently being followed for the development, testing, and implementation of the system.

  2. Video Toroid Cavity Imager

    DOEpatents

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  3. Vehicle parameterization and tracking from traffic videos

    Microsoft Academic Search

    Anh Vu; K. Boriboonsomsin; M. Barth

    2010-01-01

    The popularity of surveillance cameras used in traffic management systems has produced large quantities of video data that cannot be processed easily by humans. We present a method used on high-resolution traffic surveillance videos to track and estimate vehicles' states when the cameras are mounted on moderate-height structures, typically less than 10 meters. This tracking method enables a

  4. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects proposed. Cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units restore some visual function in humans whose visual fields reduced by defects like retinitis pigmentosa.

  5. Demosaicing: Image Reconstruction from Color CCD Samples

    E-print Network

    Salvaggio, Carl

    Demosaicing: Image Reconstruction from Color CCD Samples. Ron Kimmel, Computer Science Department ... an algorithm for image reconstruction from CCD sensor samples. The proposed method involves two successive

  6. Measuring Tracking Accuracy of CCD Imagers

    NASA Technical Reports Server (NTRS)

    Stanton, R. H.; Dennison, E. W.

    1985-01-01

    Tracking accuracy and resolution of charge-coupled device (CCD) imaging arrays measured by instrument originally developed for measuring performance of star-tracking telescope. Operates by projecting one or more artificial star images on surface of CCD array, moving stars in controlled patterns, and comparing star locations computed from CCD outputs with those calculated from step coordinates of micropositioner.
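
    The comparison this instrument performs presupposes computing sub-pixel star locations from the CCD outputs. A minimal intensity-weighted centroid sketch (illustrative only; the instrument's actual algorithm is not specified here):

```python
# Sub-pixel star location by intensity-weighted centroiding over pixel values.

def centroid(image):
    """Intensity-weighted center of mass of a 2-D pixel array, as (row, col)."""
    total = sum(sum(row) for row in image)
    r_c = sum(r * sum(row) for r, row in enumerate(image)) / total
    c_c = sum(v * c for row in image for c, v in enumerate(row)) / total
    return r_c, c_c

# Synthetic "artificial star": a blob centered between pixel columns 2 and 3.
star = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 2, 1],
    [0, 2, 5, 5, 2],
    [0, 1, 2, 2, 1],
    [0, 0, 0, 0, 0],
]
r, c = centroid(star)   # recovers the sub-pixel position (2.0, 2.5)
```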

  7. Multi-camera networks Andrea Cavallaro

    E-print Network

    Cavallaro, Andrea

    Presentation slides on multi-camera networks: the current trend from centralised to distributed architectures; the scale of applications, from personal video conferencing and gaming to home energy efficiency, domotics, and medicine; and multi-target observation and state estimation, with example observation and state vectors.

  8. Improving Radar Snowfall Measurements Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2005-05-01

    A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. The recent upgrade of the UND C-band weather radar to dual-polarimetric capabilities, along with the development of the UND Glacial Ridge intensive atmospheric observation site, has presented a valuable opportunity to attempt to improve radar estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), has been deployed at the Glacial Ridge site for most of the 2004-2005 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS has proven its ability to operate continuously in the adverse conditions often observed in the Northern Plains. The RIS is able to provide crystal habit information, variability of particle size distributions for the lifecycle of the storm, snowfall rates, and estimates of snow density. This information, in conjunction with hand measurements of density and crystal habit, will be used to build a database for comparisons with polarimetric data from the UND radar. This database will serve as the basis for improving snowfall estimates using polarimetric radar observations. Preliminary results from several case studies will be presented.

  9. EXTENSIVE METRIC PERFORMANCE EVALUATION OF A 3D RANGE CAMERA

    Microsoft Academic Search

    Christoph A. Weyer; Kwang-Ho Bae; Kwanthar Lim; Derek D. Lichti

    ABSTRACT: Three dimensional (3D) range cameras measure 3D point clouds and intensity information of objects using time-of-flight methods with a CMOS/CCD array. Their emerging applications are security, surveillance, bio-mechanics and so on. Since they comprise a CMOS/CCD array, a nearest-neighbor search for individual points is not necessary, which can increase the efficiency in estimating the geometric properties of objects. This fact leads us

  10. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capacity of a high-speed camera to take high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely, CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can capture images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag control, golf training aids, and bullet trajectory measurement in the military. The high-speed camera system made in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280×1024; an FPGA and DDR2 memory that control the image sensor and store images; a Camera Link module that transmits the stored data to a PC; and an RS-422 communication function that enables control of the camera from a PC.
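
    The stated frame rate and resolution imply a demanding memory bandwidth, which is why the design buffers frames in DDR2 before the slower Camera Link transfer. A rough budget, assuming 8-bit pixels and a 2 GiB buffer (neither value is given in the abstract):

```python
# Data-rate arithmetic for the described capture chain.
width, height, fps = 1280, 1024, 500            # figures from the abstract
bytes_per_pixel = 1                             # 8-bit pixels assumed
rate = width * height * fps * bytes_per_pixel   # bytes per second
rate_mib_s = rate / 2**20                       # 625 MiB/s into the buffer

# A hypothetical 2 GiB DDR2 buffer holds only a few seconds of video,
# so recording windows must be short and transferred out afterwards:
buffer_seconds = 2 * 2**30 / rate               # ~3.3 s
```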

  11. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
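
    The sub-electron figure follows from dividing the output amplifier's read noise by the on-chip electron-multiplying gain before it is referred back to the input. A representative calculation (the 50 e- amplifier noise is an assumed value, not from the abstract):

```python
# Why on-chip multiplication yields an effective read noise below 1 electron.
read_noise_e = 50.0      # assumed output-amplifier read noise, e- rms
em_gain = 1000.0         # programmable on-chip gain, from the abstract
effective_noise_e = read_noise_e / em_gain   # input-referred noise, e- rms
```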

  12. Real-time full-field photoacoustic imaging using an ultrasonic camera.

    PubMed

    Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F; Krishnaswamy, Sridhar

    2010-01-01

    A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12 x 12 mm CCD chip with 120 x 120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified. PMID:20459240

  13. Assessment of laser-dazzling effects on TV cameras by means of pattern recognition algorithms

    NASA Astrophysics Data System (ADS)

    Durécu, Anne; Vasseur, Olivier; Bourdon, Pierre; Eberle, Bernd; Bürsing, Helge; Dellinger, Jean; Duchateau, Nicolas

    2007-10-01

    Imaging systems are widespread observation tools used to fulfil various functions such as detection, recognition, identification and video-tracking. These devices can be dazzled by intense light sources, e.g. lasers. In order to avoid such disturbance, dazzling effects in TV cameras must be better understood. In this paper we study the influence of laser dazzling on the performance of pattern recognition algorithms. The experiments were performed using a black-and-white TV-CCD camera, dazzled by a nanosecond frequency-doubled Nd:YAG laser. The camera observed a scene comprising different geometrical forms which had to be recognized by the algorithm. Different dazzling conditions were studied by varying the laser repetition rate, the pulse energy, and the position of the geometrical forms relative to the laser spot. The algorithm is based on edge detection and locates areas with forms similar to a reference symbol; as a measure of correspondence it computes the degree of correlation of the different areas. The experiments show that dazzling can severely degrade the performance of the pattern recognition algorithms by generating numerous spurious edges which mimic the reference symbol. As a consequence, dazzling has detrimental effects: it not only prevents the recognition of well-defined symbols, but also creates many false alarms.
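
    The matching step described above, scoring candidate areas against a reference symbol by their degree of correlation, can be sketched as a normalized cross-correlation search. This is an illustrative reimplementation, not the authors' algorithm:

```python
from statistics import mean, pstdev

def ncc(a, b):
    """Pearson (normalized cross-) correlation of two equal-length flat lists."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = len(a) * pstdev(a) * pstdev(b)
    return num / den if den else 0.0     # flat patches score zero

def best_match(image, tmpl):
    """Slide the template over the image; return (score, (row, col)) of the peak."""
    th, tw = len(tmpl), len(tmpl[0])
    flat_t = [v for row in tmpl for v in row]
    best = (-2.0, None)                  # any real NCC score (>= -1) beats this
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            best = max(best, (ncc(patch, flat_t), (r, c)))
    return best

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
tmpl = [[9, 9],
        [9, 0]]
score, pos = best_match(image, tmpl)     # exact match at (1, 1), score 1.0
```

    Dazzle-induced spurious edges raise the correlation score of non-target areas, which is exactly how the false alarms described above arise.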

  14. Theoretical analysis and optimization of CDS signal processing method for CCD image sensors

    Microsoft Academic Search

    Jaroslav Hynecek

    1992-01-01

    The correlated double sampling (CDS) signal processing method used in processing video signals from CCD image sensors is theoretically analyzed. CDS signal processing is frequently used to remove noise, generated by the reset operation of the floating-diffusion charge detection node, from the signal. The derived formulas for the noise power spectral density provide an invaluable
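
    The principle the analysis rests on can be stated in a few lines: two samples per pixel, taken before and after charge transfer, share the same reset level, so their difference cancels the reset (kTC) noise. A toy simulation with illustrative numbers:

```python
import random

random.seed(42)
signal = 1.2                          # video signal, arbitrary units

for _ in range(5):
    ktc = random.gauss(0.0, 0.3)      # reset (kTC) noise, new value each pixel
    sample_reset = ktc                # first sample: reset level only
    sample_video = ktc + signal       # second sample: reset level + signal
    cds_output = sample_video - sample_reset
    assert abs(cds_output - signal) < 1e-12   # the reset noise cancels exactly
```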

  15. Video-to-3D

    Microsoft Academic Search

    Marc Pollefeys; Luc Van Gool; Maarten Vergauwen; Kurt Cornelis; Frank Verbiest; Jan Tops

    In this contribution we intend to present a complete system that takes a video sequence of a static scene as input and outputs a 3D model. The system can deal with images acquired by an uncalibrated hand-held camera, with intrinsic camera parameters possibly varying during the acquisition. In a first stage features are extracted and tracked throughout the sequence. Us-

  16. Snowfall Retrivals Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2004-12-01

    A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in Eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, variability of particle size distributions for the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship from the storm. These results along with several other cases will be shown during the presentation.

  17. A Video Analysis Framework for Soft Biometry Security Surveillance

    E-print Network

    Wang, Yuan-Fang

    A Video Analysis Framework for Soft Biometry Security Surveillance. Yuan-Fang Wang, Department ... We propose a distributed, multi-camera video analysis paradigm for airport security surveillance. We ... video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration

  18. Grounding Spatial Language for Video Search Stefanie Tellex

    E-print Network

    Roy, Deb

    enable intuitive search of large databases of surveillance video. We present a mechanism for connecting ... million surveillance cameras installed, which record four billion hours of video per week [14]. ... The system takes as input a natural language query, a database of surveillance video from

  19. Hyper Suprime-Cam: performance of the CCD readout electronics

    NASA Astrophysics Data System (ADS)

    Nakaya, Hidehiko; Miyatake, Hironao; Uchida, Tomohisa; Fujimori, Hiroki; Mineo, Sogo; Aihara, Hiroaki; Furusawa, Hisanori; Kamata, Yukiko; Karoji, Hiroshi; Kawanomoto, Satoshi; Komiyama, Yutaka; Miyazaki, Satoshi; Morokuma, Tomoki; Obuchi, Yoshiyuki; Okura, Yuki; Tanaka, Manobu; Tanaka, Yoko; Uraguchi, Fumihiro; Utsumi, Yousuke

    2012-07-01

    Hyper Suprime-Cam (HSC) employs 116 2k×4k fully-depleted CCDs, with a total of 464 signal outputs, to cover the 1.5-degree-diameter field of view. The readout electronics was designed to achieve a readout noise of ~5 e- and a full-well capacity of 150,000 e- with a 20-second readout time. Although the image size exceeds 2 GB, the readout electronics also supports a 10-second readout time for the entire set of CCDs continuously. All of the readout electronics and the CCDs have already been installed in the camera dewar, and the camera has been assembled with equipment such as coolers and an ion pump. We report the readout performance of all channels of the electronics, extracted from recent test data.
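
    The per-channel rate these figures imply is modest, which is how low read noise and a 10-second full-camera readout coexist. A quick check, taking 2k × 4k as 2048 × 4096 and ignoring overscan pixels (both simplifications):

```python
# Readout-rate arithmetic implied by the HSC numbers above.
n_ccds, n_outputs = 116, 464            # from the abstract
pixels_per_ccd = 2048 * 4096            # 2k x 4k, overscan ignored
outputs_per_ccd = n_outputs // n_ccds   # 4 amplifiers per CCD
pixels_per_output = pixels_per_ccd // outputs_per_ccd

readout_s = 10.0                        # continuous-mode readout time
rate_px_s = pixels_per_output / readout_s   # ~210 kpixel/s per channel

# At 16 bits per sample the science-pixel payload alone is close to the
# quoted 2 GB; overscan regions push the stored image past it:
frame_gb = n_ccds * pixels_per_ccd * 2 / 1e9
```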

  20. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of the timing generator implemented in the camera. The generator is composed of a top module and a bottom module; the bottom one is made up of four sub-modules which correspond to four different operation modes. The modules are implemented by five VHDL programs, and frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator.
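
    In software terms, a timing generator of this kind emits a repeating table of clock states, one row per master-clock tick. A toy Python sketch of a two-phase transfer pattern (the actual design is VHDL, and its phase structure is not given above):

```python
def two_phase_pattern(ticks_per_phase):
    """Yield (phi1, phi2) clock levels for one charge-transfer cycle."""
    for _ in range(ticks_per_phase):
        yield (1, 0)              # phase 1 high, phase 2 low
    for _ in range(ticks_per_phase):
        yield (0, 1)              # phase 1 low, phase 2 high

# One cycle of the pattern, 4 master-clock ticks per phase:
pattern = list(two_phase_pattern(4))
```

    An FPGA implementation plays such a table out of registers at the master-clock rate; selecting among stored tables is what the different operation modes amount to.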

  1. Real-Time Free Viewpoint from Multiple Moving Cameras

    Microsoft Academic Search

    Vincent Nozick; Hideo Saito

    2007-01-01

    In recent years, some Video-Based Rendering methods have advanced from off-line rendering to on-line rendering. However, very few of them can handle moving cameras while recording. Moving cameras make it possible to follow an actor in a scene, come closer to capture more details, or simply adjust the framing of the cameras. In this paper, we propose a new Video-Based Rendering method

  2. Characterization of a direct detection device imaging camera for transmission electron microscopy.

    PubMed

    Milazzo, Anna-Clare; Moldovan, Grigore; Lanman, Jason; Jin, Liang; Bouwer, James C; Klienfelder, Stuart; Peltier, Steven T; Ellisman, Mark H; Kirkland, Angus I; Xuong, Nguyen-Huu

    2010-06-01

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras. PMID:20382479

  3. Content Description Servers for Networked Video Surveillance Jeffrey E. Boyd Maxwell Sayles Luke Olsen

    E-print Network

    Boyd, Jeffrey E.

    performed by a video surveillance system into cameras. The result is meta-data cameras, like MPEG-7 cameras ... demonstrate the use of these video information servers in some simple surveillance applications. Dynamic

  4. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and to manually generated ground-truth. The effectiveness has been evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually selected by a human operator.
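
    Reduced to its skeleton, the selection policy described above scores each camera view by combining bottom-up (stimulus-driven) and top-down (task-driven) cues and displays the argmax. The weights and cue values below are illustrative, not the paper's:

```python
# Hypothetical best-view selection: weighted mix of attention cues per camera.
w_bottom_up, w_top_down = 0.6, 0.4      # assumed cue weights

views = {
    "cam1": {"bottom_up": 0.2, "top_down": 0.1},
    "cam2": {"bottom_up": 0.9, "top_down": 0.5},
    "cam3": {"bottom_up": 0.4, "top_down": 0.8},
}

def score(cues):
    return w_bottom_up * cues["bottom_up"] + w_top_down * cues["top_down"]

best_view = max(views, key=lambda cam: score(views[cam]))   # -> "cam2"
```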

  5. Distributed multi-camera synchronization for smart-intruder detection Markus Spindler Fabio Pasqualetti Francesco Bullo

    E-print Network

    Bullo, Francesco

    We consider a chain of cameras installed in an environment for video surveillance, and we focus on the development of distributed and autonomous surveillance algorithms. Related problems include the detection of border changes [3] and the patrol (surveillance) of an environment [4].

  6. Composite Spatio-Temporal Event Detection in Multi-Camera Surveillance Networks

    E-print Network

    Paris-Sud XI, Université de

    With digital video surveillance (DVS), large-scale surveillance networks with video analytic features become more practical. Prior work addresses event detection in surveillance videos from single camera views; multi-camera event detection still remains an open problem. (Yun Zhai, Ying …)

  7. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation, in real time, of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the light valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
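    The patent computes the correlation optically with light valves; the same peak-finding idea can be sketched in software for a single image row. The names and the pinhole range formula Z = f·B/d below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def disparity_by_crosscorr(left_row, right_row):
    """Horizontal shift that best aligns two image rows, found as
    the peak of their 1D cross-correlation (the software analogue
    of the processor's optically computed 2D correlation)."""
    l = left_row - left_row.mean()
    r = right_row - right_row.mean()
    corr = np.correlate(l, r, mode="full")
    return int(np.argmax(corr)) - (len(r) - 1)  # peak lag = disparity in pixels

def range_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo geometry: range Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Synthetic check: a row pattern shifted by 5 pixels between views.
row = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
shifted = np.roll(row, 5)
d = disparity_by_crosscorr(shifted, row)
print(d)  # -> 5
```

    The correlation peak location encodes the target range, which is exactly the relationship the patent exploits: closer targets produce larger disparities, shifting the peak.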

  8. The Development of the Spanish Fireball Network Using a New All-Sky CCD System

    NASA Astrophysics Data System (ADS)

    Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.; Llorca, J.; Fabregat, J.; Martínez, V. J.; Reglero, V.; Jelínek, M.; Kubánek, P.; Mateo, T.; Postigo, A. De Ugarte

    2004-12-01

    We have developed an all-sky charge-coupled device (CCD) automatic system for detecting meteors and fireballs that will be operative in four stations in Spain during 2005. The cameras were developed following the BOOTES-1 prototype installed at the El Arenosillo Observatory in 2002, which is based on a 4096 × 4096 pixel CCD detector with a fish-eye lens that provides an all-sky image with enough resolution to make accurate astrometric measurements. Since late 2004, a pair of cameras at two of the four stations has operated in alternating 30 s exposures, allowing 100% time coverage. The stellar limiting magnitude of the images is +10 at the zenith and +8 below ~65° zenithal angle. As a result, the images provide enough comparison stars to make astrometric measurements of faint meteors and fireballs with an accuracy of ~2 arcminutes. Using this prototype, four automatic all-sky CCD stations have been developed, two in Andalusia and two in the Valencian Community, to start full operation of the Spanish Fireball Network. In addition to all-sky coverage, we are developing a fireball spectroscopy program using medium-field lenses with additional CCD cameras. Here we present the first images obtained from the El Arenosillo and La Mayora stations in Andalusia during their first months of activity. The detection of the Jan 27, 2003 superbolide of -17 ± 1 absolute magnitude that overflew Algeria and Morocco is an example of the detection capability of our prototype.

  9. Online camera-gyroscope autocalibration for cell phones.

    PubMed

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope plays a key role in estimating 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation to a rolling-shutter camera model for cell phones. The proposed method estimates the needed calibration and synchronization parameters online under all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values. PMID:25265608
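    A common way to recover the gyroscope-to-video timestamp offset (a simpler alternative to the paper's EKF, which estimates it jointly with the calibration) is to cross-correlate the gyroscope angular rate against the rotation rate derived from camera frames. A toy sketch with hypothetical names, assuming both signals are resampled to the same uniform grid:

```python
import numpy as np

def estimate_time_offset(gyro_rate, cam_rate, dt):
    """Timestamp offset between two angular-rate signals, found as
    the cross-correlation peak lag times the sample interval dt."""
    g = gyro_rate - np.mean(gyro_rate)
    c = cam_rate - np.mean(cam_rate)
    corr = np.correlate(g, c, mode="full")
    lag = int(np.argmax(corr)) - (len(c) - 1)
    return lag * dt

# Synthetic check: gyro trace delayed by 12 samples of 5 ms each.
t = np.linspace(0.0, 4.0, 800)
rate = np.sin(2.0 * np.pi * 1.3 * t)
offset = estimate_time_offset(np.roll(rate, 12), rate, dt=0.005)
print(round(offset, 3))  # -> 0.06
```

    This only recovers a constant offset; handling drift or rolling-shutter row timing requires the kind of joint filtering the paper proposes.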

  10. Matching a curved focal plane with CCD's - Wide field imaging of glancing incidence X-ray telescopes

    NASA Technical Reports Server (NTRS)

    Nousek, J. A.; Garmire, G. P.; Ricker, G. R.; Bautz, M. W.; Levine, A. M.; Collins, S. A.

    1987-01-01

    The design of a wide-field imaging camera suitable for use with a glancing-incidence X-ray telescope is complicated by the sharply concave optimum focal surface of such a telescope. A camera made up of a mosaic of CCDs is being designed for flight aboard the Advanced X-ray Astrophysics Facility (AXAF). The design rationale and tradeoffs are discussed, and the layout of the imaging CCD array is presented. The related issue of optimizing the performance of transmission objective gratings is also discussed, and the array of CCD orientations suitable for this problem is presented.

  11. Caught on Video

    ERIC Educational Resources Information Center

    Sprankle, Bob

    2008-01-01

    When cheaper video cameras with built-in USB connectors were first introduced, the author relates that he pined for one to introduce the technology into the classroom. He believed it would not only be a great tool for students to capture their own learning but would also make his job of collecting authentic assessment more streamlined…

  12. 1 Introduction Surround video

    E-print Network

    Nielsen, Frank

    Surround video: a multihead camera approach. Frank Nielsen, Sony Computer Science Laboratories, Tokyo, Japan. E-mail: Frank.Nielsen@acm.org. Published online: 3 February 2005, © Springer. Capturing wide fields of view traces back to painters: Renaissance painting is seen as a key period in which many projection methods were discovered.

  13. Why camera cars do not fix the problem: A study of technology use and racial profiling

    Microsoft Academic Search

    Jennifer J Dierickx

    2008-01-01

    Over the last decade, reports of police and citizen misconduct have continued to surface through mainstream media channels. This behavior can be displayed publicly because of video footage taken either by television crews or by in-car video cameras. The in-car video camera is a relatively new form of technology for police organizations. As such, it has been

  14. Synchronizing A Television Camera With An External Reference

    NASA Technical Reports Server (NTRS)

    Rentsch, Edward M.

    1993-01-01

    The improvement to the genlock subsystem consists of incorporating a controllable delay circuit into the path of the composite synchronization signal obtained from an external video source. The delay circuit helps eliminate potential jitter in the video display and ensures that the setup requirements for the video camera's digital timing circuits are satisfied.

  15. MultiCam permits use of two or more webcams simultaneously for video chat

    E-print Network

    MacCormick, John

    MultiCam permits the use of two or more webcams simultaneously for video chat in existing chat applications. The main tool for this investigation is MultiCam, a new video chat plugin. Is multiple-camera video chat useful and/or desirable? Answer: yes, for certain

  16. STS-134 Launch Composite Video Comparison - Duration: 56 seconds.

    NASA Video Gallery

    A side-by-side comparison video shows a one-camera view of the STS-134 launch (left) with the six-camera composited view (right). Imaging experts funded by the Space Shuttle Program and located at ...

  17. Mechanical design of the LSST camera

    Microsoft Academic Search

    Martin Nordby; Gordon Bowden; Mike Foss; Gary Guiffre; John Ku; Rafe Schindler

    2008-01-01

    The LSST camera is a tightly packaged, hermetically sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2-gigapixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand

  18. Towards intelligent networked video surveillance for the detection of suspicious behaviours

    E-print Network

    Brooks, Mike

    Video camera surveillance is becoming increasingly pervasive in outdoor city environments. We discuss the detection of suspicious behaviours in video surveillance and suggest that graph-based representations will prove valuable in taking

  19. The Dark Energy Camera (DECam)

    E-print Network

    Honscheid, K; Abbott, T; Annis, J; Antonik, M; Barcel, M; Bernstein, R; Bigelow, B; Brooks, D; Buckley-Geer, E; Campa, J; Cardiel, L; Castander, F; Castilla, J; Cease, H; Chappa, S; Dede, E; Derylo, G; Diehl, T; Doel, P; De Vicente, J; Eiting, J; Estrada, J; Finley, D; Flaugher, B; Gaztañaga, E; Gerdes, D; Gladders, M; Guarino, V; Gutíerrez, G; Hamilton, J; Haney, M; Holland, S; Huffman, D; Karliner, I; Kau, D; Kent, S; Kozlovsky, M; Kubik, D; Kühn, K; Kuhlmann, S; Kuk, K; Leger, F; Lin, H; Martínez, G; Martínez, M; Merritt, W; Mohr, J; Moore, P; Moore, T; Nord, B; Ogando, R; Olsen, J; Onal, B; Peoples, J; Qian, T; Roe, N; Sánchez, E; Scarpine, V; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Selen, M; Shaw, T; Simaitis, V; Slaughter, J; Smith, C; Spinka, H; Stefanik, A; Stuermer, W; Talaga, R; Tarle, G; Thaler, J; Tucker, D; Walker, A; Worswick, S; Zhao, A

    2008-01-01

    In this paper we describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). It consists of a large mosaic CCD focal plane, a five-element optical corrector, five filters (g,r,i,z,Y), a modern data acquisition and control system, and the associated infrastructure for operation in the prime focus cage. The focal plane includes 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view and 12 smaller 2K x 2K CCDs for guiding, focus, and alignment. The CCDs will be 250 micron thick fully depleted CCDs developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled ...
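    The quoted figures are mutually consistent; a quick back-of-envelope check of the focal-plane coverage, using only the numbers given in the abstract:

```python
# Back-of-envelope check of DECam's quoted focal-plane coverage.
PIXEL_SCALE_ARCSEC = 0.27          # quoted plate scale, arcsec/pixel
N_CCDS = 62                        # science CCDs in the mosaic
PIXELS_PER_CCD = 2048 * 4096       # 2K x 4K format

ARCSEC_PER_DEG = 3600.0
sq_deg = N_CCDS * PIXELS_PER_CCD * (PIXEL_SCALE_ARCSEC / ARCSEC_PER_DEG) ** 2
print(f"approximate active area: {sq_deg:.2f} sq. deg.")  # -> 2.93 sq. deg.
```

    About 2.93 sq. deg. of active silicon, matching the "3 sq. deg." camera figure once gaps between CCDs in the hexagonal mosaic are allowed for.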

  20. Video rate bioluminescence imaging of secretory proteins in living cells: localization, secretory frequency, and quantification.

    PubMed

    Suzuki, Takahiro; Kondo, Chihiro; Kanamori, Takao; Inouye, Satoshi

    2011-08-15

    We have developed a method of video rate bioluminescence imaging to investigate protein secretion from a single mammalian cell and analyzed the localization, secretory frequency, and quantity of secreted protein. By detecting luminescence signals from the Gaussia luciferase (GLase) reaction with a high-speed electron-multiplying charge-coupled device (EM-CCD) camera, video rate imaging was performed with a time resolution within 500 ms/image over 30 min in living cells. As a model study, we applied the method to visualize glucose-stimulated insulin secretion from clustered pancreatic MIN6 β cells using a fusion protein of GLase with preproinsulin. High-quality video images clearly showed that glucose-stimulated insulin secretion from the clustered MIN6 β cells oscillated with a period of a few minutes over 10 min. In addition, glibenclamide-induced insulin secretion from the clustered MIN6 β cells was visualized, suggesting that bioluminescence video rate imaging is a useful method for validating drug action in living cells. PMID:21477579