These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Video camera signal processing IC with CCD delay lines  

Microsoft Academic Search

A video camera signal-processing IC that includes CCD (charge coupled device) delay lines (DLs) and lowpass filters has been developed using a Bi-CMOS/CCD process. The IC greatly contributes to reduced size and improved reliability in video movie circuits. The IC incorporates four 1H CCD-DLs and five lowpass filters, thereby making the circuit board space more compact. Automatic adjustment of CCD-DL

T. Kiyofuji; M. Yoshida; M. Aso; N. Murakami; S. Miyazaki

1990-01-01

2

CCD Luminescence Camera  

NASA Technical Reports Server (NTRS)

New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence is typically found.

Janesick, James R.; Elliott, Tom

1987-01-01

3

Operating Manual CCD Camera Models  

E-print Network

Operating Manual for CCD Camera Models ST-7E, ST-8E, ST-9E, ST-10E and ST-1001E, Santa Barbara Instrument Group. Table-of-contents excerpt: 1.2.2. CCD Camera; 2. Introduction to CCD Cameras

Natelson, Douglas

4

Operating Manual CCD Camera Models  

E-print Network

Operating Manual for CCD Camera Models ST-7XE, ST-8XE, ST-9XE, ST-10XE, ST-10XME and ST-2000XM. Table-of-contents excerpt: 1.2.7. Capturing Images with the CCD Camera; 2. Introduction to CCD Cameras

5

Omnifocus video camera  

NASA Astrophysics Data System (ADS)

The omnifocus video camera takes videos in which objects at different distances are all in focus in a single video display. The omnifocus video camera consists of an array of color video cameras combined with a unique distance-mapping camera called the Divcam. The color video cameras are all aimed at the same scene, but each is focused at a different distance. The Divcam provides real-time distance information for every pixel in the scene. A pixel selection utility uses the distance information to select individual pixels from the multiple video outputs focused at different distances, in order to generate the final single video display that is everywhere in focus. This paper presents the principle of operation, design considerations, detailed construction, and overall performance of the omnifocus video camera. The major emphasis of the paper is the proof of concept, but the prototype has been developed enough to demonstrate the superiority of this video camera over a conventional video camera. The resolution of the prototype is high, capturing even fine details such as fingerprints in the image. Just as the movie camera was a significant advance over the still camera, the omnifocus video camera represents a significant advance over all-focus cameras for still images.

Iizuka, Keigo

2011-04-01
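
The record above describes selecting, for every pixel, the output of whichever camera is focused closest to that pixel's measured distance. The following is a minimal sketch of that pixel-selection idea only, not the authors' implementation; the input arrays and the list of per-camera focus distances are assumptions for illustration.

    import numpy as np

    def omnifocus_composite(frames, depth_map, focus_distances):
        """Pick each pixel from the camera whose focus distance is closest
        to the per-pixel depth estimate (Divcam-style distance map).

        frames: list of HxWx3 arrays, one per camera, registered to the same view.
        depth_map: HxW array of per-pixel distances (metres).
        focus_distances: distance (metres) each camera is focused at.
        """
        stack = np.stack(frames)                       # (N, H, W, 3)
        dists = np.asarray(focus_distances)            # (N,)
        # Index of the best-focused camera for every pixel.
        best = np.abs(depth_map[None, :, :] - dists[:, None, None]).argmin(axis=0)
        h_idx, w_idx = np.indices(depth_map.shape)
        return stack[best, h_idx, w_idx]               # (H, W, 3) all-in-focus composite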

6

Multiple Sensor Camera for Enhanced Video Capturing  

NASA Astrophysics Data System (ADS)

Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
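
As a back-of-the-envelope check of the pixel-transfer-rate limitation mentioned in the abstract above, the short calculation below uses only the figures quoted in the record and shows that both capture modes of the prototype sit in the same roughly 20 Mpixel/s throughput range. This is an illustrative calculation added here, not part of the original paper.

    # Pixel throughput of the two sensors quoted in the record above.
    hi_res_rate  = 2588 * 1958 * 3.75   # high-resolution sensor: pixels/frame * fps
    hi_fps_rate  = 500 * 500 * 90       # high-frame-rate sensor
    print(f"high-resolution sensor: {hi_res_rate / 1e6:.1f} Mpixel/s")   # ~19.0
    print(f"high-frame-rate sensor: {hi_fps_rate / 1e6:.1f} Mpixel/s")   # ~22.5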

7

Calibrating and characterizing intensified video cameras radiometrically  

Microsoft Academic Search

Multispectral, hyperspectral, and polarization filters have been shown to provide additional discriminants when searching for mines and other obstacles, but they demand more illumination for the sensing system. Conventional CCD video cameras, when used through such filters, fail at sunset or soon after. It is tempting to employ an automatic-gain intensified camera to push this time deeper into the night

Harold R. Suiter; Chuong N. Pham; Kenneth R. Tinsley

2003-01-01

8

Protocols conversion in remote controlling for CCD camera  

NASA Astrophysics Data System (ADS)

In the industrial and network monitoring fields, several protocols such as Pelco D/P are used for remote operation and are widely applied to control pan/tilt/zoom (PTZ) camera systems. For general-purpose CCD cameras, however, many incompatible communication protocols have been developed by different manufacturers. To extend these cameras' application in the remote monitoring field and improve their compatibility with controlling terminals, it is necessary to design a reliable protocol conversion module. This paper aims at realizing the recognition and conversion of different protocols for CCD cameras. The protocol conversion principle and algorithm are analyzed to implement instruction transformation for arbitrary camera protocols. An example is demonstrated by converting Protocol Pelco D/P into Protocol 54G30 using a microcontroller unit (MCU). High-performance hardware and a fast software algorithm were designed for an efficient conversion process. By means of a serial communication assistant, a video server and a PTZ control keyboard, the stability and reliability of this module were finally validated.

Lin, Jiaming; Liu, Jinhua; Wang, Yanqin; Yang, Longrong

2008-03-01
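
The record above converts between PTZ control protocols. As a hedged illustration of the general idea only (not the paper's algorithm), the sketch below parses a standard 7-byte Pelco-D frame and re-emits a frame in a made-up target format; the real 54G30 framing is not given in the record, so encode_target is purely a hypothetical placeholder.

    def parse_pelco_d(frame: bytes) -> dict:
        """Parse a 7-byte Pelco-D frame: sync, addr, cmd1, cmd2, data1, data2, checksum."""
        if len(frame) != 7 or frame[0] != 0xFF:
            raise ValueError("not a Pelco-D frame")
        if sum(frame[1:6]) % 256 != frame[6]:
            raise ValueError("bad checksum")
        return {"addr": frame[1], "cmd1": frame[2], "cmd2": frame[3],
                "pan_speed": frame[4], "tilt_speed": frame[5]}

    def encode_target(cmd: dict) -> bytes:
        """Hypothetical encoder for the target protocol (illustrative layout only;
        the actual 54G30 format is not documented in the record)."""
        payload = bytes([0x02, cmd["addr"], cmd["cmd2"], cmd["pan_speed"], cmd["tilt_speed"]])
        return payload + bytes([sum(payload) & 0xFF, 0x03])

    # Example: convert a Pelco-D "pan right at speed 0x20" frame for camera address 1.
    pelco = bytes([0xFF, 0x01, 0x00, 0x02, 0x20, 0x00, 0x23])
    print(encode_target(parse_pelco_d(pelco)).hex())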

9

Application of the CCD camera in medical imaging  

NASA Astrophysics Data System (ADS)

Medical fluoroscopy is a set of radiological procedures used in medical imaging for functional and dynamic studies of the digestive system. Major components in the imaging chain include an image intensifier, which converts x-ray information into an intensity pattern on its output screen, and a CCTV camera, which converts the output-screen intensity pattern into video information to be displayed on a TV monitor. Responding properly to such a wide dynamic range on a real-time basis, as in a fluoroscopy procedure, is very challenging. Also, as in all other medical imaging studies, detail resolution is of great importance. Without proper contrast, spatial resolution is compromised. The many inherent advantages of the CCD make it a suitable choice for dynamic studies. Recently, CCD cameras have been introduced as the camera of choice for medical fluoroscopy imaging systems. The objective of our project was to investigate a newly installed CCD fluoroscopy system in the areas of contrast resolution, detail, and radiation dose.

Chu, Wei-Kom; Smith, Chuck; Bunting, Ralph; Knoll, Paul; Wobig, Randy; Thacker, Rod

1999-04-01

10

Solid state television camera (CCD-buried channel)  

NASA Technical Reports Server (NTRS)

The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

1976-01-01

11

Solid state television camera (CCD-buried channel), revision 1  

NASA Technical Reports Server (NTRS)

An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

1977-01-01

12

Vacuum compatible miniature CCD camera head  

DOEpatents

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

Conder, Alan D. (Tracy, CA)

2000-01-01

13

Jack & the Video Camera  

ERIC Educational Resources Information Center

This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

Charlan, Nathan

2010-01-01

14

The QUEST Large Area CCD Camera  

E-print Network

We have designed, constructed and put into operation a very large area CCD camera that covers the field of view of the 1.2 m Samuel Oschin Schmidt Telescope at the Palomar Observatory. The camera consists of 112 CCDs arranged in a mosaic of four rows with 28 CCDs each. The CCDs are 600 x 2400 pixel Sarnoff thinned, back illuminated devices with 13 um x 13 um pixels. The camera covers an area of 4.6 deg x 3.6 deg on the sky with an active area of 9.6 square degrees. This camera has been installed at the prime focus of the telescope, commissioned, and scientific quality observations on the Palomar-QUEST Variability Sky Survey were started in September of 2003. The design considerations, construction features, and performance parameters of this camera are described in this paper.

Charlie Baltay; David Rabinowitz; Peter Andrews; Anne Bauer; Nancy Ellman; William Emmet; Rebecca Hudson; Thomas Hurteau; Jonathan Jerke; Rochelle Lauer; Julia Silge; Andrew Szymkowiak; Brice Adams; Mark Gebhard; James Musser; Michael Doyle; Harold Petrie; Roger Smith; Robert Thicksten; John Geary

2007-02-21

15

Validation of Global Illumination Simulations through CCD Camera Measurements  

E-print Network

In this paper we present a technique for calibrating a CCD camera ... which includes a calibrated integrating sphere light source and a scientific-grade CCD camera for measurement

Ferwerda, James A.

16

Calibration of CCD cameras for field and frame capture modes  

Microsoft Academic Search

Small format, medium resolution CCD cameras are at present widely used for industrial metrology applications because they are readily available and relatively inexpensive. The calibration of CCD cameras is necessary in order to characterise the geometry of the sensors and lenses. In cases where a static or slowly moving object is to be imaged, frame capture mode is most often

Mark R. Shortis; Walter L. Snow

17

Calibration of CCD cameras for field and frame capture modes  

Microsoft Academic Search

Small format, medium resolution CCD cameras are at present widely used for industrial metrology applications because they are readily available and relatively inexpensive. The calibration of CCD cameras is necessary in order to characterize the geometry of the sensors and lenses. In cases where a static or slowly moving object is to be imaged, frame capture mode is most often

Mark R. Shortis; Walter L. Snow

1995-01-01

18

Apogee Imaging Systems Alta F42 CCD Camera with Back-Illuminated EV2 CCD42-40 Sensor

E-print Network

The Apogee Alta F42 CCD Camera has a back-illuminated full frame 4-megapixel EV2 CCD42-40 sensor with very high quantum efficiency. Midband, broadband, and UV-enhanced versions of the CCD are available. Ideal

Kleinfeld, David

19

CCD Camera Operating Manual Model ST-4X, ST-5 and ST-6  

E-print Network

CCD Camera Operating Manual for the Model ST-4X, ST-5 and ST-6, Santa Barbara Instrument Group. Table-of-contents excerpt: 1.2.2. CCD Camera; 2. Introduction to CCD Cameras

20

Signal-Processing ICs Employed in A Single-Chip CCD Color Camera  

Microsoft Academic Search

As home videotape recorders have come into wide use, a need for a reduction in the size, weight, and power consumption of color video cameras has become obvious. This need can be met using a solid-state imager such as a CCD, but only if at the same time system ICs are used for the circuits. For this purpose signal-processing ICs

Makoto Onga; Toshimichi Nishimura; Shigeo Komuro; Tetsuya Iizuka; Masaaki Tsuruta; Toshiharu Kondo; Seisuke Yamanaka

1984-01-01

21

High-speed camera with a back-thinned 16-port frame transfer CCD sensor  

Microsoft Academic Search

A high-frame-rate CCD camera is described, based on a new, back-thinned, 512×512 pixel frame transfer sensor with 16 video ports. The sensor allows for imaging beyond the visible range within a large dynamic range. Circuits for testing and evaluation of the sensor over a large range of charge transfer clock frequencies, and the measured data, are presented.

B. T. Turko; G. J. Yates; K. L. Albright; C. R. Pena

1994-01-01

22

The SXI: CCD camera onboard the NeXT mission  

E-print Network

The Soft X-ray Imager (SXI) is the X-ray CCD camera on board the NeXT mission that is to be launched around 2013. We are going to employ the CCD chips developed at Hamamatsu Photonics, K.K. We have been developing two types ...

Bautz, Marshall W.

23

Intensified/shuttered cooled CCD camera for dynamic proton radiography

Microsoft Academic Search

An intensified/shuttered cooled PC-based CCD camera system was designed and successfully fielded on proton radiography experiments at the Los Alamos National Laboratory LANSCE facility using 800-MeV protons. The four-camera detector system used front-illuminated full-frame CCD arrays fiber-optically coupled to either 25-mm diameter planar diode or microchannel plate image intensifiers, which provided optical shuttering for time-resolved imaging of

George J. Yates; Kevin L. Albright; K. R. Alrick; Robert A. Gallegos; J. Galyardt; Norman T. Gray; Gary E. Hogan; Vanner H. Holmes; Steven A. Jaramillo; Nicholas S. King; Thomas E. McDonald; Kevin B. Morley; Christopher L. Morris; Dustin Numkena; Peter D. Pazuchanics; C. M. Riedel; J. S. Sarracino; Hans-Joachim Ziock; John Zumbro

1998-01-01

24

Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD  

NASA Astrophysics Data System (ADS)

We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec, and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots at up to 1,000 frames/second. In an experiment, video footage was captured at an athletics meet. Thanks to the high-speed shooting, even detailed movements of the athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be produced for various TV broadcasting programs.

Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

2007-01-01

25

Video camera use at nuclear power plants  

SciTech Connect

A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs.

Estabrook, M.L.; Langan, M.O.; Owen, D.E. (ENCORE Technical Resources, Inc., Middletown, PA (USA))

1990-08-01

26

Solid-State Video Camera for the Accelerator Environment  

SciTech Connect

Solid-State video cameras employing CMOS technology have been developed and tested for several years in the SLAC accelerator, notably the PEPII (BaBar) injection lines. They have proven much more robust than their CCD counterparts in radiation areas. Repair is simple, inexpensive, and generates very little radioactive waste.

Brown, R.L.V.; Roster, B.H.; Yee, C.K. [Stanford Linear Accelerator Center, Menlo Park, CA 94309 (United States)

2004-11-10

27

Printed circuit board for a CCD camera head  

DOEpatents

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

Conder, Alan D. (Tracy, CA)

2002-01-01

28

Large Format, Dual Head, Triple Sensor, Self-Guiding CCD Cameras

E-print Network

STL-1001E: ... The Research Line ... to the main imaging CCD. One guiding CCD is located next to the imaging CCD in our patented design, similar to the self-guiding arrangement in our ST line of cameras. The remote guiding CCD is in a small separate head

Walter, Frederick M.

29

New 1000x1000 frame transfer CCD camera for high-speed high-fidelity digital imaging  

NASA Astrophysics Data System (ADS)

DALSA's new CA-D4-1024 camera is the state-of-the-art in high-resolution, high-speed, high-fidelity CCD area array image capture camera technology. Its features marry the best of both high-frame-rate cameras and slow-scan scientific cameras. These include ultra-low dark current at room temperature operation, low noise, high signal capacity, high fill factor, electronic shuttering, low smear, anti-blooming, 1000 x 1000 resolution, progressive scan, user-controlled frame rates up to 40 f.p.s., in-camera digitization, compact camera size, low camera power dissipation, and both 8- and 12-bit video digitization options.

Litwiller, David J.

1997-04-01

30

High-resolution, low-light, image-intensified CCD camera  

Microsoft Academic Search

The maturing camera industry has been employing RS-170-formatted image-intensified CCD cameras. This left some users with limited image resolution. In a typical RS-170 camera, the CCD sensor's resolving pixel density is on the order of 500 x 400 pixels. To surmount this limitation for a particular user, a high-resolution image-intensified CCD camera

Satoru C. Tanaka; Tom Silvey; Greg Long; Bill Braze

1991-01-01

31

Quantitative Comparison of Commercial CCD and Custom-Designed CMOS Camera for Biological  

E-print Network

... systems where even the smallest details have a meaning, CCD cameras are mostly preferred and they hold ... camera to compete with the default CCD camera of an inverted microscope for fluorescence imaging

De Micheli, Giovanni

32

CCD-camera-based head-up display test station  

Microsoft Academic Search

Measurement of the parameters required to calibrate and acceptance-test a head-up display has traditionally been made using a photometer and theodolite. As HUD complexity has increased, this has become an increasingly lengthy process subject to error. The concept of using a calibrated CCD camera to combine the functions in a highly automated system is not new, but creating

Christopher T. Bartlett; Mark H. Handley

2002-01-01

33

An ASIC for Delta Sigma digitization of technical CCD video  

NASA Astrophysics Data System (ADS)

Delta Sigma digitizers generally have excellent linearity, precision and noise rejection. They are especially well suited for implementation as integrated circuits. However, they are rarely used for time bounded signals like CCD pixels. We are developing a CCD video digitizer chip incorporating a novel variant of the Delta Sigma architecture that is especially well suited for this application. This architecture allows us to incorporate video filtering and correlated double sampling into the digitizer itself, eliminating the complex analog video processing usually needed before digitization. We will present details of a multichannel ASIC design that will achieve spectroscopic precision and linearity while using much less energy than previous CCD digitizers for technical applications such as imaging X-ray spectroscopy. The low conversion energy requirement together with the ability to integrate many channels will enable us to construct fast CCD systems that require no cooling and can handle a much wider range of X-ray intensity than existing X-ray CCD systems.

Doty, John P.; Matsuura, D.; Ozawa, H.; Miyata, E.; Tsunemi, H.; Ikeda, H.

2006-06-01
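
The record above concerns a Delta Sigma digitizer for CCD video. As a hedged illustration of the general Delta Sigma principle only (the multichannel ASIC, its correlated double sampling and its video filtering are not modeled here), below is a toy first-order modulator with simple averaging decimation; the oversampling ratio and input values are illustrative assumptions.

    import numpy as np

    def first_order_delta_sigma(x, osr=256):
        """Toy first-order Delta Sigma modulator: oversample each input sample
        `osr` times into a 1-bit stream, then decimate by averaging.
        Inputs are assumed normalised to [-1, 1]."""
        out = np.empty_like(x, dtype=float)
        integrator = 0.0
        for i, sample in enumerate(x):
            ones = 0
            for _ in range(osr):
                # Integrate the difference between input and 1-bit feedback.
                integrator += sample - (1.0 if integrator > 0 else -1.0)
                ones += 1 if integrator > 0 else 0
            out[i] = 2.0 * ones / osr - 1.0   # decimated estimate of the input
        return out

    print(first_order_delta_sigma(np.array([0.25, -0.5, 0.8])))  # approx the inputs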

34

A novel calibration method of CCD camera for LAMOST  

NASA Astrophysics Data System (ADS)

The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), with a 1.75 m diameter focal plane on which 4000 optical fibers are arranged, is one of the major scientific projects in China. During LAMOST survey observations, the optical imaging system images the astronomical objects on the focal plane, and the optical fiber positioning system controls the 4000 fibers so that they are aligned with these objects and obtain their spectra. In order to correct the positioning errors of these optical fibers, a CCD camera is used to detect the fibers' positions by means of close-range photogrammetry. The calibration quality of the CCD camera is one of the most important factors for detection precision. However, camera calibration faces two problems in the field work of LAMOST. First, the camera parameters are not stable, due to changes in the on-site working environment and vibration during movement, so the CCD camera must be calibrated on-line. Second, a large, high-precision calibration target is needed to calibrate the camera because the focal plane is very large; making such a calibration target is difficult and costly, and the large target is hard to mount on LAMOST because of space constraints. In this paper, an improved bundle adjustment self-calibration method is proposed to solve these two problems. Experimental results indicate that this novel calibration method needs only a few control points, whereas traditional calibration methods need many more control points to reach the same accuracy. The method could therefore realize on-line, high-precision calibration of the CCD camera for LAMOST.

Gu, Yonggang; Jin, Yi; Zhai, Chao

2012-09-01

35

Digital video camera workshop Sony VX2000  

E-print Network

Digital video camera workshop: Sony VX2000, Sony DSR-PDX10. Slide excerpts: Borrowing Eligibility - Currently ...; ... are excellent for all types of video work; Both use MiniDV format (60 min SP / 90 min LP); Camcorder Kits - FireWire; Video Camera Operation - Installing the Battery (Sony VX2000): Insert the battery with the arrow

36

Developments in the EM-CCD camera for OGRE  

NASA Astrophysics Data System (ADS)

The Off-plane Grating Rocket Experiment (OGRE) is a sub-orbital rocket payload designed to advance the development of several emerging technologies for use on space missions. The payload consists of a high resolution soft X-ray spectrometer based around an optic made from precision cut and ground, single crystal silicon mirrors, a module of off-plane gratings and a camera array based around Electron Multiplying CCD (EM-CCD) technology. This paper gives an overview of OGRE with emphasis on the detector array; specifically this paper will address the reasons that EM-CCDs are the detector of choice and the advantages and disadvantages that this technology offers.

Tutt, James H.; McEntaffer, Randall L.; DeRoo, Casey; Schultz, Ted; Miles, Drew M.; Zhang, William; Murray, Neil J.; Holland, Andrew D.; Cash, Webster; Rogers, Thomas; O'Dell, Steve; Gaskin, Jessica; Kolodziejczak, Jeff; Evagora, Anthony M.; Holland, Karen; Colebrook, David

2014-07-01

37

System Synchronizes Recordings from Separated Video Cameras  

NASA Technical Reports Server (NTRS)

A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

2009-01-01
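
The "slightly more than 136 years" figure quoted above is consistent with a time code that counts seconds in a 32-bit field. The short check below is only that arithmetic, added for illustration; the record does not specify the actual internal format of the Geo-TimeCode.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600        # Julian year, in seconds
    span_years = 2**32 / SECONDS_PER_YEAR        # span of a 32-bit seconds counter
    print(f"{span_years:.1f} years")             # ~136.1 years before rollover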

38

Optics Laboratory #1 Session #1: Geometrical Optics and Calibration of the CCD Camera  

E-print Network

Goals: ... the spatial scale of the CCD camera. Observe the Gaussian intensity profile of a laser beam ... into the CCD camera. Increase the attenuation with OD (optical density) filters, in combination

Braun, Paul

39

High-speed CCD movie camera with random pixel selection, for neurobiology research  

E-print Network

We have designed and built a CCD camera capable of producing movies at over 1000 frames per second ... the gap between fast, small photodiode arrays and slow, high-resolution scientific CCD cameras

40

Engineer reconnaissance with a video camera: feasibility study  

E-print Network

campaign from Field Marshal Montgomery's failed seizure of a "bridge too far" in Arnheim, Holland to the Soviet Union's unique use of ice bridges to resupply the defenders of Stalingrad. From a military standpoint, bridges either exist or need... OBJECTIVES: This study will evaluate the feasibility of the acquisition of remotely sensed engineer reconnaissance data (through the use of a charge-coupled device (CCD) video camera, laser rangefinder and angle-measuring capability) and subsequent...

Bergner, Kirk Michael

2012-06-07

41

Television camera video level control system  

NASA Technical Reports Server (NTRS)

A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (inventors)

1985-01-01
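
The abstract above describes two control ranges: an iris feedback loop that normalizes the signal first, and an AGC loop that maintains it in a second illumination range. The sketch below is a generic two-stage level-control step written to illustrate that idea; it is not the patented circuit, and the target level, gain limits and loop constant are assumed values.

    def level_control_step(scene_luminance, iris, gain,
                           target=0.5, k=0.2,
                           iris_lim=(0.05, 1.0), gain_lim=(1.0, 8.0)):
        """One update of a simple two-range video level control (illustrative only):
        the iris loop corrects first; once the iris is pinned at an end stop
        (second illumination range), the AGC gain continues the correction."""
        video = scene_luminance * iris * gain          # normalised output level
        error = target - video
        iris = min(max(iris + k * error, iris_lim[0]), iris_lim[1])
        if iris == iris_lim[0] or iris == iris_lim[1]: # iris saturated -> AGC takes over
            gain = min(max(gain + k * error, gain_lim[0]), gain_lim[1])
        return iris, gain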

42

Wide dynamic range video camera

NASA Technical Reports Server (NTRS)

A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

Craig, G. D. (inventor)

1985-01-01

43

MOIRÉ PATTERNS FROM A CCD CAMERA - Are they annoying artifacts or can they be useful?

E-print Network

When repetitive high-frequency patterns appear in the view of a charge-coupled device (CCD) camera ... is imaged using a grey-scaled CCD camera. The characteristics of the observed Moiré patterns are described

Goh, Wooi Boon

44

Color Measurement of Printed Textile using CCD Cameras

E-print Network

... the overall print quality. In this article, we investigate whether all printed colors can be detected by CCD ... to inspect textile color printing with cyan (C), magenta (M), yellow (Y) and black (K) inks using CCD cameras

Gevers, Theo

45

Modeling of automobile part based on CCD camera system  

NASA Astrophysics Data System (ADS)

This paper discusses a method of digital image acquisition with a CCD camera and a process of image enhancement in accordance with the characteristics of the CCD. We also developed a method of transforming BIN image files to BMP files for use in a Windows program. Digital images suitable for coordinate measurement were acquired after image enhancement, including histogram analysis of the BMP image, equalization, brightness and contrast adjustment, noise elimination, and sharpening. We also acquired pixel coordinates for target points on the object and transformed them to image coordinates. Next, we selected an automobile part, executed bundle adjustment both by the method of this study and by existing photogrammetric surveying, and examined the 3D accuracy and efficiency. In addition, we suggest a 3D measurement method applicable to a wide range of industry, achieved through a fast and efficient modeling technique based on digital photogrammetry.

Han, Seung-Hee

1995-09-01

46

The CCD Camera Testing Instrument for the BigBOSS Fiber Positioner

NASA Astrophysics Data System (ADS)

Throughput of a fiber-robot-based multi-object spectrograph depends on the accuracy and precision of the fiber position system. An efficient and accurate method of quantifying the performance of an actuator is necessary during the design iteration process, final design, and for post-production characterization. A CCD camera-based optical setup was developed at the Lawrence Berkeley National Laboratory to test these parameters of fiber robot positioners. The setup is described, as well as tests used to quantify distortion and cross-check measurement accuracy. Accuracy of the measurement was found to be better than three microns rms for lateral position error measurements.

Zhou, Zengxiang; Sholl, M.; Bebek, C.

2012-01-01

47

Absolute calibration of a CCD camera with twin beams  

E-print Network

We report on the absolute calibration of a CCD camera by exploiting quantum correlation. This novel method exploits a certain number of spatial pairwise quantum-correlated modes produced by spontaneous parametric down-conversion. We develop a measurement model taking into account all the possible sources of losses and noise that are not related to the quantum efficiency, accounting for all the uncertainty contributions, and we reach a relative uncertainty of 0.3% in the low-photon-flux regime. This represents a significant step forward for the characterization of (scientific) CCDs used in the mesoscopic light regime.

I. Ruo-Berchera; A. Meda; I. P. Degiovanni; G. Brida; M. L. Rastello; M. Genovese

2014-05-07

48

Close-range photogrammetry with video cameras  

NASA Technical Reports Server (NTRS)

Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

Burner, A. W.; Snow, W. L.; Goad, W. K.

1985-01-01
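
The abstract above mentions correcting electronic distortions with bilinear (and polynomial) interpolation. Below is a minimal sketch of a bilinear correction fitted from measured control points; the control-point coordinates are invented for illustration, and the NASA authors' actual measurement procedure is not reproduced here.

    import numpy as np

    def fit_bilinear_correction(xy_measured, xy_true):
        """Fit dx, dy = a0 + a1*x + a2*y + a3*x*y by least squares from control
        points, then return a function that applies the correction."""
        x, y = xy_measured[:, 0], xy_measured[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y])
        coeff_x, *_ = np.linalg.lstsq(A, xy_true[:, 0] - x, rcond=None)
        coeff_y, *_ = np.linalg.lstsq(A, xy_true[:, 1] - y, rcond=None)

        def correct(pts):
            px, py = pts[:, 0], pts[:, 1]
            B = np.column_stack([np.ones_like(px), px, py, px * py])
            return np.column_stack([px + B @ coeff_x, py + B @ coeff_y])
        return correct

    # Illustrative (invented) control points: measured vs. true grid corners.
    measured = np.array([[0.0, 0.0], [639.2, 0.4], [0.3, 479.1], [640.1, 480.5]])
    true     = np.array([[0.0, 0.0], [640.0, 0.0], [0.0, 480.0], [640.0, 480.0]])
    correct = fit_bilinear_correction(measured, true)
    print(correct(np.array([[320.0, 240.0]])))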

49

Close-Range Photogrammetry with Video Cameras  

NASA Technical Reports Server (NTRS)

Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

Burner, A. W.; Snow, W. L.; Goad, W. K.

1983-01-01

50

Development of high-speed video cameras  

NASA Astrophysics Data System (ADS)

Presented in this paper is an outline of the R&D activities on high-speed video cameras which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other groups are also briefly reviewed.

Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

2001-04-01

51

Research of fiber position measurement by multi CCD cameras  

NASA Astrophysics Data System (ADS)

The parallel-controlled fiber positioner, as an efficient observation system, has been used in LAMOST for four years and has been proposed for ngCFHT and the rebuilt Mayall telescope. The fiber positioner research group at USTC has designed a new-generation prototype based on close-packed modular robotic positioner mechanisms. The prototype includes about 150 fiber positioning modules plugged into a 1-meter-diameter honeycombed focal plane. Each module has 37 fiber positioners of 12 mm diameter. Furthermore, the new system improves the required accuracy from 40 um in LAMOST to 10 um in MSDESI, which poses a new challenge for measurement. A closed-loop control system is to be used in the new system: the CCD camera captures images of the fiber tip positions across the focal plane, calculates precise position information and feeds it back to the control system. After the positioners have rotated through several loops, the accuracy of all positioners will be confined to less than 10 um. We report our component development and the performance measurement program of the new measuring system using multiple CCD cameras. With stereo vision and image processing methods, we precisely measure the 3-dimensional positions of the fiber tips carried by the fiber positioners. Finally, we present baseline parameters for the fiber positioner measurement as a reference for next-generation survey telescope design.

Zhou, Zengxiang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru; Liu, Zhigang

2014-07-01

52

Advanced High-Definition Video Cameras  

NASA Technical Reports Server (NTRS)

A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 x 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

Glenn, William

2007-01-01

53

Researchers in the US unveil a silicon-based CCD camera that mimics the ...

E-print Network

optics.org News, Aug 7, 2008: Researchers in the US unveil a silicon-based CCD camera that mimics ... "... compared with planar CCD cameras that use simple, single-component imaging lenses." Conventional cameras

Rogers, John A.

54

Laboratory calibration and characterization of video cameras  

NASA Astrophysics Data System (ADS)

Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

1990-08-01

55

Video Analysis with a Web Camera  

ERIC Educational Resources Information Center

Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

Wyrembeck, Edward P.

2009-01-01

56

High Performance Cooled CCD Camera System ALTA U42

E-print Network

ALTA U42 High Performance Cooled CCD Camera System (http://www.ccd.com): 2048 x 2048 array, 13.5 x 13.5 micron pixels; back-illuminated full frame 4-megapixel CCD with exceptionally high quantum efficiency. The standard midband coating has

Kleinfeld, David

57

Photogrammetric Applications of Immersive Video Cameras  

NASA Astrophysics Data System (ADS)

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

Kwiatek, K.; Tokarczyk, R.

2014-05-01

58

High-frame-rate CCD cameras with fast optical shutters for military and medical imaging applications  

Microsoft Academic Search

Los Alamos National Laboratory (LANL) has designed and prototyped high-frame-rate intensified/shuttered charge-coupled device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an interline transfer CCD, the Loral Fairchild CCD-222 with 244 (vertical) x 380 (horizontal) pixels, operated at pixel rates approaching 100 MHz. Initial

Nicholas S. King; Kevin L. Albright; Steven A. Jaramillo; Thomas E. McDonald; George J. Yates; Bojan T. Turko

1994-01-01

59

The in-flight spectroscopic performance of the Swift XRT CCD camera  

NASA Astrophysics Data System (ADS)

The Swift X-ray Telescope (XRT) focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 144 eV FWHM at 6.5 keV. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Loss of temperature control motivated a laboratory program to re-optimize the CCD substrate voltage; we describe the small changes in the CCD response that would result from use of a substrate voltage of 6 V.

Osborne, Julian P.; Beardmore, A. P.; Godet, O.; Abbey, A. F.; Goad, M. R.; Page, K. L.; Wells, A. A.; Angelini, L.; Burrows, D. N.; Campana, S.; Chincarini, G.; Citterio, O.; Cusumano, G.; Giommi, P.; Hill, J. E.; Kennea, J.; LaParola, V.; Mangano, V.; Mineo, T.; Moretti, A.; Nousek, J. A.; Pagani, C.; Perri, M.; Romano, P.; Tagliaferri, G.; Tamburelli, F.

2005-08-01

60

The in-flight spectroscopic performance of the Swift XRT CCD camera  

E-print Network

The Swift X-ray Telescope (XRT) focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 144 eV FWHM at 6.5 keV. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Loss of temperature control motivated a laboratory program to re-optimize the CCD substrate voltage; we describe the small changes in the CCD response that would result from use of a substrate voltage of 6 V.

J. P. Osborne; A. P. Beardmore; O. Godet; A. F. Abbey; M. R. Goad; K. L. Page; A. A. Wells; L Angelini; D. N. Burrows; S. Campana; G. Chincarini; O. Citterio; G. Cusumano; P. Giommi; J. E. Hill; J. Kennea; V. La Parola; V. Mangano; T. Mineo; A. Moretti; J. A. Nousek; C. Pagani; M. Perri; P. Romano; G. Tagliaferri; F. Tamburelli

2005-10-17

61

Synchronization and Rolling Shutter Compensation for Consumer Video Camera Arrays  

E-print Network

... of rolling-shutter cameras to record high-speed video. The camera array is closely spaced and groups ... and the use of a "rolling" shutter, which introduces a temporal shear in the video volume. We present two

Heidrich, Wolfgang

62

Applying CCD Cameras in Stereo Panorama Systems for 3d Environment Reconstruction  

NASA Astrophysics Data System (ADS)

Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment and for geometric measurement within it. For this purpose, a rotating stereo panorama was established using two CCDs with a baseline of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have high geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 meters distance from the camera) can be achieved.

Ashamini, A. Sh.; Varshosaz, M.; Saadatseresht, M.

2012-07-01
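
The roughly 40 mm accuracy at 12 m quoted above can be sanity-checked with the standard stereo depth-error relation dZ ≈ Z² · d(disparity) / (B · f). The baseline and distance below come from the record, but the focal length and disparity precision are assumed values chosen only to illustrate the order of magnitude, not figures from the paper.

    # Stereo depth-error sanity check (assumed focal length and matching precision).
    Z = 12.0           # object distance, m (from the record)
    B = 0.350          # baseline, m (from the record)
    f_px = 2000.0      # assumed focal length, pixels
    d_disp_px = 0.2    # assumed disparity measurement precision, pixels

    dZ = Z**2 * d_disp_px / (B * f_px)
    print(f"expected depth error ~{dZ * 1000:.0f} mm at {Z} m")  # ~41 mm with these assumptions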

63

Virtual Camera Control System for Cinematographic 3D Video Rendering

Microsoft Academic Search

We propose a virtual camera control system that creates attractive videos from 3D models generated with a virtualized reality system. The proposed camera control system helps the user to generate final videos from the 3D model by referring to the grammar of film language. Many kinds of camera shots and principal camera actions are stored in the system as expertise.

Hansung Kim; Ryuuki Sakamoto; Itaru Kitahara; Tomoji Toriyama; Kiyoshi Kogure

2007-01-01

64

A CCD camera for guidance of 100-cm balloon-borne far-infrared telescope  

Microsoft Academic Search

A charge coupled device (CCD) camera using the 488 x 380 element Fairchild CCD 222 imaging device has been developed for guidance of the 100 cm balloon-borne far-infrared telescope. The hardware consists of the imaging device along with its associated optics, clock-generating circuitry, clock drivers, an 8086 microprocessor-based system, and the power supplies. The software processes

S. L. D'Costa; S. K. Ghosh; S. N. Tandon

1991-01-01

65

An unmanned watching system using video cameras  

SciTech Connect

Techniques for detecting intruders at a remote location, such as a power plant or substation, or in an unmanned building at night, are significant in the field of unmanned watching systems. This article describes an unmanned watching system to detect trespassers in real time, applicable both indoors and outdoors, based on image processing. The main part of the proposed system consists of a video camera, an image processor and a microprocessor. Images are input from the video camera to the image processor every 1/60 second, and objects which enter the image are detected by measuring changes of intensity level in selected sensor areas. This article discusses the system configuration and the detection method. Experimental results under a range of environmental conditions are given.

Kaneda, K.; Nakamae, E. (Hiroshima Univ. (Japan)); Takahashi, E. (Tokyo Electric Power Co., Inc. (Japan)); Yazawa, K. (Toko Electric Corp. (JP))

1990-04-01

66

Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras  

SciTech Connect

The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4-12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with >=1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.

Yates, G.J. (Los Alamos National Lab., NM (United States)); Smith, G.W. (Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment); Zagarino, P.; Thomas, M.C. (EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations)

1991-01-01
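
As a simple unit check on the responsivity figure quoted above, the snippet below converts a photon dose into an estimated video signal. It assumes only the ~0.05 V/rad value from the abstract and that the device stays below its quoted >= 1 rad saturation point; it is not part of the original calibration.

    # Radiometric conversion using the responsivity quoted in the record.
    RESPONSIVITY_V_PER_RAD = 0.05     # ~0.05 V/rad (from the abstract)

    def video_signal_volts(dose_rad: float) -> float:
        """Estimated video signal for a given photon dose, assuming the quoted
        responsivity holds and the CCD is below saturation (>= ~1 rad)."""
        return RESPONSIVITY_V_PER_RAD * dose_rad

    print(video_signal_volts(0.5))    # 0.5 rad -> ~0.025 V of video signal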

67

Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras  

SciTech Connect

The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4-12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with >=1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.

Yates, G.J. [Los Alamos National Lab., NM (United States); Smith, G.W. [Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment; Zagarino, P.; Thomas, M.C. [EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations

1991-12-01

68

Development of filter exchangeable 3CCD camera for multispectral imaging acquisition  

NASA Astrophysics Data System (ADS)

There are many methods to acquire multispectral images, but a dynamic band-selective, area-scan multispectral camera has not been developed yet. This research focused on the development of a filter-exchangeable 3CCD camera, modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame and an electric circuit for parallel image signal processing. In addition, firmware and application software have been developed. Remarkable improvements compared to a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation is required to visualize the ray paths inside the prism when redesigning the image splitter. The dimensions of the splitter are then determined by computer simulation, with options of BK7 glass and non-dichroic coating. These properties have been considered in order to obtain full-wavelength rays on all film planes. The image splitter is verified with two line lasers with narrow wavebands. The filter-exchangeable frame is designed to allow swapping bandpass filters without changing the displacement of the image sensors on the film plane. The developed 3CCD camera is evaluated in an application detecting scab and bruise on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.

Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

2012-05-01

69

Photometric Calibration of Consumer Video Cameras  

NASA Technical Reports Server (NTRS)

Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
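
The response-curve idea described above lends itself to a short numerical sketch. The code below is only an illustration of the concept under the assumption of a monotonic response: calibration pairs of known source brightness and integrated frame signal are sorted, and later frames are converted to brightness by interpolation. All names and the toy nonlinearity are hypothetical, not the authors' software.

import numpy as np

def build_response_curve(calib_brightness, calib_signal):
    """Sort calibration points so the curve can be inverted by interpolation."""
    order = np.argsort(calib_signal)
    return np.asarray(calib_signal)[order], np.asarray(calib_brightness)[order]

def brightness_from_frame(frame, response_signal, response_brightness):
    """Integrate a (background-subtracted) frame and look up the input brightness."""
    integrated = float(np.sum(frame))
    return float(np.interp(integrated, response_signal, response_brightness))

# Usage with synthetic data: a saturating (nonlinear) camera response.
true_brightness = np.linspace(0.0, 10.0, 50)
measured_signal = 1e4 * (1.0 - np.exp(-true_brightness / 3.0))   # toy nonlinearity
sig, bri = build_response_curve(true_brightness, measured_signal)
frame = np.full((8, 8), 5e3 / 64.0)                              # integrates to 5e3
print(brightness_from_frame(frame, sig, bri))                    # about 2.1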

Suggs, Robert; Swift, Wesley, Jr.

2007-01-01

70

RS170 to 700 frame per second CCD camera  

Microsoft Academic Search

A versatile new camera, the Los Alamos National Laboratory (LANL) model GY6, is described. It operates at a wide variety of frame rates, from RS-170 to 700 frames per second. The camera operates as an NTSC compatible black and white camera when operating at RS-170 rates. When used for variable high-frame rates, a simple substitution is made of the RS-170 sync/clock generator circuit card with a high speed emitter-coupled logic (ECL) circuit card.

Kevin L. Albright; Nicholas S. King; George J. Yates; Thomas E. McDonald; Bojan T. Turko

1993-01-01

71

CCD-camera system for stereoscopic optical observations of the aurora  

NASA Astrophysics Data System (ADS)

A system of three identical CCD cameras was developed, enabling stereoscopic auroral observations. An image intensifier allows for real-time imaging of auroral arcs with interference or broad-band filters. The combination of small-angle optics with a CCD chip of 756 by 580 pixels provides spatial resolution of auroral small-scale structures down to 20 m. The cameras are controlled by personal computers with integrated global positioning (GPS) modules, enabling time synchronization of the cameras and providing the exact geographical position for the portable cameras. Calibration with a standard light source is the basis for quantitative evaluation of the images by image-processing techniques. The current technical development is the combination with local operating networks (LON) for monitoring camera parameters such as voltage and temperature and for remote control of parameters such as filter positions, mounting tilt angles, and camera gain.

Frey, Harald U.; Lieb, Werner; Bauer, Otto H.; Hoefner, Herwig; Haerendel, Gerhard

1996-11-01

72

Mobile phone camera-based video scanning of paper documents  

E-print Network

Muhammad Muzzamil Luqman, Petra Gomez and Jean-Marc Ogier ({muhammad_muzzamil.luqman,petra.gomez,jean-marc.ogier}@univ-lr.fr, France). Abstract: research on a mobile phone camera-based document image mosaic reconstruction method for video scanning of paper documents.

Paris-Sud XI, Université de

73

Flux Calibration of the ACS CCD Cameras I. CTE Correction  

NASA Astrophysics Data System (ADS)

The flux calibration of HST instruments is normally specified after removal of artifacts such as a decline in charge transfer efficiency (CTE) for CCD detectors and optical throughput degradation. This ISR deals with ACS/WFC CTE losses, which had been considered negligible for bright stars prior to the demise of the ACS CCD channels on 2007 Jan. 27. Following the revival of ACS WFC during the Servicing Mission 4 (SM4) in 2009 May, CTE corrections are now typically several tenths of a percent and should be included, even for our bright standard star observations that utilize a standard reference point which is only 512 rows from the CCD amplifier B readout corner. For such bright standard stars with negligible background signal, a simple correction algorithm with an accuracy of better than 0.1% is derived, which eliminates the need to execute the CTE correction code for the complete image.
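
The report's exact correction formula is not reproduced in the abstract. As a generic illustration only, the sketch below applies the standard multiplicative form F_true = F_obs / (1 - CTI)^N, using the 512 row transfers mentioned above and a hypothetical CTI value chosen to give a loss of a few tenths of a percent.

def correct_cte(observed_flux: float, cti_per_transfer: float, n_transfers: int = 512) -> float:
    """Scale an observed flux up by the charge lost over n_transfers row shifts."""
    cte = 1.0 - cti_per_transfer
    return observed_flux / cte ** n_transfers

# Example: a CTI that produces roughly a 0.3% total loss over 512 rows.
print(correct_cte(1000.0, cti_per_transfer=6e-6, n_transfers=512))   # about 1003.1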

Bohlin, Ralph C.; Anderson, J.

2011-01-01

74

High-speed video recording system using multiple CCD imagers and digital storage  

NASA Astrophysics Data System (ADS)

This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording: exposure timing, image digitization and storage, and sequential playback onto a standard video monitor. The system is capable of recording full screen black and white images with spatial resolution similar to that of standard television, at rates of about 10,000 images per second in pulsed illumination mode. We have designed and built two optical configurations for the imager multiplexing system. The first one involves permanently splitting the subject light into multiple channels and placing a liquid crystal shutter in front of each imager. A prototype with three CCD imagers and shutters based on this configuration has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. The second configuration is more light-efficient in that it routes the entire subject light to each individual imager in sequence by using the liquid crystal cells as selectable binary switches. Despite some operational limitations, this method offers a solution when the available light, if subdivided among all the imagers, would not allow a sufficiently short exposure time.

Racca, Roberto G.; Clements, Reginald M.

1995-05-01

75

An RS-170 to 700 frame per second CCD camera  

SciTech Connect

A versatile new camera, the Los Alamos National Laboratory (LANL) model GY6, is described. It operates at a wide variety of frame rates, from RS-170 to 700 frames per second. The camera operates as an NTSC compatible black and white camera when operating at RS-170 rates. When used for variable high-frame rates, a simple substitution is made of the RS-170 sync/clock generator circuit card with a high speed emitter-coupled logic (ECL) circuit card.

Albright, K.L.; King, N.S.P.; Yates, G.J.; McDonald, T.E. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley Lab., CA (United States)

1993-08-01

76

Automated CCD camera characterization. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics: Student research reports

SciTech Connect

The OMEGA system uses CCD cameras for a broad range of applications. Over 100 video rate CCD cameras are used for such purposes as targeting, aligning, and monitoring areas such as the target chamber, laser bay, and viewing gallery. There are approximately 14 scientific grade CCD cameras on the system which are used to obtain precise photometric results from the laser beam as well as target diagnostics. It is very important that these scientific grade CCDs are properly characterized so that the results received from them can be evaluated appropriately. Currently, characterization is a tedious process done by hand. The operator must manually operate the camera and light source simultaneously. Because more exposures mean more accurate information on the camera, the characterization tests can become very lengthy affairs; sometimes it takes an entire day to complete just a single plot. Characterization requires the testing of many aspects of the camera's operation. Such aspects include the following: variance vs. mean signal level--this should be proportional due to Poisson statistics of the incident photon flux; linearity--the ability of the CCD to produce signals proportional to the light it received; signal-to-noise ratio--the relative magnitude of the signal vs. the uncertainty in that signal; dark current--the amount of noise due to thermal generation of electrons (cooling lowers this noise contribution significantly). These tests, as well as many others, must be conducted in order to properly understand a CCD camera. The goal of this project was to construct an apparatus that could characterize a camera automatically.
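
The variance-versus-mean test described above is the classic photon-transfer measurement, and a minimal sketch of how it yields the camera gain is given below. The pair-differencing trick and the synthetic data are illustrative assumptions, not the apparatus built in the project.

import numpy as np

def gain_from_flat_pairs(flat_pairs, bias_level=0.0):
    """flat_pairs: list of (frame_a, frame_b) flat fields at increasing exposure."""
    means, variances = [], []
    for a, b in flat_pairs:
        a = np.asarray(a, dtype=float) - bias_level
        b = np.asarray(b, dtype=float) - bias_level
        means.append(0.5 * (a.mean() + b.mean()))
        # Differencing the pair removes fixed-pattern noise; var(a-b) = 2*var(shot+read).
        variances.append(np.var(a - b) / 2.0)
    slope, intercept = np.polyfit(means, variances, 1)
    gain = 1.0 / slope                       # e-/ADU, from the Poisson slope
    read_noise_adu = np.sqrt(max(intercept, 0.0))
    return gain, read_noise_adu

# Synthetic check: Poisson flats at a known gain of 2 e-/ADU.
rng = np.random.default_rng(0)
g = 2.0
pairs = []
for level in (500, 2000, 8000):             # mean signal in electrons
    a = rng.poisson(level, (64, 64)) / g
    b = rng.poisson(level, (64, 64)) / g
    pairs.append((a, b))
print(gain_from_flat_pairs(pairs))           # gain should come out near 2.0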

Silbermann, J. [Penfield High School, NY (United States)

1999-03-01

77

A method for measuring modulation transfer function of CCD device in remote camera with grating pattern  

NASA Astrophysics Data System (ADS)

The remote camera developed by our group is the exclusive functional payload of a micro-satellite. Modulation transfer function (MTF) is a direct and accurate parameter for evaluating the system performance of a remote camera, and the MTF of the camera is jointly determined by the MTF of the camera lens and that of its CCD device. The MTF of the camera lens can be tested directly with a commercial optical system testing instrument, but it is essential to measure the MTF of the CCD device accurately before assembling the whole camera, so that the performance of the complete camera can be evaluated in advance. Compared with existing MTF measuring methods, this method using a grating pattern requires less equipment and simpler arithmetic. Only one complete scan of the grating pattern, followed by data processing and interpolation, is needed to obtain the continuous MTF curves of the whole camera and its CCD device. A high-precision optical system testing instrument guarantees the precision of this indirect measuring method. This indirect method of measuring MTF is also of reference value for testing the MTF of other electronic devices and for deriving MTF indirectly from the corresponding CTF.

Chen, Yuheng; Chen, Xinhua; Shen, Weimin

2008-03-01

78

The in-flight spectroscopic performance of the Swift XRT CCD camera during 2006-2007  

NASA Astrophysics Data System (ADS)

The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

Godet, O.; Beardmore, A. P.; Abbey, A. F.; Osborne, J. P.; Page, K. L.; Tyler, L.; Burrows, D. N.; Evans, P.; Starling, R.; Wells, A. A.; Angelini, L.; Campana, S.; Chincarini, G.; Citterio, O.; Cusamano, G.; Giommi, P.; Hill, J. E.; Kennea, J.; LaParola, V.; Mangano, V.; Mineo, T.; Moretti, A.; Nousek, J. A.; Pagani, C.; Perri, M.; Capalbi, M.; Romano, P.; Tagliaferri, G.; Tamburelli, F.

2007-09-01

79

The in-flight spectroscopic performance of the Swift XRT CCD camera during 2006-2007  

E-print Network

The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

O. Godet; A. P. Beardmore; A. F. Abbey; J. P. Osborne; K. L. Page; L. Tyler; D. N. Burrows; P. Evans; R. Starling; A. A. Wells; L. Angelini; S. Campana; G. Chincarini; O. Citterio; G. Cusumano; P. Giommi; J. E. Hill; J. Kennea; V. LaParola; V. Mangano; T. Mineo; A. Moretti; J. A. Nousek; C. Pagani; M. Perri; M. Capalbi; P. Romano; G. Tagliaferri; F. Tamburelli

2007-08-22

80

Auto-measurement system of aerial camera lens' resolution based on orthogonal linear CCD  

NASA Astrophysics Data System (ADS)

The resolution of an aerial camera lens is one of the camera's most important performance indexes, and the measurement and calibration of resolution are important test items in camera maintenance. The traditional method, in which an operator observes the resolution panel of a collimator through a reading microscope and performs some computation, relies on the human eye; it is inefficient, susceptible to operator-dependent factors, and gives unstable results. An auto-measurement system for aerial camera lens resolution, which uses an orthogonal linear CCD sensor as the detector to replace the reading microscope, is introduced. The system measures automatically and shows results in real time. To determine the smallest identifiable diameter on the resolution panel, two orthogonal linear CCDs are laid on the imaging plane of the measured lens, forming four intersection points on the detectors. A coordinate system is defined by the origin of the linear CCDs, and a circle is determined by the four intersection points. To obtain the circle's radius, the image of the resolution panel is first converted into electrical pulse widths, which are sent to the computer through an amplifying circuit, a threshold comparator, and a counter. The smallest circle is then extracted for measurement; the circle extraction uses a wavelet transform, which is localized in both time and frequency and supports multi-scale analysis. Finally, according to the formula for lens resolution, the resolution of the measured lens is obtained. The measurement precision in practical use is analyzed, and the results indicate that precision improves when a linear CCD replaces the reading microscope. Moreover, the system error is determined by the pixel size of the CCD; as CCD technology develops and pixels become smaller, the system error will be reduced further. The auto-measuring system therefore has high practical value and wide application prospects.
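
A compact way to recover the circle defined by the four intersection points on the two orthogonal linear CCDs is an algebraic least-squares circle fit. The sketch below (hypothetical names, standard Kasa-style fit) shows only that geometry step, not the wavelet-based extraction or the resolution formula used in the paper.

import numpy as np

def fit_circle(points):
    """points: iterable of (x, y); returns (cx, cy, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, radius

# Four points lying on a circle of radius 2 centred at (1, -1):
print(fit_circle([(3, -1), (-1, -1), (1, 1), (1, -3)]))   # -> (1.0, -1.0, 2.0)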

Zhao, Yu-liang; Zhang, Yu-ye; Ding, Hong-yi

2010-10-01

81

Research on detecting heterogeneous fibre from cotton based on linear CCD camera  

NASA Astrophysics Data System (ADS)

Heterogeneous (foreign) fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby affect the economic benefit and market competitiveness of the producer. Detecting and eliminating heterogeneous fibres is therefore particularly important for improving cotton processing, raising the quality of cotton textiles, and reducing production costs, and the technology has favorable market value and development prospects. Optical detecting systems are widely used for this purpose. In such a system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to grayscale differences, and if a heterogeneous fibre is found, the computer sends a command to a gas nozzle to eject it. In this paper we adopt a monochrome LED array as a new detecting light source; its flicker, luminous-intensity stability, lumen depreciation, and useful life are all superior to fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, finally adopting a violet LED array as the detecting light source. The overall hardware structure and software design are introduced in this paper.
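
The grayscale-difference step described above can be illustrated with a few lines of Python: line-scan pixels whose grey level deviates from the cotton background by more than a threshold are flagged as candidate foreign fibre. The background level, threshold, and array sizes below are invented for the example.

import numpy as np

def detect_foreign_fibre(line, background_level, threshold=30.0):
    """Return indices of line-scan pixels that deviate strongly from the background."""
    deviation = np.abs(np.asarray(line, dtype=float) - background_level)
    return np.flatnonzero(deviation > threshold)

line = np.full(2048, 200.0)       # bright, uniform cotton background
line[900:915] = 120.0             # darker run simulating a foreign fibre
flagged = detect_foreign_fibre(line, background_level=200.0)
if flagged.size:
    print("candidate fibre between pixels", flagged.min(), "and", flagged.max())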

Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

2009-07-01

82

Evaluating stereoscopic CCD still video imagery for determining object height in forestry applications  

E-print Network

EVALUATING STEREOSCOPIC CCD STILL VIDEO IMAGERY FOR DETERMINING OBJECT HEIGHT IN FORESTRY APPLICATIONS. A Thesis by DENNIS MURRAY JACOBS, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, August 1990. Major Subject: Forestry.

Jacobs, Dennis Murray

2012-06-07

83

Measurement of charge of heavy ions in emulsion using a CCD camera  

Microsoft Academic Search

A system has been developed for semi-automated determination of the charges of heavy ions recorded in nuclear emulsions. The profiles of various heavy ion tracks in emulsion, both accelerator beam ions and fragments of heavy projectiles, were obtained with a CCD camera mounted on a microscope. The dependence of track profiles on illumination, emulsion grain size and density, background in

D. Kudzia; M. L. Cherry; P. Deines-Jones; R. Holynski; A. Olszewski; B. S. Nilsen; K. Sengupta; M. Szarska; A. Trzupek; C. J. Waddington; J. P. Wefel; B. Wilczynska; H. Wilczynski; W. Wolter; B. Wosiek; K. Wozniak

1999-01-01

84

A Multi-Camera Framework for Interactive Video Games  

Microsoft Academic Search

We present a framework that allows for a straightforward development of multi-camera controlled interactive video games. Compared to traditional gaming input devices, cameras provide players with many degrees of freedom and a natural kind of interaction. The use of cameras can even obsolete the need for special clothing or other tracking devices. This partly accounted for the success of the

Tom Cuypers; Cedric Vanaken; Yannick Francken; Frank Van Reeth; Philippe Bekaert

2008-01-01

85

Panoramic Video Capturing and Compressed Domain Virtual Camera Control  

E-print Network

Xinding Sun, Jonathan Foote ({foote, kimber}@pal.xerox.com, Palo Alto, CA 94304). Abstract: A system for capturing panoramic video for applications such as classroom lectures and video conferencing. The proposed method is based on the Fly

California at Santa Barbara, University of

86

Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination  

NASA Astrophysics Data System (ADS)

A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, image-converting streak camera, and CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refraction lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge wire and dense plasma focus experiments.

Goldin, F. J.; Meehan, B. T.; Hagen, E. C.; Wilkins, P. R.

2010-10-01

87

Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination.  

PubMed

A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, image-converting streak camera, and CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refraction lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge wire and dense plasma focus experiments. PMID:21034059

Goldin, F J; Meehan, B T; Hagen, E C; Wilkins, P R

2010-10-01

88

Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras  

NASA Astrophysics Data System (ADS)

The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer scale CCD imager. Detailed ray trace modeling and experimental electro-optical data performance obtained from the curved imager will be presented at the conference.

Swain, Pradyumna; Mark, David

2004-09-01

89

Development of DMD reflection-type CCD camera for phase analysis and shape measurement  

NASA Astrophysics Data System (ADS)

DMD (Digital Micro-mirror Device) is a new device, which has hundreds of thousands of micro-mirrors in one chip. This paper presents results of the development of a camera system based on DMD technology for phase analysis and shape measurement that we call a "DMD reflection-type CCD camera" or "DMD camera". Incorporation of DMD technology enables accurate control of the intensity reaching the imaging detector of a camera. In order to perform pixel-to-pixel correspondence adjustment with high accuracy, we use a moire technique. In addition, we introduce a high-speed controllable DMD operation board and improve the software to control each DMD mirror at high speed. As a result, each DMD mirror works as a high-speed controllable shutter for the corresponding CCD pixel. Furthermore, as an application of the DMD camera, we perform an experiment with a "DMD-type integrated phase-shifting method using correlations," which can analyze the phase distributions of a projected grating from one image taken by the DMD camera. The principles and experimental results under dynamic conditions are shown.

Ri, Shien; Matsunaga, Yasuhiro; Fujigaki, Motoharu; Matui, Toru; Morimoto, Yoshiharu

2005-12-01

90

Station Cameras Capture New Videos of Hurricane Katia  

NASA Video Gallery

Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

91

Fused Six-Camera Video of STS-134 Launch  

NASA Video Gallery

Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

92

DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

93

DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

94

The development of a high-speed 100 fps CCD camera  

SciTech Connect

This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512x512 pixel CCD as its sensor which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second which is either stored directly in a PC or transferred to a custom designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and x-ray photons.
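
The quoted 60 MB/second figure follows directly from the readout parameters in the abstract. The short check below assumes that the 12-bit samples are stored in 16-bit words, which is an assumption on my part rather than something stated above.

pixels = 512 * 512          # sensor format
outputs = 2                 # parallel readout ports
rate_per_output = 15e6      # pixels/s per port
bytes_per_pixel = 2         # 12-bit samples assumed packed into 16-bit words

frame_rate = outputs * rate_per_output / pixels
throughput_mb_s = outputs * rate_per_output * bytes_per_pixel / 1e6
print(f"{frame_rate:.0f} frames/s, {throughput_mb_s:.0f} MB/s")   # ~114 fps before overhead, 60 MB/s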

Hoffberg, M.; Laird, R.; Lenkzsus, F. Liu, Chuande; Rodricks, B. [Argonne National Lab., IL (United States); Gelbart, A. [Rochester Institute of Technology, Rochester, NY (United States)

1996-09-01

95

Measuring high-resolution sky luminance distributions with a CCD camera.  

PubMed

We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between the two datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial and temporal resolution of more than a million pixels and 20 s, respectively. PMID:23478758
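
Since the projection is reported to be nearly equidistant, each pixel can be assigned a sky direction with a couple of lines of code. The sketch below uses an ideal equidistant model with an invented image centre and horizon radius, not the instrument's actual geometric calibration.

import numpy as np

def pixel_to_sky(px, py, cx, cy, radius_horizon):
    """Return (zenith, azimuth) in radians for pixel (px, py) under an ideal equidistant projection."""
    dx, dy = px - cx, py - cy
    r = np.hypot(dx, dy)
    zenith = (r / radius_horizon) * (np.pi / 2.0)   # equidistant: radius proportional to zenith angle
    azimuth = np.arctan2(dy, dx)
    return zenith, azimuth

print(pixel_to_sky(1200.0, 800.0, cx=1000.0, cy=800.0, radius_horizon=950.0))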

Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

2013-03-10

96

Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples.  

PubMed

Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

2014-01-01

97

Development and characterization of a CCD camera system for use on six-inch manipulator systems  

SciTech Connect

The Lawrence Livermore National Laboratory has designed, constructed, and fielded a compact CCD camera system for use on the Six Inch Manipulator (SIM) at the Nova laser facility. The camera system has been designed to directly replace the 35 mm film packages on all active SIM-based diagnostics. The unit's electronic package is constructed for small size and high thermal conductivity using proprietary printed circuit board technology, thus reducing the size of the overall camera and improving its performance when operated within the vacuum environment of the Nova laser target chamber. The camera has been calibrated and found to yield a linear response, with superior dynamic range and signal-to-noise levels as compared to T-Max 3200 optic film, while providing real-time access to the data. Limiting factors related to fielding such devices on Nova will be discussed, in addition to planned improvements of the current design.

Logory, L.M.; Bell, P.M.; Conder, A.D.; Lee, F.D.

1996-05-03

98

Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera  

NASA Astrophysics Data System (ADS)

Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the energy and spatial resolution by a factor of 1.5.
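
For readers who want the bound itself, the general Cramer-Rao statement underlying the analysis is written out below in standard form (the textbook definition, not the paper's detector-specific likelihood model): for an unbiased estimator \hat{\theta} of the interaction parameters \theta (position, energy),

\operatorname{Cov}\!\left(\hat{\theta}\right) \;\succeq\; I(\theta)^{-1},
\qquad
I_{ij}(\theta) \;=\; \mathbb{E}\!\left[
  \frac{\partial \ln p(x;\theta)}{\partial \theta_i}\,
  \frac{\partial \ln p(x;\theta)}{\partial \theta_j}
\right],

so the diagonal elements of the inverse Fisher information matrix give the lower bounds on position and energy variance to which the measured resolutions above are compared.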

Korevaar, Marc A. N.; Goorden, Marlies C.; Beekman, Freek J.

2013-04-01

99

Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera.  

PubMed

Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the energy and spatial resolution by a factor of 1.5. PMID:23552717

Korevaar, Marc A N; Goorden, Marlies C; Beekman, Freek J

2013-04-21

100

Video camera system for locating bullet holes in targets at a ballistics tunnel  

NASA Astrophysics Data System (ADS)

A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

Burner, A. W.; Rummler, D. R.; Goad, W. K.

1990-08-01

101

The ARGOS wavefront sensor pnCCD camera for an ELT: characteristics, limitations and applications  

NASA Astrophysics Data System (ADS)

From low-order to high-order AO, future wave front sensors on ELTs require large, fast, and low-noise detectors with high quantum efficiency and low dark current. While a detector for a high-order Shack-Hartmann WFS does not exist yet, the current CCD technology pushed to its limits already provides several solutions for the ELT AO detector requirements. One of these devices is the new WFS pnCCD camera of ARGOS, the Ground-Layer Adaptive Optics system (GLAO) for LUCIFER at LBT. Indeed, with its 264x264 pixels, 48 μm pixel size and 1 kHz frame rate, this camera provides a technological solution to different needs of the AO systems for ELTs, such as low-order correction as well as possibly higher-order correction using pyramid wavefront sensing. In this contribution, we present the newly developed WFS pnCCD camera of ARGOS and how it fulfills future detector needs of AO on ELTs.

de Xivry, G. Orban; Ihle, S.; Ziegleder, J.; Barl, L.; Hartmann, R.; Rabien, S.; Soltau, H.; Strueder, L.

2011-09-01

102

Performance of a slow-scan CCD camera for macromolecular imaging in a 400 kV electron cryomicroscope  

Microsoft Academic Search

The feasibility and limitations of a 1024 × 1024 slow-scan charge-coupled device (CCD) camera were evaluated for imaging in a 400kV electron cryomicroscope. Catalase crystals and amorphous carbon film were used as test specimens. Using catalase crystals, it was found that the finite (24 μm) pixel size of the slow-scan CCD camera governs the ultimate resolution in the acquired images.

Michael B. Sherman; Jacob Brink; Wah Chiu

1996-01-01

103

Proton radiation damage experiment on P-Channel CCD for an X-ray CCD camera onboard the ASTRO-H satellite  

NASA Astrophysics Data System (ADS)

We report on a proton radiation damage experiment on P-channel CCD newly developed for an X-ray CCD camera onboard the ASTRO-H satellite. The device was exposed up to 10^9 protons cm^-2 at 6.7 MeV. The charge transfer inefficiency (CTI) was measured as a function of radiation dose. In comparison with the CTI currently measured in the CCD camera onboard the Suzaku satellite for 6 years, we confirmed that the new type of P-channel CCD is radiation tolerant enough for space use. We also confirmed that a charge-injection technique and lowering the operating temperature efficiently work to reduce the CTI for our device. A comparison with other P-channel CCD experiments is also discussed. We performed a proton radiation damage experiment on a new P-channel CCD. The device was exposed up to 10^9 protons cm^-2 at 6.7 MeV. We confirmed that it is radiation tolerant enough for space use. We confirmed that a charge-injection technique reduces the CTI. We confirmed that lowering the operating temperature also reduces the CTI.

Mori, Koji; Nishioka, Yusuke; Ohura, Satoshi; Koura, Yoshiaki; Yamauchi, Makoto; Nakajima, Hiroshi; Ueda, Shutaro; Kan, Hiroaki; Anabuki, Naohisa; Nagino, Ryo; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Kohmura, Takayoshi; Ikeda, Shoma; Murakami, Hiroshi; Ozaki, Masanobu; Dotani, Tadayasu; Maeda, Yukie; Sagara, Kenshi

2013-12-01

104

Applications of visible CCD cameras on the Alcator C-Mod tokamak  

E-print Network

C. J. Boswell, J. L. Terry, B. Lipschultz, J. Stillerman. Abstract: remote-head visible charge-coupled device (CCD) cameras are being used on Alcator C-Mod with color framegrabber cards. Two CCD cameras are used typically to generate two-dimensional emissivity

Boswell, Christopher

105

Development of the analog ASIC for multi-channel readout X-ray CCD camera  

NASA Astrophysics Data System (ADS)

We report on the performance of an analog application-specific integrated circuit (ASIC) developed for the front-end electronics of the X-ray CCD camera system onboard the next X-ray astronomical satellite, ASTRO-H. It has four identical channels that simultaneously process the CCD signals. Its on-chip analog-to-digital conversion enables us to construct a CCD camera body that outputs only digital signals. In the front-end electronics test, the chip works properly with a low input noise of ~30 μV at pixel rates below 100 kHz. The power consumption is sufficiently low, ~150 mW/chip. The input signal range of ±20 mV covers the effective energy range of a typical X-ray photon counting CCD (up to 20 keV). The integrated non-linearity is 0.2%, which is similar to those of the conventional CCDs in orbit. We also performed a radiation tolerance test against the total ionizing dose (TID) effect and single event effects. The irradiation test using 60Co and a proton beam showed that the ASIC has sufficient tolerance against TID up to 200 krad, which far exceeds the dose expected during operation in a low-inclination low-earth orbit. Irradiation with Fe ions at a fluence of 5.2×10^8 ions/cm^2 resulted in no single event latchup (SEL), although there were some possible single event upsets. The threshold against SEL is higher than 1.68 MeV cm^2/mg, which is high enough that SEL events should not be a major cause of instrument downtime in orbit.

Nakajima, Hiroshi; Matsuura, Daisuke; Idehara, Toshihiro; Anabuki, Naohisa; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Katayama, Haruyoshi; Kitamura, Hisashi; Uchihori, Yukio

2011-03-01

106

Digital monochrome CCD camera for robust pixel correspondant, data compression, and preprocessing in an integrated PC-based image-processing environment  

NASA Astrophysics Data System (ADS)

This paper describes the development of a compact digital CCD camera which contains image digitization and processing and which interfaces to a personal computer (PC) via a standard enhanced parallel port. Precise digitization of the pixel samples, together with a single-chip FPGA for data processing, forms the main digital stage of the camera before the data are sent to the PC. A compression scheme is applied so that the digital images may be transferred within the existing parallel port bandwidth. The data are decompressed in the PC environment for a real-time display of the video images using purely native processor resources. Frame capture is built into the camera so that a full uncompressed digital image can be sent for special processing.

Arshad, Norhashim M.; Harvey, David M.; Hobson, Clifford A.

1996-12-01

107

Ball lightning observation: an objective video-camera analysis report  

E-print Network

In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

Sello, Stefano; Paganini, Enrico

2011-01-01

108

Ball lightning observation: an objective video-camera analysis report  

E-print Network

In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

Stefano Sello; Paolo Viviani; Enrico Paganini

2011-02-04

109

Image/video deblurring using a hybrid camera  

Microsoft Academic Search

We propose a novel approach to reduce spatially varying motion blur using a hybrid camera system that simultaneously captures high-resolution video at a low frame rate together with low-resolution video at a high frame rate. Our work is inspired by Ben-Ezra and Nayar (3), who introduced the hybrid camera idea for correcting global motion blur for a single still image. We broaden the scope of the problem to address

Yu-wing Tai; Hao Du; Michael S. Brown; Stephen Lin

2008-01-01

110

LAIWO: a new wide-field CCD camera for Wise Observatory  

NASA Astrophysics Data System (ADS)

LAIWO is a new CCD wide-field camera for the 40-inch Ritchey-Chretien telescope at Wise Observatory in Mitzpe Ramon/Israel. The telescope is identical to the 40-inch telescope at Las Campanas Observatory, Chile, which is described in [2]. LAIWO was designed and built at Max-Planck-Institute for Astronomy in Heidelberg, Germany. The scientific aim of the instrument is to detect Jupiter-sized extra-solar planets around I=14-15 magnitude stars with the transit method, which relies on the temporary drop in brightness of the parent star harboring the planet. LAIWO can observe a 1.4 x 1.4 degree field-of-view and has four CCDs with 4096 x 4096 pixels each. The Fairchild Imaging CCDs have a pixel size of 15 microns. Since they are not 2-side buttable, they are arranged with spacings between the chips that is equal to the size of a single CCD minus a small overlap. The CCDs are cooled by liquid nitrogen to a temperature of about -100 °C. The four science CCDs and the guider CCD are mounted on a common cryogenic plate which can be adjusted in three degrees of freedom. Each of these detectors can also be adjusted independently by a similar mechanism. The instrument contains large shutter and filter mechanisms, both designed in a modular way for fast exchange and easy maintenance.

Baumeister, Harald; Afonso, Cristina; Marien, Karl-Heinz; Klein, Ralf

2006-06-01

111

HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.  

PubMed

The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes and it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performances of the cameras for imaging the solar corona. PMID:20428852

Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

2010-07-01

112

Performance of a slow-scan CCD camera for macromolecular imaging in a 400 kV electron cryomicroscope.  

PubMed

The feasibility and limitations of a 1024 x 1024 slow-scan charge-coupled device (CCD) camera were evaluated for imaging in a 400kV electron cryomicroscope. Catalase crystals and amorphous carbon film were used as test specimens. Using catalase crystals, it was found that the finite (24 microns) pixel size of the slow-scan CCD camera governs the ultimate resolution in the acquired images. For instance, spot-scan images of ice-embedded catalase crystals showed resolutions of 8 Å and 4 Å at effective magnifications of 67,000 x and 132,000 x, respectively. Using an amorphous carbon film, the damping effect of the modulation transfer function (MTF) of the slow-scan CCD camera on the specimen's Fourier spectrum relative to that of the photographic film was evaluated. The MTF of the slow-scan CCD camera fell off more rapidly compared to that of the photographic film and reached the value of 0.2 at the Nyquist frequency. Despite this attenuation, the signal-to-noise ratio of the CCD data, as determined from reflections of negatively-stained catalase crystals, was found to decrease to approximately 50% of that of photographic film data. The phases computed from images of the same negatively-stained catalase crystals recorded consecutively on both the slow-scan CCD camera and photographic film were found to be comparable to each other within 12 degrees. Ways of minimizing the effect of the MTF of the slow-scan CCD camera on the acquired images are also presented. PMID:8858867
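
The 8 Å and 4 Å figures can be reproduced from the 24-micron pixel and the effective magnifications alone; the short check below simply applies the Nyquist criterion of two pixels per resolved period.

PIXEL_UM = 24.0   # CCD pixel size quoted above

def nyquist_resolution_angstrom(effective_magnification):
    pixel_at_specimen_A = PIXEL_UM * 1e4 / effective_magnification   # 1 um = 1e4 Angstrom
    return 2.0 * pixel_at_specimen_A                                 # two pixels per period

for mag in (67_000, 132_000):
    print(mag, "x ->", round(nyquist_resolution_angstrom(mag), 1), "Angstrom")
# ~7.2 Angstrom at 67,000x and ~3.6 Angstrom at 132,000x, consistent with the 8 A and 4 A figures.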

Sherman, M B; Brink, J; Chiu, W

1996-04-01

113

CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models  

NASA Astrophysics Data System (ADS)

Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. Near infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebro-vascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We will demonstrate the volumetric tomographic reconstruction results obtained from tissue phantoms; the latter algorithm has great potential for determining and monitoring the effect of anti-stroke therapies.

Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

2011-02-01

114

A toolkit for the characterization of CCD cameras for transmission electron microscopy.  

PubMed

Charge-coupled devices (CCD) are nowadays commonly utilized in transmission electron microscopy (TEM) for applications in life sciences. Direct access to digitized images has revolutionized the use of electron microscopy, sparking developments such as automated collection of tomographic data, focal series, random conical tilt pairs and ultralarge single-particle data sets. Nevertheless, for ultrahigh-resolution work photographic plates are often still preferred. In the ideal case, the quality of the recorded image of a vitrified biological sample would solely be determined by the counting statistics of the limited electron dose the sample can withstand before beam-induced alterations dominate. Unfortunately, the image is degraded by the non-ideal point-spread function of the detector, as a result of a scintillator coupled by fibre optics to a CCD, and the addition of several inherent noise components. Different detector manufacturers provide different types of figures of merit when advertising the quality of their detector. It is hard for most laboratories to verify whether all of the anticipated specifications are met. In this report, a set of algorithms is presented to characterize on-axis slow-scan large-area CCD-based TEM detectors. These tools have been added to a publicly available image-processing toolbox for MATLAB. Three in-house CCD cameras were carefully characterized, yielding, among others, statistics for hot and bad pixels, the modulation transfer function, the conversion factor, the effective gain and the detective quantum efficiency. These statistics will aid data-collection strategy programs and provide prior information for quantitative imaging. The relative performance of the characterized detectors is discussed and a comparison is made with similar detectors that are used in the field of X-ray crystallography. PMID:20057054

Vulovic, M; Rieger, B; van Vliet, L J; Koster, A J; Ravelli, R B G

2010-01-01

115

67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

116

Automated Technology for Video Surveillance Vast numbers of surveillance cameras  

E-print Network

Automated Technology for Video Surveillance Vast numbers of surveillance cameras monitor public in on suspicious things. In combination, real-time detection and intelligent zooming capabilities could immediately recognition and gait recognition. In research funded by the Department of Defense, the National Science

Hill, Wendell T.

117

Characterization of a commercial, front-illuminated interline transfer CCD camera for use as a guide camera on a balloon-borne telescope  

E-print Network

We report results obtained during the characterization of a commercial front-illuminated progressive scan interline transfer CCD camera. We demonstrate that the unmodified camera operates successfully in temperature and pressure conditions (-40C, 4mBar) representative of a high altitude balloon mission. We further demonstrate that the centroid of a well-sampled star can be determined to better than 2% of a pixel, even though the CCD is equipped with a microlens array. This device has been selected for use in a closed-loop star-guiding and tip-tilt correction system in the BIT-STABLE balloon mission.

Clark, Paul; Chang, Herrick L; Galloway, Mathew; Israel, Holger; Jones, Laura L; Li, Lun; Mandic, Milan; Morris, Tim; Netterfield, Barth; Peacock, John; Sharples, Ray; Susca, Sara

2014-01-01

118

Performance of the low light level CCD camera for speckle imaging  

E-print Network

A new generation CCD detector called low light level CCD (L3CCD) that performs like an intensified CCD without incorporating a micro channel plate (MCP) for light amplification was procured and tested. A series of short exposure images with millisecond integration time has been obtained. The L3CCD is cooled to about -80 °C by Peltier cooling.

S. K. Saha; V. Chinnappan

2002-09-20

119

Designing a Video Data Management System for Monitoring Cameras with Intuitive Interface  

Microsoft Academic Search

In case of emergency, we need to grasp the situation and make correct assessment quickly. The video data taken from monitoring cameras are important information in the emergent situation. In our research, we built a support system for identifying video data of monitoring cameras. The system collects video data from the net-cameras through the Internet and deals with them as

Yiqun Wang; Yoshinori Hijikata; Shogo Nishida

2004-01-01

120

Free-Viewpoint Video from Depth Cameras Alexander Bogomjakov Craig Gotsman Marcus Magnor  

E-print Network

Depth cameras provide color and depth information per pixel at video rates; novel views can be rendered from those acquired by the physical cameras. A free-viewpoint video system then enables the user

Gotsman, Craig

121

Performance of front-end mixed-signal ASIC for onboard CCD cameras  

NASA Astrophysics Data System (ADS)

We report on the development status of the readout ASIC for an onboard X-ray CCD camera. Quick, low-noise readout is essential for pile-up-free imaging spectroscopy with future highly sensitive telescopes. The dedicated ASIC for ASTRO-H/SXI has sufficient noise performance only at the slow pixel rate of 68 kHz. We have therefore been developing an upgraded ASIC with fourth-order ΔΣ modulators. Raising the order of the modulator allows the CCD signals to be oversampled fewer times, so that the pixel rate can be increased without degrading the noise performance. The digitized pulse height is a serial bit stream that is decoded with a decimation filter, whose weighting coefficients are optimized by simulation to maximize the signal-to-noise ratio. We present performance figures such as the input equivalent noise (IEN), gain, and effective signal range. The digitized pulse-height data were successfully obtained in the first functional test at pixel rates up to 625 kHz. The IEN is almost the same as that obtained with the chip for ASTRO-H/SXI. The residual from the gain function is about 0.1%, better than that of the conventional ASIC by a factor of two. Assuming the CCD gain is the same as that for ASTRO-H, the effective range is 30 keV at the maximum gain setting; by changing the gain, the ASIC can handle signal charges of up to 100 ke-. These results will be fed back into the optimization of the pulse-height decoding filter.
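
The record does not give the filter itself, only that a weighted decimation filter turns the ΔΣ bit stream into a pulse height. A minimal sketch of that decoding step, assuming the SNR-optimized weight vector has already been derived offline (for example by the simulation mentioned above), might look like this; the function name and argument layout are illustrative only.

```python
import numpy as np

def decode_pulse_height(bitstream, weights):
    """Decode one pixel's Delta-Sigma modulator output into a pulse height.

    bitstream : 1-D array of modulator bits (0/1) for one pixel sample.
    weights   : decimation-filter coefficients of the same length, assumed to
                have been optimized beforehand to maximize the SNR.
    """
    bits = np.asarray(bitstream, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, bits))
```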

Nakajima, Hiroshi; Inoue, Shota; Nagino, Ryo; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu

2014-07-01

122

A reflectance model for non-contact mapping of venous oxygen saturation using a CCD camera  

NASA Astrophysics Data System (ADS)

A method of non-contact mapping of venous oxygen saturation (SvO2) is presented. A CCD camera is used to image skin tissue illuminated alternately by a red (660 nm) and an infrared (800 nm) LED light source. Low cuff pressures of 30-40 mmHg are applied to induce a venous blood volume change with negligible change in the arterial blood volume. A hybrid model combining the Beer-Lambert law and the light diffusion model is developed and used to convert the change in the light intensity to the change in skin tissue absorption coefficient. A simulation study incorporating the full light diffusion model is used to verify the hybrid model and to correct a calculation bias. SvO2 in the fingers, palm, and forearm for five volunteers are presented and compared with results in the published literature. Two-dimensional maps of venous oxygen saturation are given for the three anatomical regions.
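
As an illustration of the final step only: once the cuff-induced changes in the absorption coefficient at the two wavelengths have been recovered, a saturation value follows from a small linear solve. The extinction coefficients below are approximate tabulated literature values and the function is a generic sketch, not the paper's hybrid Beer-Lambert/diffusion model.

```python
import numpy as np

# Approximate molar extinction coefficients [1/(cm*M)] near 660 nm and 800 nm
# (tabulated literature values; assumptions for illustration only).
EPS = {660: {"HbO2": 320.0, "Hb": 3227.0},
       800: {"HbO2": 816.0, "Hb": 762.0}}

def svo2_from_absorption_change(dmu_660, dmu_800):
    """Estimate venous oxygen saturation from cuff-induced changes in the
    tissue absorption coefficient [1/cm] at 660 nm and 800 nm."""
    A = np.log(10) * np.array([[EPS[660]["HbO2"], EPS[660]["Hb"]],
                               [EPS[800]["HbO2"], EPS[800]["Hb"]]])
    d_hbo2, d_hb = np.linalg.solve(A, [dmu_660, dmu_800])
    return d_hbo2 / (d_hbo2 + d_hb)
```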

Li, Jun; Dunmire, Barbrina; Beach, Kirk W.; Leotta, Daniel F.

2013-11-01

123

Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera  

NASA Astrophysics Data System (ADS)

"ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

2005-12-01

124

Radiometric calibration of frame transfer CCD camera with uniform source system  

NASA Astrophysics Data System (ADS)

This paper presents a radiometric calibration method based on the visibility function and a uniform source system. The uniform source system mainly comprises an integrating sphere and a monitoring silicon detector. The current of the silicon detector, fitted with a visibility-function filter, corresponds to the luminance at the exit port of the integrating sphere through transfer from a standard luminance meter. The radiance at the camera entrance pupil is calculated for different solar zenith angles and Earth surface albedos with the MODTRAN atmospheric code. To simplify the calibration process, the radiance at the entrance pupil is integrated over the visibility function. The shift smear of the frame-transfer CCD is removed in the radiometric calibration, and an amending ratio factor is introduced in the retrieval method. An imaging experiment verifies the reliability of the calibration method and retrieves a good-quality image.
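
A rough sketch of the two numerical steps implied above, integrating the pupil radiance over the visibility function and fitting a linear response across integrating-sphere levels, is given below. The array names and the simple gain/offset model are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def visibility_weighted_radiance(wavelength_nm, spectral_radiance, visibility):
    """Band-integrate a spectral radiance weighted by the visibility function."""
    return np.trapz(spectral_radiance * visibility, wavelength_nm)

def fit_radiometric_response(integrated_radiance, mean_dn):
    """Fit DN = gain * L + offset over several integrating-sphere levels."""
    gain, offset = np.polyfit(integrated_radiance, mean_dn, 1)
    return gain, offset
```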

Zhou, Jiankang; Shi, Rongbao; Chen, Yuheng; Zhou, Yuying; Shen, Weimin

2010-08-01

125

Advantages of the CCD camera measurements for profile and wear of cutting tools  

NASA Astrophysics Data System (ADS)

In this paper we present an evaluation study whose conclusions point to two main directions for our research: on the one hand, the measurement of fixed, stationary workpieces; on the other, the geometrical measurement of moving tools. The first problem appears to be largely solved for general cases, but according to the relevant literature the second is not yet completely worked out. The monitoring of tool wear and the determination of geometrical parameters (mainly for gear-generating tools) are not yet widespread, especially where optical parameters influence the evaluation procedure (e.g. the examination of grinding-wheel profiles). We describe the development of a process for the practical application of measurement techniques based on image-processing CCD cameras to the wear criteria of different cutting tools (drilling tools, turning tools), and we have written a program for measuring tool profiles and cutting-tool wear.

Varga, G.; Balajti, Z.; Dudás, I.

2005-01-01

126

Imaging of blood vessels with CCD-camera based three-dimensional photoacoustic tomography  

NASA Astrophysics Data System (ADS)

An optical phase-contrast full-field detection setup in combination with a CCD camera is presented to record acoustic fields for real-time projection and fast three-dimensional imaging. When projection images of the wave pattern around the imaging object are recorded, the three-dimensional photoacoustic imaging problem is reduced to a set of two-dimensional reconstructions and the measurement setup requires only a single axis of rotation. Using a 10 Hz pulsed laser system for photoacoustic excitation, a three-dimensional image can be obtained in less than 1 min. The sensitivity and resolution of the detection system were estimated experimentally to be 5 kPa mm and 75 µm, respectively. Experiments on biological samples show the applicability of this technique to the imaging of blood vessel distributions.

Nuster, Robert; Slezak, Paul; Paltauf, Guenther

2014-03-01

127

A real-time single-camera, stereoscopic video device  

NASA Astrophysics Data System (ADS)

This patent application discloses a real-time, single-camera, stereoscopic video imaging device, for use with a standard 60 Hz camera and monitor, that is being developed. The device uses a single objective lens to focus disparate views of the object at the focal plane of the lens. Each view is represented by a set of parallel rays emanating from the object at a specific angle, and the lens focuses these parallel rays to a single point at the focal plane. The views are then shuttered at the focal plane using a liquid-crystal device (LCD) shutter such that one view at a time is passed to the camera. The camera then transmits alternating video fields (individual TV images) to the monitor, such that alternate fields display stereoscopically related views of the object being imaged. The user views the monitor using off-the-shelf LCD stereo glasses, modified to allow synchronization with the standard field rate of the camera and monitor. The glasses shutter the light alternately to each eye so that the left eye views the left-hand image and the right eye views the right-hand image. The resulting 3-D image is independent of the user's viewing angle or distance from the monitor.

Converse, Blake L.

1994-12-01

128

Front- vs back-illuminated CCD cameras for photometric surveys: a ...

E-print Network

Front- vs back-illuminated CCD ... the CCD electrodes can be overcome using a gaussian PSF (Point Spread Function) of full width half maximum ... illuminated CCD through a Monte-Carlo simulation. Both cameras give the same results for a PSF full width half

Paris-Sud XI, Université de

129

Front- vs back-illuminated CCD cameras for photometric surveys: a ...

E-print Network

Front- vs back-illuminated CCD ... the CCD electrodes can be overcome using a gaussian PSF (Point Spread Function) of full width half maximum ... illuminated CCD through a Monte-Carlo simulation. Both cameras give the same results for a PSF full width half

Recanati, Catherine

130

A new tubeless nanosecond streak camera based on optical deflection and direct CCD imaging  

SciTech Connect

A new optically deflected streak camera with nanosecond-range resolution, superior imaging quality, high signal detectability, and large-format recording has been conceived and developed. It consists of an optomechanical deflector that sweeps the line-shaped image of spatially distributed, time-varying signals across the sensing surface of a cooled scientific two-dimensional CCD array with slow-readout driving electronics, a lens assembly, and a desktop computer for prompt digital data acquisition and processing. Its development draws on the synergy of modern technologies in sensors, optical deflectors, optics, and microcomputers. With laser light as the signal carrier, the deflecting optics produces near-diffraction-limited streak images resolving to a single pixel size of 25 µm. A 1k x 1k-pixel array can thus provide a record of 1,000 digital data points along each spatial or temporal axis. Since only one photon-to-electron conversion exists in the entire signal recording path, the camera responds linearly to the incident light over a wide dynamic range in excess of 10^4:1. Various image deflection techniques are assessed for imaging fidelity, deflection speed, and capacity for external triggering. Innovative multiple-pass deflection methods for the optomechanical deflector have been conceived and developed to attain multi-fold amplification of the optical scanning speed across the CCD surface at a given angular deflector speed. Without significantly compromising imaging quality or flux throughput efficiency, these optical methods enable a sub-10 ns/pixel streak speed with the deflector moving benignly at 500 radians/second, or equivalently 80 revolutions/second. Test results of the prototype performance are summarized, including a spatial resolution of 10 lp/mm at 65% CTF and a temporal resolution of 11.4 ns at 3.8 ns/pixel.
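
The quoted numbers make the case for multi-pass deflection easy to check. The short calculation below only restates that arithmetic; the factor of two for a reflective deflector and the candidate pass counts are assumptions for illustration, since the record does not spell out the geometry.

```python
# Back-of-envelope check of the multi-pass deflection argument.
pixel_pitch = 25e-6          # m, single pixel size
dwell_per_pixel = 10e-9      # s, sub-10 ns/pixel streak-speed target
sweep_speed = pixel_pitch / dwell_per_pixel       # 2500 m/s required at the CCD

omega_mech = 500.0                                # rad/s (80 rev/s) deflector speed
omega_optical = 2.0 * omega_mech                  # a mirror doubles the beam's angular rate

for n_pass in (1, 5, 10):                         # assumed multi-pass amplification factors
    lever_arm = sweep_speed / (n_pass * omega_optical)
    print(f"{n_pass:2d} pass(es): required lever arm ~ {lever_arm:.2f} m")
# A single pass would need a ~2.5 m throw; ~10 passes brings it to a bench-top ~0.25 m.
```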

Lai, C.C.

1992-12-01

132

In-flight Video Captured by External Tank Camera System  

NASA Technical Reports Server (NTRS)

In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank (ET) Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

2005-01-01

133

Real-time synchronous CCD camera observation and reflectance measurement of evaporation-induced polystyrene colloidal self-assembly.  

PubMed

A new monitoring technique, which combines real-time in-situ CCD camera observation and reflectance spectra measurement, has been developed to study the growing and drying processes of evaporation-induced self-assembly (EISA). Evolutions of the reflectance spectrum and CCD camera images both reveal that the entire process of polystyrene (PS) EISA contains three stages: crack-initiation stage (T1), crack-propagation stage (T2), and crack-remained stage (T3). A new phenomenon, the red-shift of stop-band, is observed when the crack begins to propagate in the monitored window of CCD camera. Deformation of colloidal spheres, which mainly results in the increase of volume fraction of spheres, is applied to explain the phenomenon. Moreover, the modified scalar wave approximation (SWA) is utilized to analyze the reflectance spectra, and the fitting results are in good agreement with the evolution of CCD camera images. This new monitoring technique and the analysis method provide a good way to get insight into the growing and drying processes of PS colloidal self-assembly, especially the crack propagation. PMID:24650361

Lin, Dongfeng; Wang, Jinze; Yang, Lei; Luo, Yanhong; Li, Dongmei; Meng, Qingbo

2014-04-15

134

Risk mitigation process for utilization of commercial off-the-shelf (COTS) parts in CCD camera for military applications  

Microsoft Academic Search

This paper presents the lessons learned during the design and development of a high performance cooled CCD camera for military applications utilizing common commercial off the shelf (COTS) parts. Our experience showed that concurrent evaluation and testing of high risk COTS must be performed to assess their performance over the required temperature range and other special product requirements such as

Anees Ahmad; Scott Batcheldor; Steven C. Cannon; Thomas E. Roberts

2002-01-01

135

Masking a CCD camera allows multichord charge exchange spectroscopy measurements at high speed on the DIII-D tokamak.  

PubMed

Charge exchange spectroscopy is one of the standard plasma diagnostic techniques used in tokamak research to determine ion temperature, rotation speed, particle density, and radial electric field. Configuring a charge coupled device (CCD) camera to serve as a detector in such a system requires a trade-off between the competing desires to detect light from as many independent spatial views as possible while still obtaining the best possible time resolution. High time resolution is essential, for example, for studying transient phenomena such as edge localized modes. By installing a mask in front of a camera with a 1024 × 1024 pixel CCD chip, we are able to acquire spectra from eight separate views while still achieving a minimum time resolution of 0.2 ms. The mask separates the light from the eight spectra, preventing spatial and temporal cross talk. A key part of the design was devising a compact translation stage which attaches to the front of the camera and allows adjustment of the position of the mask openings relative to the CCD surface. The stage is thin enough to fit into the restricted space between the CCD camera and the spectrometer endplate. PMID:21361580

Meyer, O; Burrell, K H; Chavez, J A; Kaplan, D H; Chrystal, C; Pablant, N A; Solomon, W M

2011-02-01

136

Using a digital video camera to examine coupled oscillations  

NASA Astrophysics Data System (ADS)

In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
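
The record only states that Fourier analysis in Excel was used to find the normal modes. An equivalent sketch in Python, assuming the sensor and video tracking deliver evenly sampled time series of each coordinate, could be:

```python
import numpy as np

def mode_spectrum(t, x):
    """Amplitude spectrum of one sampled coordinate (vertical displacement or
    rotation angle); the two dominant, well-separated peaks mark the
    normal-mode frequencies of the Wilberforce pendulum."""
    dt = t[1] - t[0]                      # assumes uniform sampling
    freqs = np.fft.rfftfreq(len(x), dt)
    amplitude = np.abs(np.fft.rfft(x - np.mean(x))) / len(x)
    return freqs, amplitude
```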

Greczylo, T.; Debowska, E.

2002-07-01

137

Advances in low-light-level video imaging  

Microsoft Academic Search

Video imaging under low light level conditions necessitates a light amplification device to overcome the inherent lack of sensitivity of today's CCD video camera. Traditionally the microchannel plate image intensifier developed for military night vision goggles has been employed in front of a CCD camera to achieve low light level video imaging. Until recently the variety of these devices has

Richard A. Sturz

1995-01-01

138

Status of the CCD camera for the eROSITA space telescope  

NASA Astrophysics Data System (ADS)

The approved German X-ray telescope eROSITA (extended ROentgen Survey with an Imaging Telescope Array) is the core instrument on the Russian Spektrum-Roentgen-Gamma (SRG) mission. After satellite launch to Lagrangian point L2 in the near future, eROSITA will perform a survey of the entire X-ray sky. In the soft band (0.5 keV - 2 keV), it will be about 30 times more sensitive than ROSAT, while in the hard band (2 keV - 8 keV) it will provide the first complete imaging survey of the sky. The design driving science is the detection of 100,000 clusters of galaxies up to redshift z ~ 1.3 in order to study the large scale structure in the Universe and test cosmological models including Dark Energy. Detection of single X-ray photons with information about their energy, arrival angle and time is accomplished by an array of seven identical and independent PNCCD cameras. Each camera is assigned to a dedicated mirror system of Wolter-I type. The key component of the camera is a 5 cm × 3 cm large, back-illuminated, 450 µm thick and fully depleted frame store PNCCD chip. It is a further development of the sensor type which has been in operation aboard the XMM-Newton satellite since 1999. Development and production of the CCDs for the eROSITA project were performed in the semiconductor laboratory of the Max-Planck-Institutes for Physics and Extraterrestrial Physics, the MPI Halbleiterlabor. By means of a unique so-called 'cold-chuck probe station', we have characterized the performance of each PNCCD sensor on chip-level. Various tests were carried out for a detailed characterization of the CCD and its custom-made analog readout ASIC. This includes in particular the evaluation of the optimum detector operating conditions in terms of operating sequence, supply voltages and operating temperature in order to achieve optimum performance. In the course of the eROSITA camera development, an engineering model of the eROSITA flight detector was assembled and has been used for tests since 2010. Based on these results and on the extensive tests with lab model detectors, the design of the front-end electronics has meanwhile been finalized for the flight cameras. Furthermore, the specifications for the other supply and control electronics were settled precisely on the basis of the experimental tests.

Meidinger, Norbert; Andritschke, Robert; Elbs, Johannes; Granato, Stefanie; Hälker, Olaf; Hartner, Gisela; Herrmann, Sven; Miessner, Danilo; Pietschner, Daniel; Predehl, Peter; Reiffers, Jonas; Rommerskirchen, Tanja; Schmaler, Gabriele; Strüder, Lothar; Tiedemann, Lars

2011-09-01

139

Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.  

PubMed

The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can reveal remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and an automatic technique is therefore highly desirable. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast image acquisition [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than the 4096×4096 pixels of the SS-CCD camera; however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that each image sharpness can be determined with a low fitting error. As a countermeasure against the lower resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correction of the image position took one second, so the total correction time was seven seconds, an order of magnitude shorter than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image, and we can obtain a tilt series of 61 images within 30 minutes. Accuracy and repeatability were good enough for practical use (Figure 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. Fig. 1: Objective lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte, thickness: 2 µm, magnification: 4k, acc. voltage: 2 MV). The tilt angle range is ±60 degrees with a 2-degree step. Two series were acquired in the same area; both data sets were almost the same, and the deviation was smaller than the minimum manual step, so the autofocus worked well. We also developed computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize the 3D data semi-automatically [5,6]. If this auto-acquisition system is used with the IMOD reconstruction software [7] and the HawkC software, we will be able to do on-line UHVEM tomography. The system could help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. PMID:25359822
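
A compact way to picture the AFS step described above is the sketch below: several defocused images are scored for sharpness and a quasi-Gaussian is fitted to locate the best focus. The gradient-based sharpness metric and the exact fitting function are assumptions; the paper's definitions may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def sharpness(image):
    """Simple gradient-energy sharpness score for one defocused image."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def best_focus(defocus_values, images):
    """Fit sharpness vs. defocus with a quasi-Gaussian and return its peak."""
    defocus_values = np.asarray(defocus_values, dtype=float)
    scores = np.array([sharpness(im) for im in images])
    quasi_gauss = lambda x, a, x0, w, c: a * np.exp(-((x - x0) / w) ** 2) + c
    p0 = [scores.max() - scores.min(), defocus_values[np.argmax(scores)],
          np.ptp(defocus_values) / 2.0, scores.min()]
    popt, _ = curve_fit(quasi_gauss, defocus_values, scores, p0=p0)
    return popt[1]           # defocus value giving maximum sharpness
```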

Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

2014-11-01

140

OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis  

NASA Technical Reports Server (NTRS)

The HST/Wide Field Camera (WFC) 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. At the nominal operating temperature of -83C, the QEH feature contrast was typically 0.1-0.2% or less. The behavior was replicated using flight spare detectors. A visible light flat-field (540nm) with a several times full-well signal level can pin the detectors at both optical (600nm) and near-UV (230nm) wavelengths, suppressing the QEH behavior. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. The HST/Wide Field Camera 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence in a small percentage of flat-field images of a bowtie-shaped contrast that spanned the width of each chip. At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly-related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at a constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high signal flat-field or a series of lower signal flats; a visible light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. A preliminary estimate of the decay timescale for one detector is that a drop of 0.1-0.2% occurs over a ten day period, indicating that relatively infrequent cal lamp exposures can mitigate the behavior to extremely low levels.

Collins, Nick

2009-01-01

141

Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor  

NASA Astrophysics Data System (ADS)

We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of the charge-coupled device (CCD) drive circuits, the image processing circuits, and the power supply circuits. The CCD drive circuits convert the TTL (transistor-transistor logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals into digital signals for subsequent processing; the TTL timing, auto-exposure control, CFA demosaicing, and gamma correction are all accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality: power noise affects image quality directly, and we reduce it by hardware means, which proves very effective. In this system the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera performs very well in this mode. Because traditional auto-exposure control algorithms are slow to reach a proper exposure level, it is necessary to develop a fast auto-exposure control method, and we present a new auto-exposure algorithm suited to high-speed cameras. Colour demosaicing is critical for digital cameras because it converts the Bayer sensor mosaic output into a full-colour image, which determines the output image quality of the camera. Complex algorithms can achieve high quality but cannot be implemented in hardware, so a low-complexity demosaicing method is presented that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of the paper.
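
The record does not detail the fast auto-exposure algorithm, so the sketch below is only a generic mean-brightness controller of the kind such a camera might use; the target level, damping exponent, and exposure limits are made-up parameters, not values from the paper.

```python
def update_exposure(exposure_s, mean_level, target=0.45, damping=0.8,
                    exp_min=20e-6, exp_max=4e-3):
    """One step of a simple mean-brightness auto-exposure loop.

    exposure_s : current exposure time in seconds.
    mean_level : mean image brightness normalized to [0, 1].
    Returns the next exposure time, moved toward the target brightness.
    A damping exponent of 1.0 would jump to the estimate in a single frame.
    """
    ratio = target / max(mean_level, 1e-6)
    new_exposure = exposure_s * ratio ** damping
    return min(max(new_exposure, exp_min), exp_max)
```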

Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

2013-08-01

142

Video summarization based on camera motion and a subjective evaluation method  

E-print Network

A method of video summarization based on camera motion is presented. It consists in selecting frames according to the succession ... summaries more generally. Subjects were asked to watch a video and to create a summary manually. From

Paris-Sud XI, Université de

143

A dual charge-coupled device /CCD/, astronomical spectrometer and direct imaging camera. I - Optical and detector systems  

NASA Technical Reports Server (NTRS)

The MASCOT (MIT Astronomical Spectrometer/Camera for Optical Telescopes), an instrument capable of simultaneously performing both direct imaging and spectrometry of faint objects, is examined. An optical layout is given of the instrument which uses two CCD's mounted on the same temperature regulated detector block. Two sources of noise on the signal are discussed: (1) the CCD readout noise, which results in a constant uncertainty in the number of electrons collected from each pixel; and (2) the photon counting noise. The sensitivity of the device is limited by the sky brightness, the overall quantum efficiency, the resolution, and the readout noise of the CCD. Therefore, total system efficiency is calculated at about 15%.
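
The sensitivity argument above is the usual CCD signal-to-noise budget. A generic version is sketched below; it is not taken from the paper, and the function name, argument layout, and optional dark-current term are illustrative assumptions.

```python
import numpy as np

def ccd_snr(source_e, sky_e_per_pix, n_pix, read_noise_e, dark_e_per_pix=0.0):
    """Signal-to-noise ratio for an object spread over n_pix pixels, combining
    photon-counting noise from source, sky and dark current with readout noise
    (all quantities in electrons)."""
    variance = source_e + n_pix * (sky_e_per_pix + dark_e_per_pix
                                   + read_noise_e ** 2)
    return source_e / np.sqrt(variance)
```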

Meyer, S. S.; Ricker, G. R.

1980-01-01

144

Non-mydriatic, wide field, fundus video camera  

NASA Astrophysics Data System (ADS)

We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

2014-02-01

145

Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera  

E-print Network

We propose and demonstrate experimentally a new method based on spatial entanglement for the absolute calibration of an analog detector. The idea consists in measuring the sub-shot-noise intensity correlation between two branches of parametric down-conversion, containing many pairwise-correlated spatial modes. We calibrate a scientific CCD camera, and a preliminary evaluation of the statistical uncertainty indicates the metrological interest of the method.
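
In the idealized limit of perfectly correlated, equally efficient branches, the detection efficiency follows directly from the measured noise reduction factor. The sketch below shows only that simplified relation, not the paper's full treatment, which also accounts for electronic noise, unbalanced detection, and the analog gain of the CCD; the function name and inputs are assumptions.

```python
import numpy as np

def efficiency_estimate(n1, n2):
    """Idealized twin-beam estimate of the overall detection efficiency.

    n1, n2 : arrays of photon numbers measured frame by frame in the two
             correlated branches of the parametric down-conversion.
    For ideal pairwise correlation, var(N1 - N2) / <N1 + N2> = 1 - eta.
    """
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    noise_reduction = np.var(n1 - n2) / np.mean(n1 + n2)
    return 1.0 - noise_reduction
```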

Giorgio Brida; Ivo Pietro Degiovanni; Marco Genovese; Maria Luisa Rastello; Ivano Ruo-Berchera

2010-05-17

146

MOA-cam3: a wide-field mosaic CCD camera for a gravitational microlensing survey in New Zealand  

E-print Network

We have developed a wide-field mosaic CCD camera, MOA-cam3, mounted at the prime focus of the Microlensing Observations in Astrophysics (MOA) 1.8-m telescope. The camera consists of ten E2V CCD4482 chips, each having 2kx4k pixels, and covers a 2.2 deg^2 field of view with a single exposure. The optical system is well optimized to realize uniform image quality over this wide field. The chips are constantly cooled by a cryocooler at -80C, at which temperature dark current noise is negligible for a typical 1-3 minute exposure. The CCD output charge is converted to a 16-bit digital signal by the GenIII system (Astronomical Research Cameras Inc.) and readout is within 25 seconds. Readout noise of 2--3 ADU (rms) is also negligible. We prepared a wide-band red filter for an effective microlensing survey and also Bessell V, I filters for standard astronomical studies. Microlensing studies have entered into a new era, which requires more statistics, and more rapid alerts to catch exotic light curves. Our new system is a powerful tool to realize both these requirements.

T. Sako; T. Sekiguchi; M. Sasaki; K. Okajima; F. Abe; I. A. Bond; J. B. Hearnshaw; Y. Itow; K. Kamiya; P. M. Kilmartin; K. Masuda; Y. Matsubara; Y. Muraki; N. J. Rattenbury; D. J. Sullivan; T. Sumi; P. Tristram; T. Yanagisawa; P. C. M. Yock

2008-04-04

147

Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range  

NASA Astrophysics Data System (ADS)

The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 µm from the current 24 µm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 µm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 µm and 3.9±0.1 µm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 µm and 3.0 µm at 530 eV and 680 eV, well below the resolution limit of 5 µm required to improve the spectral resolution by a factor of 2.
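
Of the various centroiding techniques mentioned, the simplest baseline is an intensity-weighted centroid over the pixels of a split event; the sketch below shows that approach only. The patch size and the assumption of a background-subtracted input are illustrative, and the paper evaluates more sophisticated variants.

```python
import numpy as np

def event_centroid(patch, top_left):
    """Sub-pixel position of an X-ray event from its charge patch.

    patch    : small 2-D array (e.g. 3x3 pixels) centred on the local maximum,
               background-subtracted so that only event charge remains.
    top_left : (row, col) of the patch's first pixel in CCD coordinates.
    """
    patch = np.clip(np.asarray(patch, dtype=float), 0.0, None)
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (top_left[0] + (rows * patch).sum() / total,
            top_left[1] + (cols * patch).sum() / total)
```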

Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

2013-12-01

148

Classification of volcanic ash particles from Sakurajima volcano using CCD camera image and cluster analysis  

NASA Astrophysics Data System (ADS)

Quantitative and speedy characterization of volcanic ash particles is needed for petrologic monitoring of an ongoing eruption. We have developed a new, simple system using CCD camera images to quantitatively characterize ash properties and have applied it to volcanic ash collected at Sakurajima. Our method characterizes volcanic ash particles by 1) apparent luminance through RGB filters and 2) a quasi-fractal dimension of the particle shape. Using a monochromatic CCD camera (Starshoot by Orion Co. LTD.) attached to a stereoscopic microscope, we capture digital images of ash particles placed on a glass plate above white paper or a polarizing plate. Images of 1390 x 1080 pixels are taken through three color filters (red, green and blue) under incident light and under light transmitted through the polarizing plate. The brightness of the light sources is held constant, and luminance is calibrated against white and black papers. About fifteen ash particles are set on the plate at a time, and their images are saved in bitmap format. We first extract the outlines of the particles from the image taken under transmitted light through the polarizing plate. The luminance for each color is then represented by 256 tones at each pixel within a particle, and the average and its standard deviation are calculated for each ash particle. We also measure the quasi-fractal dimension (qfd) of the ash particles by box counting: we count the numbers of 1x1-pixel and 128x128-pixel boxes that cover the area of the ash particle, and the qfd is estimated as the ratio of the former number to the latter. These parameters are calculated using the software R. We characterize volcanic ash from the Showa crater of Sakurajima collected on two days (Feb 09, 2009, and Jan 13, 2010) and apply cluster analyses. Dendrograms are formed from the qfd and the following four parameters calculated from the luminance: Rf=R/(R+G+B), Gf=G/(R+G+B), Bf=B/(R+G+B), and total luminance=(R+G+B)/665. We classify the volcanic ash particles from the dendrograms into three groups based on the Euclidean distance. The groups are named Group A, B and C in order of increasing average total luminance. The classification shows that the numbers of particles belonging to Groups A, B and C are 77, 25 and 6 in the Feb 09, 2009 sample and 102, 19 and 6 in the Jan 13, 2010 sample, respectively. Examination under the stereoscopic microscope suggests that Groups A, B and C mainly correspond to juvenile, altered and free-crystal particles, respectively, so the classification obtained with the present method demonstrates a difference in the contribution of juvenile material between the two days. To evaluate the reliability of our classification, we classify pseudo-samples in which errors of 10% are added to the measured parameters. Applying our method to one thousand pseudo-samples, the numbers of particles classified into the three groups vary by less than 20% of the total of 235 particles. Our system can classify 120 particles within 6 minutes, so we can easily increase the number of ash particles, which will improve the reliability and resolution of the classification and allow us to capture speedily any temporal changes in the properties of ash from active volcanoes.
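
The clustering parameters and the box-counting ratio described above are simple enough to restate in code. The sketch below follows the definitions in the abstract (normalized mean R, G, B luminances per particle and a 1-pixel versus 128-pixel box count); the function names and the binary-mask input are illustrative assumptions, and the original analysis was done in R rather than Python.

```python
import numpy as np

def color_parameters(r_mean, g_mean, b_mean):
    """Normalized luminance ratios and total luminance for one ash particle."""
    total = r_mean + g_mean + b_mean
    return {"Rf": r_mean / total, "Gf": g_mean / total, "Bf": b_mean / total,
            "total_luminance": total / 665.0}

def quasi_fractal_dimension(mask, coarse=128):
    """Ratio of occupied 1x1 boxes to occupied coarse (128x128) boxes over a
    binary particle mask, as in the box counting described above."""
    mask = np.asarray(mask, dtype=bool)
    n_fine = int(np.count_nonzero(mask))
    h, w = mask.shape
    n_coarse = sum(mask[i:i + coarse, j:j + coarse].any()
                   for i in range(0, h, coarse) for j in range(0, w, coarse))
    return n_fine / n_coarse
```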

Miwa, T.; Shimano, T.; Nishimura, T.

2012-12-01

149

Video-Camera-Based Position-Measuring System  

NASA Technical Reports Server (NTRS)

A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to accuracy of the order of millimeters over distances of the orders of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position- measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.

Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

2005-01-01

150

Deep-Sea Video Cameras Without Pressure Housings  

NASA Technical Reports Server (NTRS)

Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (.0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated- circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated- circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount . not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated- circuit chips, LEDs can withstand very large hydrostatic pressures. If power-supply regulators or filter capacitors were needed, these could be attached in chip form directly onto the back of, and potted with, the imager chip. Because CMOS imagers dissipate little power, the potting would not result in overheating. To minimize the cost of the camera, a fixed lens could be fabricated as part of the plastic case. For improved optical performance at greater cost, an adjustable glass achromatic lens would be mounted in a reservoir that would be filled with transparent oil and subject to the full hydrostatic pressure, and the reservoir would be mounted on the case to position the lens in front of the image sensor. The lens would by adjusted for focus by use of a motor inside the reservoir (oil-filled motors already exist).

Cunningham, Thomas

2004-01-01

151

The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras  

ERIC Educational Resources Information Center

Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

Bird, Jo; Colliver, Yeshe; Edwards, Susan

2014-01-01

152

Evaluation of Motion Blur Considering Temporal Frequency Characteristics of Video Camera and LCD Systems  

NASA Astrophysics Data System (ADS)

In this study, we show that motion blur is caused by the exposure time of the video camera as well as by the characteristics of the LCD system. We also propose an evaluation method for motion picture quality based on the frequency responses of the video camera and of LCD systems of the hold and scanning-backlight types.

Chae, Seok-Min; Song, In-Ho; Lee, Sung-Hak; Sohng, Kyu-Ik

153

Real-Time Video Analysis on an Embedded Smart Camera for Traffic Surveillance  

Microsoft Academic Search

A smart camera combines video sensing, high-level video processing and communication within a single embedded device. Such cameras are key components in novel surveillance systems. This paper reports on a prototyping development of a smart camera for traffic surveillance. We present its scalable architecture comprised of a CMOS sensor, digital signal processors (DSP), and a network processor. We further discuss

Michael Bramberger; Josef Brunner; Bernhard Rinner; Helmut Schwabach

2004-01-01

154

New stereoscopic video camera and monitor system with central high resolution  

NASA Astrophysics Data System (ADS)

A new stereoscopic video system (the Q stereoscopic video system), which has high resolution in the central area, has been developed using four video cameras and four video displays. The Q stereoscopic camera system is constructed using two cameras with wide-angle lenses, which are combined as the stereoscopic camera system, and two cameras with narrow-angle lenses, which are combined (using half mirrors) with each of the wide-angle cameras to have the same optical center axis. The Q stereoscopic display system is composed of two large video displays that receive images from the wide-angle stereoscopic cameras, and two smaller displays projecting images from the narrow-angle cameras. With this system, human operators are able to see the stereoscopic images of the smaller displays inserted in the images of the larger displays. Completion times for the pick-up task of a remote controlled robot were shorter when using the Q stereoscopic video system rather than a conventional stereoscopic video system.

Matsunaga, Katsuya; Nose, Yasuhiro; Minamoto, Masahiko; Shidoji, Kazunori; Ebuchi, Kazuhisa; Itoh, Daisuke; Inoue, Tomonori; Hayami, Taketo; Matsuki, Yuji; Arikawa, Yuko; Matsubara, Kenjiro

1998-04-01

155

Simultaneous monitoring of a collapsing landslide with video cameras  

NASA Astrophysics Data System (ADS)

Effective countermeasures and risk management to reduce landslide hazards require a full understanding of the processes of collapsing landslides. While the processes are generally estimated from the features of debris deposits after collapse, simultaneous monitoring during collapse provides more insights into the processes. Such monitoring, however, is usually very difficult, because it is rarely possible to predict when a collapse will occur. This study introduces a rare case in which a collapsing landslide (150 m in width and 135 m in height) was filmed with three video cameras in Higashi-Yokoyama, Gifu Prefecture, Japan. The cameras were set up in the front and on the right and left sides of the slide in May 2006, one month after a series of small slope failures in the toe and the formation of cracks on the head indicated that a collapse was imminent. The filmed images showed that the landslide collapse started from rock falls and slope failures occurring mainly around the margin, that is, the head, sides and toe. These rock falls and slope failures, which were individually counted on the screen, increased with time. Analyzing the images, five of the failures were estimated to have each produced more than 1000 m3 of debris, and the landslide collapsed with several surface failures accompanied by a toppling movement. The manner of the collapse suggested that the slip surface initially remained on the upper slope, and then extended down the slope as the excessive internal stress shifted downwards. Image analysis, together with field measurements using a ground-based laser scanner after the collapse, indicated that the landslide produced a total of 50 000 m3 of debris. As described above, simultaneous monitoring provides valuable information about landslide processes. Further development of monitoring techniques will help clarify landslide processes qualitatively as well as quantitatively.

Fujisawa, K.; Ohara, J.

2008-01-01

156

Electro-optical testing of fully depleted CCD image sensors for the Large Synoptic Survey Telescope camera  

NASA Astrophysics Data System (ADS)

The LSST Camera science sensor array will incorporate 189 large format Charge Coupled Device (CCD) image sensors. Each CCD will include over 16 million pixels and will be divided into 16 equally sized segments and each segment will be read through a separate output amplifier. The science goals of the project require CCD sensors with state of the art performance in many aspects. The broad survey wavelength coverage requires fully depleted, 100 micrometer thick, high resistivity, bulk silicon as the imager substrate. Image quality requirements place strict limits on the image degradation that may be caused by sensor effects: optical, electronic, and mechanical. In this paper we discuss the design of the prototype sensors, the hardware and software that has been used to perform electro-optic testing of the sensors, and a selection of the results of the testing to date. The architectural features that lead to internal electrostatic fields, the various effects on charge collection and transport that are caused by them, including charge diffusion and redistribution, effects on delivered PSF, and potential impacts on delivered science data quality are addressed.

Doherty, Peter E.; Antilogus, Pierre; Astier, Pierre; Chiang, James; Gilmore, D. Kirk; Guyonnet, Augustin; Huang, Dajun; Kelly, Heather; Kotov, Ivan; Kubanek, Petr; Nomerotski, Andrei; O'Connor, Paul; Rasmussen, Andrew; Riot, Vincent J.; Stubbs, Christopher W.; Takacs, Peter; Tyson, J. Anthony; Vetter, Kurt

2014-07-01

157

GPC1 and GPC2: the Pan-STARRS 1.4 gigapixel mosaic focal plane CCD cameras with an on-sky on-CCD tip-tilt image compensation  

NASA Astrophysics Data System (ADS)

We will report on the on-sky, on-CCD, tip-tilt image compensation performance of GPC1, the 1.4 gigapixel mosaic focal plane CCD camera for wide field surveys with a 7 square degree field of view. The camera uses 60 Orthogonal Transfer Arrays (OTAs) with a novel 4-phase pixel architecture and the STARGRASP controller for closed-loop multi-guide-star centroiding and image correction. The Pan-STARRS project is also constructing GPC2, the second 1.4 gigapixel camera, using 64 OTAs. GPC2 will include design enhancements over GPC1, including a new generation of OTAs, a titanium mosaic focal plane with adjustable three-point kinematic mounts, cryo flex wiring, and more recent software distributed over 32 controllers. We will discuss the design, cost, schedule, tools developed, shortcomings and future plans for the two largest digital cameras in the world.

Onaka, P.; Rae, C.; Isani, S.; Tonry, J. L.; Lee, A.; Uyeshiro, R.; Robertson, L.; Ching, G.

2012-07-01

158

Acceptance/operational test procedure 241-AN-107 Video Camera System  

SciTech Connect

This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

Pedersen, L.T.

1994-11-18

159

Color Enhancement in Images with Single CCD camera in Night Vision Environment  

Microsoft Academic Search

In this paper, we describe an effective method to enhance color night images with a spatio-temporal multi-scale retinex, aimed at Intelligent Transportation System (ITS) applications such as single-CCD-based Electronic Toll Collection Systems (ETCS). The basic spatial retinex is known to provide color constancy while effectively removing local shades. However, it is relatively ineffective in night

Wonjun Hwang; Hanseok Ko

2000-01-01

160

Achievable resolution from images of biological specimens acquired from a 4k × 4k CCD camera in a 300kV electron cryomicroscope  

Microsoft Academic Search

Bacteriorhodopsin and ε15 bacteriophage were used as biological test specimens to evaluate the potential structural resolution with images captured from a 4k×4k charge-coupled device (CCD) camera in a 300-kV electron cryomicroscope. The phase residuals computed from the bacteriorhodopsin CCD images taken at 84,000× effective magnification averaged 15.7° out to 5.8-Å resolution relative to Henderson's published values. Using a single-particle

Dong-Hua Chen; Joanita Jakana; Xiangan Liu; Michael F. Schmid; Wah Chiu

2008-01-01

161

Variable high-resolution color CCD camera system with online capability for professional photo studio application  

NASA Astrophysics Data System (ADS)

Digital cameras are of increasing significance for professional applications in photo studios, where high-quality fashion, portrait, product, catalog and advertising photographs have to be taken. The eyelike is a digital camera system that has been developed for such applications. It is capable of working online at high frame rates with images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with the essential system components and finishes with a description of the software needed to bring the high quality of the camera to the user.
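
As a quick arithmetic check (an observation about the quoted figures, not a statement from the paper), the image sizes correspond to three bytes per pixel, i.e. one byte per colour channel:

2048 \times 2048 \times 3\,\mathrm{B} = 12\,\mathrm{MiB}, \qquad 4096 \times 4096 \times 3\,\mathrm{B} = 48\,\mathrm{MiB}, \qquad 6144 \times 6144 \times 3\,\mathrm{B} = 108\,\mathrm{MiB}

Storing the full 12-bit depth would increase these figures by half at 1.5 bytes per channel, or double them at 2 bytes per channel.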

Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

1998-04-01

162

Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam  

NASA Astrophysics Data System (ADS)

When time-dependent processes within metallic structures have to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10⁷ cm⁻² s⁻¹, which makes it possible to observe sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level from the intensifier. The results obtained should be seen as a starting point toward meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel plate intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

2005-04-01

163

Night Vision Camera  

NASA Technical Reports Server (NTRS)

PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

1996-01-01

164

CCD Astronomy Software User's Guide  

E-print Network

CCDSoft CCD Astronomy Software User's Guide, Version 5, Revision 1.11. Copyright © 1992-2006 Santa … (table of contents excerpt: Controlling a CCD Camera)

165

Using a Video Camera to Measure the Radius of the Earth  

ERIC Educational Resources Information Center

A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
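
As a rough illustration of the underlying geometry (a simplified textbook treatment, not taken from the paper: it assumes an equatorial observer at equinox and ignores the latitude and solar-declination corrections a careful analysis would apply), a shadow that climbs a height d in time t corresponds to the Earth rotating through an angle θ, from which the radius follows:

\theta = \frac{2\pi t}{T}, \qquad \cos\theta = \frac{R}{R+d} \;\Rightarrow\; R = \frac{d\,\cos\theta}{1-\cos\theta} \approx \frac{2d}{\theta^{2}} \quad (\theta \ll 1)

With T ≈ 86,400 s, a shadow climbing d = 10 m in roughly 24 s gives θ ≈ 1.7 × 10⁻³ rad and R ≈ 6.6 × 10⁶ m, of the right order compared with the accepted 6.37 × 10⁶ m.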

Carroll, Joshua; Hughes, Stephen

2013-01-01

166

Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics  

NASA Astrophysics Data System (ADS)

A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanics behavior of human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 part of femur were tested. The specimens without any structural abnormalities were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with black dot on white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar was applied by MTS in the gradient of 0N to 500N, which simulated the double feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Based on Image J software, digital image processing software, which can be freely downloaded from the National Institutes of Health, digital 8-bit images were processed. The procedure includes the recognition of digital marker, image invert, sub-pixel reconstruction, image segmentation, center of mass algorithm based on weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebrae) in front view and micro-angular rotation of sacroiliac joint in lateral view were calculated according to the marker movement. The results of digital image measurement showed as following: marker image correlation before and after deformation was excellent. The average correlation coefficient was about 0.983. According to the 768 × 576 pixels image (pixel size 0.68mm × 0.68mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the comparatively error could achieve 1.11\\perthou. The average vertical displacement of S1 of the pelvis was 0.8356+/-0.2830mm under vertical load of 500 Newtons and the average micro-angular rotation of sacroiliac joint in lateral view was 0.584+/-0.221°. The load-displacement curves obtained from our optical measure system matched the clinical results. Digital image measurement of specimen deformation based on CCD cameras and Image J software has good perspective for application in biomechanical research, which has the advantage of simple optical setup, no-contact, high precision, and no special requirement of test environment.
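
The centre-of-mass step described above lends itself to a compact illustration. The sketch below is a generic reconstruction in Python, not the authors' code; the function name and threshold parameter are placeholders, and the 0.68 mm/pixel scale in the usage note is the figure quoted in the abstract.

import numpy as np

def marker_centroid(gray, threshold):
    """Sub-pixel centroid of a dark marker on a light background,
    following the invert / segment / weighted-centroid sequence:
    'gray' is a 2-D uint8 image, 'threshold' separates the marker
    from the background after inversion."""
    inv = 255.0 - gray.astype(np.float64)      # image invert: marker becomes bright
    mask = inv > threshold                     # simple segmentation of the marker
    weights = np.where(mask, inv, 0.0)         # weight pixels by gray value inside the marker
    total = weights.sum()
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    cy = (ys * weights).sum() / total          # weighted average of row indices
    cx = (xs * weights).sum() / total          # weighted average of column indices
    return cx, cy                              # sub-pixel marker position in pixels

# Displacement between two load steps, converted with the quoted pixel size:
# dx_mm = (cx_after - cx_before) * 0.68
# dy_mm = (cy_after - cy_before) * 0.68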

Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

2008-03-01

167

Assessing the capabilities of a 4kx4k CCD camera for electron cryo-microscopy at 300kV.  

PubMed

CCD cameras have numerous advantages over photographic film for detecting electrons; however, the point spread function of these cameras has not been sufficient for single particle data collection to subnanometer resolution with 300 kV microscopes. We have adopted the spectral signal-to-noise ratio (SNR) as a parameter for assessing detector quality for single particle imaging. The robustness of this parameter is confirmed under a variety of experimental conditions. Using this parameter, we demonstrate that the SNR of images of either amorphous carbon film or ice-embedded virus particles collected on a new commercially available 4kx4k CCD camera is slightly better than photographic film at low spatial frequency (<1/5 Nyquist frequency), and as good as photographic film out to half of the Nyquist frequency. In addition, it is slightly easier to visualize ice-embedded particles on this CCD camera than on photographic film. Based on this analysis it is realistic to collect images containing subnanometer resolution data (6-9 Å) using this CCD camera at an effective magnification of approximately 112,000× on a 300 kV electron microscope. PMID:17067819

Booth, Christopher R; Jakana, Joanita; Chiu, Wah

2006-12-01

168

Range-Gated LADAR Coherent Imaging Using Parametric Up-Conversion of IR and NIR Light for Imaging with a Visible-Range Fast-Shuttered Intensified Digital CCD Camera  

SciTech Connect

Research is presented on infrared (IR) and near-infrared (NIR) sensitive sensor technologies for use in a high-speed shuttered/intensified digital video camera system for range-gated imaging at "eye-safe" wavelengths in the region of 1.5 microns. The study is based upon nonlinear crystals used for second harmonic generation (SHG) in optical parametric oscillators (OPOs) for conversion of NIR and IR laser light to visible-range light for detection with generic S-20 photocathodes. The intensifiers are "stripline"-geometry, 18-mm-diameter microchannel plate intensifiers (MCPIIs), designed by Los Alamos National Laboratory and manufactured by Philips Photonics. The MCPIIs are designed for fast optical shuttering with exposures in the 100-200 ps range, and are coupled to a fast readout CCD camera. Conversion efficiency and resolution for the wavelength conversion process are reported. Experimental set-ups for the wavelength shifting and the optical configurations for producing and transporting laser reflectance images are discussed.

Yates, George J.; McDonald, Thomas E., Jr.; Bliss, David E.; Cameron, Stewart M.; Zutavern, Fred J.

2000-12-20

169

Video-based Animal Behavior Analysis From Multiple Cameras Xinwei Xue and Thomas C. Henderson  

E-print Network

… the genetics of certain diseases. In one instance, this requires the determination of the time the lab mouse … results on artificial mouse video using single, stereo and three cameras. Overall the results … the mouse for a period of time, and then an observer watches the video and records the behaviors

Henderson, Thomas C.

170

Development of Slow Scan Digital CCD Camera for Low light level Image  

Microsoft Academic Search

This paper studies the development of a low-cost, high-resolving-power, scientific-grade camera for low-light-level imaging whose images can be received by a computer. The main performance parameters and the readout driving signal are introduced, and the overall image acquisition scheme is designed. Using the computer Expand Parallel Port and a pipelined readout method,

Yaoyu Cheng; Yan Hu; Yonghong Li

171

Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems  

SciTech Connect

This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

Kohlman, E.H.

1994-11-17

172

Automated image acquisition and processing using a new generation of 4K x 4K CCD cameras for cryo electron microscopic studies of macromolecular assemblies.  

PubMed

We have previously reported the development of AutoEM, a software package for semi-automated acquisition of data from a transmission electron microscope. In continuing efforts to improve the speed of structure determination of macromolecular assemblies by electron microscopy, we report here on the performance of a new generation of 4K CCD cameras for use in cryo electron microscopic applications. We demonstrate that at 120 kV, and at a nominal magnification of 67,000×, power spectra and signal-to-noise ratios for the new 4K CCD camera are comparable to values obtained for film images scanned using a Zeiss scanner to resolutions as high as approximately 1/6.5 Å⁻¹. The specimen area imaged for each exposure on the 4K CCD is about one-third of the area that can be recorded with a similar exposure on film. The CCD camera also serves the purpose of recording images at low magnification from the center of the hole to measure the thickness of vitrified ice in the hole. The performance of the camera is satisfactory under the low-dose conditions used in cryo electron microscopy, as demonstrated here by the determination of a three-dimensional map at 15 Å for the catalytic core of the 1.8 MDa Bacillus stearothermophilus icosahedral pyruvate dehydrogenase complex, and its comparison with the previously reported atomic model for this complex obtained by X-ray crystallography. PMID:12972350

Zhang, Peijun; Borgnia, Mario J; Mooney, Paul; Shi, Dan; Pan, Ming; O'Herron, Philip; Mao, Albert; Brogan, David; Milne, Jacqueline L S; Subramaniam, Sriram

2003-08-01

173

Comparison of EM-CCD and scientific CMOS based camera systems for high resolution X-ray imaging and tomography applications  

NASA Astrophysics Data System (ADS)

We have developed an Electron Multiplying (EM) CCD-based, high frame rate camera system using an optical lens system for X-ray imaging and tomography. Current state-of-the-art systems generally use scientific CMOS sensors that have a readout noise of a few electrons and operate at high frame rates. Through the use of electron multiplication, the EM-CCD camera is able to operate with a sub-electron equivalent readout noise and a frame rate of up to 50 Hz (full-frame). The EM-CCD-based camera system has a major advantage over existing technology in that it has a high signal-to-noise ratio even at very low signal levels. This allows radiation-sensitive samples to be analysed with low-flux X-ray beams, which greatly reduces beam damage. This paper shows that, under the conditions of this experiment, the EM-CCD camera system has a spatial resolution performance comparable to the scientific CMOS-based imaging system and a superior signal-to-noise ratio.

Tutt, J. H.; Hall, D. J.; Soman, M. R.; Holland, A. D.; Warren, A. J.; Connolley, T.; Evagora, A. M.

2014-06-01

174

STEREO MOSAICS FROM A MOVING VIDEO CAMERA FOR ENVIRONMENTAL MONITORING  

Microsoft Academic Search

Environmental monitoring using automated analysis of high-resolution aerial video is an application of growing importance with its own set of technical challenges. A mosaic is a commonly used tool for representing the enormous amount of data generated from aerial video sequences in an easily viewable form. In contrast to the usual applications of mosaics, the environmental monitoring domain

Zhigang Zhu; Allen R. Hanson; Howard Schultz; Frank Stolle; Edward M. Riseman

1999-01-01

175

Rayleigh Laser Guide Star Systems UnISIS Bow Tie Shutter and CCD39 Wavefront Camera  

E-print Network

Laser guide star systems based on Rayleigh scattering require some means to deal with the flash of low altitude laser light that follows immediately after each laser pulse. These systems also need a fast shutter to isolate the high altitude portion of the focused laser beam to make it appear star-like to the wavefront sensor. We describe how these tasks are accomplished with UnISIS, the Rayleigh laser guided adaptive optics system at the Mt. Wilson Observatory 2.5-m telescope. We use several methods: a 10,000 RPM rotating disk, dichroics, a fast sweep and clear mode of the CCD readout electronics on a 10 μs timescale, and a Pockels cell shutter system. The Pockels cell shutter would be conventional in design if the laser light were naturally polarized, but the UnISIS 351 nm laser is unpolarized. So we have designed and put into operation a dual Pockels cell shutter in a unique bow tie arrangement.

Thompson, Laird A.; Teare, Scott W.; Crawford, Samuel L.; Leach, Robert W.

2002-01-01

176

Rayleigh Laser Guide Star Systems: UnISIS Bow Tie Shutter and CCD39 Wavefront Camera  

E-print Network

Laser guide star systems based on Rayleigh scattering require some means to deal with the flash of low altitude laser light that follows immediately after each laser pulse. These systems also need a fast shutter to isolate the high altitude portion of the focused laser beam to make it appear star-like to the wavefront sensor. We describe how these tasks are accomplished with UnISIS, the Rayleigh laser guided adaptive optics system at the Mt. Wilson Observatory 2.5-m telescope. We use several methods: a 10,000 RPM rotating disk, dichroics, a fast sweep and clear mode of the CCD readout electronics on a 10 μs timescale, and a Pockels cell shutter system. The Pockels cell shutter would be conventional in design if the laser light were naturally polarized, but the UnISIS 351 nm laser is unpolarized. So we have designed and put into operation a dual Pockels cell shutter in a unique bow tie arrangement.

Laird A. Thompson; Scott W. Teare; Samuel L. Crawford; Robert W. Leach

2002-07-10

177

Experimental comparison of the high-speed imaging performance of an EM-CCD and sCMOS camera in a dynamic live-cell imaging test case.  

PubMed

The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178

Beier, Hope T; Ibey, Bennett L

2014-01-01

178

Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case  

PubMed Central

The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178

Beier, Hope T.; Ibey, Bennett L.

2014-01-01

179

Utilization of an Electron Multiplying CCD camera for applications in quantum information processing  

NASA Astrophysics Data System (ADS)

Electron Multiplying Charge-Coupled Device (EMCCD) cameras utilize an on-chip amplification process which boosts low-light signals above the readout noise floor. Although traditionally used for biological imaging, they have recently attracted interest for single-photon counting and entangled state characterization in quantum information processing applications. In addition, they exhibit some photon number-resolving capacity, which is attractive from the point-of-view of several applications in optical continous-variable computing, such as building a cubic phase gate. We characterize the Andor Luca-R EMCCD camera as an affordable tool for applications in optical quantum information. We present measurements of single-photon detection efficiency, dark count probability as well as photon-number resolving capacity and place quantitative bounds on the noise performance and detection efficiency of the EMCCD detector array. We find that the readout noise floor is a Gaussian distribution centered at 500 counts/pixel/frame at high EM gain setting. We also characterize the trade-off between quantum efficiency and detector dark-count probability.
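
One of the quantities mentioned above, the dark-count probability, can be estimated with a very simple thresholding analysis. The sketch below is a minimal illustration in Python, not the procedure used in the paper; the function name, the 5-sigma threshold and the robust noise estimate are assumptions.

import numpy as np

def dark_count_probability(dark_frames, n_sigma=5.0):
    """Fraction of pixels per frame exceeding a threshold set a few sigma
    above the Gaussian readout-noise floor (the abstract quotes a floor
    centred near 500 counts/pixel/frame at high EM gain).
    'dark_frames' is a (frames, rows, cols) array of raw counts."""
    mu = np.median(dark_frames)                           # robust centre of the noise peak
    sigma = 1.4826 * np.median(np.abs(dark_frames - mu))  # robust width (MAD converted to sigma)
    threshold = mu + n_sigma * sigma
    events = dark_frames > threshold                      # pixels counted as dark/photon events
    return events.mean()                                  # probability per pixel per frame

Lowering the threshold raises the single-photon detection efficiency but also the dark-count rate, which is the same trade-off between efficiency and dark counts that the abstract refers to.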

Patel, Monika; Chen, Jian; Habif, Jonathan

2013-03-01

180

On Relativistic Disk Spectroscopy in Compact Objects with X-ray CCD Cameras  

E-print Network

X-ray charge-coupled devices (CCDs) are the workhorse detectors of modern X-ray astronomy. Typically covering the 0.3-10.0 keV energy range, CCDs are able to detect photoelectric absorption edges and K shell lines from most abundant metals. New CCDs also offer resolutions of 30-50 (E/dE), which is sufficient to detect lines in hot plasmas and to resolve many lines shaped by dynamical processes in accretion flows. The spectral capabilities of X-ray CCDs have been particularly important in detecting relativistic emission lines from the inner disks around accreting neutron stars and black holes. One drawback of X-ray CCDs is that spectra can be distorted by photon "pile-up", wherein two or more photons may be registered as a single event during one frame time. We have conducted a large number of simulations using a statistical model of photon pile-up to assess its impacts on relativistic disk line and continuum spectra from stellar-mass black holes and neutron stars. The simulations cover the range of current X-ray CCD spectrometers and operational modes typically used to observe neutron stars and black holes in X-ray binaries. Our results suggest that severe photon pile-up acts to falsely narrow emission lines, leading to falsely large disk radii and falsely low spin values. In contrast, our simulations suggest that disk continua affected by severe pile-up are measured to have falsely low flux values, leading to falsely small radii and falsely high spin values. The results of these simulations and existing data appear to suggest that relativistic disk spectroscopy is generally robust against pile-up when this effect is modest.
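
The pile-up mechanism itself is easy to quantify with a standard back-of-the-envelope estimate (not taken from the paper; λ and the notion of a single event-detection cell are simplifications). If photons arrive independently, the number landing in one cell during one frame is approximately Poisson with mean λ, so the probability that the cell registers a piled-up event (two or more photons) is

P_{\mathrm{pileup}} = 1 - e^{-\lambda} - \lambda e^{-\lambda},

which already reaches about 9% per cell per frame at λ = 0.5, illustrating why bright sources observed in full-frame CCD modes are so easily affected.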

J. M. Miller; A. D'Ai; M. W. Bautz; S. Bhattacharyya; D. N. Burrows; E. M. Cackett; A. C. Fabian; M. J. Freyberg; F. Haberl; J. Kennea; M. A Nowak; R. C. Reis; T. E. Strohmayer; M. Tsujimoto

2010-09-22

181

Astrometric calibration for INT Wide Field Camera images

E-print Network

The Wide Field Camera instrument on the Isaac Newton Telescope contains four CCD chips of 2048 × 4096 pixels positioned roughly … it is necessary to correct for the exact orientation and position of each CCD in relation to the others, as well as to transform the pixel coordinates (x_i, y_i) of CCD #i into a new pixel-like coordinate system (x′, y′) of uniform

Taylor, Mark

182

Single video camera method for using scene metrics to measure constrained 3D displacements  

NASA Astrophysics Data System (ADS)

There are numerous ways to use video cameras to measure 3D dynamic spatial displacements. When the scene geometry is unknown and the motion is unconstrained, two calibrated cameras are required. The data from both scenes are combined to perform the measurements using well known stereoscopic techniques. There are occasions where the measurement system can be simplified considerably while still providing a calibrated spatial measurement of a complex dynamic scene. For instance, if the sizes of objects in the scene are known a priori, these data may be used to provide scene specific spatial metrics to compute calibration coefficients. With this information, it is not necessary to calibrate the camera before use, nor is it necessary to precisely know the geometry between the camera and the scene. Field-ofview coverage and sufficient spatial and temporal resolution are the main camera requirements. Further simplification may be made if the 3D displacements of interest are small or constrained enough to allow for an accurate 2D projection of the spatial variables of interest. With proper camera orientation and scene marking, the apparent pixel movements can be expressed as a linear combination of the underlying spatial variables of interest. In many cases, a single camera may be used to perform complex 3D dynamic scene measurements. This paper will explain and illustrate a technique for using a single uncalibrated video camera to measure the 3D displacement of the end of a constrained rigid body subject to a perturbation.
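
The simplest scalar case of the scene-metric idea can be sketched in a few lines. This is an illustration only, not the paper's full formulation in which pixel movements are written as a linear combination of several spatial variables; the function names and the example numbers are hypothetical.

import numpy as np

def scene_scale_mm_per_px(p1, p2, known_length_mm):
    """Calibration coefficient from a scene metric: an object of known
    length imaged at pixel endpoints p1 and p2 (each an (x, y) pair)."""
    length_px = np.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return known_length_mm / length_px

def displacement_mm(track_px, scale_mm_per_px):
    """Convert a tracked pixel trajectory (N x 2 array) of a constrained
    point into in-plane displacements in millimetres relative to frame 0."""
    track = np.asarray(track_px, dtype=float)
    return (track - track[0]) * scale_mm_per_px

# Example: a 100 mm reference bar spanning 400 px gives 0.25 mm/px, so a
# feature that moves 12 px has moved about 3 mm in the projection plane.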

Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

2014-09-01

183

Design of Pan-Camera Decoder in Digital Video Monitoring System Based on Embedded Microcontroller  

Microsoft Academic Search

This paper presents the design method of a pan-camera decoder for use in a digital video monitoring system, covering both software and hardware. The decoder is based on the embedded microcontroller C8051F330 and is mainly composed of a regulated power supply circuit, a serial communication interface circuit, a watchdog circuit, an input/output alarm circuit and a pan-camera control circuit. It is connected with the host computer by an RS-485 serial

Tan Kejun; Luan Xiuzhen; Meng Xianyao

2007-01-01

184

Video-Based Camera Tracking Using Rotation-Discriminative Template Matching  

Microsoft Academic Search

This paper presents a video-based camera tracker that combines marker-based and feature point-based cues in a particle filter framework. The framework relies on their complementary performance. Marker-based trackers can robustly recover camera position and orientation when a reference (marker) is available, but fail once the reference becomes unavailable. On the other hand, feature point tracking can still provide estimates

David Marimon; Touradj Ebrahimi

185

BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera  

NASA Technical Reports Server (NTRS)

The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

2000-01-01

186

Camera/Video Phones in Schools: Law and Practice  

ERIC Educational Resources Information Center

The emergence of mobile phones with built-in digital cameras is creating legal and ethical concerns for school systems throughout the world. Users of such phones can instantly email, print or post pictures to other MMS phones or websites. Local authorities and schools in Britain, Europe, USA, Canada, Australia and elsewhere have introduced…

Parry, Gareth

2005-01-01

187

Passive millimeter-wave video camera for aviation applications  

NASA Astrophysics Data System (ADS)

Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take- off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.

Fornaca, Steven W.; Shoucri, Merit; Yujiri, Larry

1998-07-01

188

J.C. Mullikin, L.J. van Vliet, H. Netten, F.R. Boddeke, G. van der Feltz and I.T. Young, Methods for CCD camera characterization, in: H.C. Titus, A. Waks (eds), SPIE vol. 2173, "Image Acquisition and Scientific Imaging

E-print Network

J.C. Mullikin, L.J. van Vliet, H. Netten, F.R. Boddeke, G. van der Feltz and I.T. Young, Methods for CCD camera characterization, in: H.C. Titus, A. Waks (eds), SPIE vol. 2173, "Image Acquisition and Scientific Imaging Systems", 1994, 73-84. … methods for characterizing CCD cameras. Interesting properties are linearity of photometric response, signal-to-noise ratio

van Vliet, Lucas J.

189

J.C. Mullikin, L.J. van Vliet, H. Netten, F.R. Boddeke, G. van der Feltz and I.T. Young, Methods for CCD camera characterization, in: H.C. Titus, A. Waks (eds), SPIE vol. 2173, "Image Acquisition and Scientific Imaging

E-print Network

J.C. Mullikin, L.J. van Vliet, H. Netten, F.R. Boddeke, G. van der Feltz and I.T. Young, Methods for CCD camera characterization, in: H.C. Titus, A. Waks (eds), SPIE vol. 2173, "Image Acquisition and Scientific Imaging Systems", 1994, 73-84. … methods for characterizing CCD cameras. Interesting properties are linearity of photometric response, signal-to-noise ratio

van Vliet, Lucas J.

190

Evaluation of the disintegration time of rapidly disintegrating tablets via a novel method utilizing a CCD camera.  

PubMed

Many kinds of rapidly disintegrating or oral disintegrating tablets (RDT) have been developed to improve the ease of tablet administration, especially for elderly and pediatric patients. In these cases, knowledge regarding disintegration behavior appears important with respect to the development of such a novel tablet. Ordinary disintegration testing, such as the Japanese Pharmacopoeia (JP) method, faces limitations with respect to the evaluation of rapid disintegration due to strong agitation. Therefore, we have developed a novel apparatus and method to determine the dissolution of the RDT. The novel device consists of a disintegrating bath and CCD camera interfaced with a personal computer equipped with motion capture and image analysis software. A newly developed RDT containing various types of binder was evaluated with this protocol. In this method, disintegration occurs in a mildly agitated medium, which allows differentiation of minor distinctions among RDTs of different formulations. Simultaneously, we were also able to detect qualitative information, i.e., morphological changes in the tablet during disintegration. This method is useful for the evaluation of the disintegration of RDT during pharmaceutical development, and also for quality control during production. PMID:12237533

Morita, Yutaka; Tsushima, Yuki; Yasui, Masanobu; Termoz, Ryoji; Ajioka, Junko; Takayama, Kozo

2002-09-01

191

Transient noise characterization and filtration in CCD cameras exposed to stray radiation from a medical linear accelerator.  

PubMed

Charge coupled devices (CCDs) are being increasingly used in radiation therapy for dosimetric purposes. However, CCDs are sensitive to stray radiation. This effect induces transient noise. Radiation-induced noise strongly alters the image and therefore limits its quantitative analysis. The purpose of this work is to characterize the radiation-induced noise and to develop filtration algorithms to restore image quality. Two models of CCD were used for measurements close to a medical linac. The structure of the transient noise was first characterized. Then, four methods of noise filtration were compared: median filtering of a time series of identical images, uniform median filtering of single images, an adaptive filter with switching mechanism, and a modified version of the adaptive switch filter. The intensity distribution of noisy pixels was similar in both cameras. However, the spatial distribution of the noise was different: The average noise cluster size was 1.2 +/- 0.6 and 3.2 +/- 2.7 pixels for the U2000 and the Luca, respectively. The median of a time series of images resulted in the best filtration and minimal image distortion. For applications where time series is impractical, the adaptive switch filter must be used to reduce image distortion. Our modified version of the switch filter can be used in order to handle nonisolated groups of noisy pixels. PMID:18975680
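
The two filtration ideas that performed best can be sketched compactly. The code below is a generic reconstruction in Python, not the authors' implementation; the 3x3 window and the deviation threshold are placeholders.

import numpy as np
from scipy.ndimage import median_filter

def temporal_median(frames):
    """Pixel-wise median of a time series of nominally identical frames
    (shape: frames x rows x cols); the best-performing filter in the study."""
    return np.median(frames, axis=0)

def switching_median(image, threshold):
    """A simple adaptive 'switch' filter in the spirit described above:
    replace a pixel with its 3x3 neighbourhood median only when it deviates
    from that median by more than 'threshold', leaving clean pixels intact
    and thereby limiting image distortion."""
    med = median_filter(image, size=3)
    noisy = np.abs(image.astype(float) - med.astype(float)) > threshold
    return np.where(noisy, med, image)

A larger window (for example 5x5) would be one simple way to cope with non-isolated clusters of noisy pixels, at the cost of additional smoothing.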

Archambault, Louis; Briere, Tina Marie; Beddar, Sam

2008-10-01

192

Transient noise characterization and filtration in CCD cameras exposed to stray radiation from a medical linear accelerator  

PubMed Central

Charge coupled devices (CCDs) are being increasingly used in radiation therapy for dosimetric purposes. However, CCDs are sensitive to stray radiation. This effect induces transient noise. Radiation-induced noise strongly alters the image and therefore limits its quantitative analysis. The purpose of this work is to characterize the radiation-induced noise and to develop filtration algorithms to restore image quality. Two models of CCD were used for measurements close to a medical linac. The structure of the transient noise was first characterized. Then, four methods of noise filtration were compared: median filtering of a time series of identical images, uniform median filtering of single images, an adaptive filter with switching mechanism, and a modified version of the adaptive switch filter. The intensity distribution of noisy pixels was similar in both cameras. However, the spatial distribution of the noise was different: The average noise cluster size was 1.2±0.6 and 3.2±2.7 pixels for the U2000 and the Luca, respectively. The median of a time series of images resulted in the best filtration and minimal image distortion. For applications where time series is impractical, the adaptive switch filter must be used to reduce image distortion. Our modified version of the switch filter can be used in order to handle nonisolated groups of noisy pixels. PMID:18975680

Archambault, Louis; Briere, Tina Marie; Beddar, Sam

2008-01-01

193

Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis.  

PubMed

The spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objective of this work is to determine a neutron imaging system's response in terms of spatial resolution. The proposed procedure is based on establishing the Modulation Transfer Function (MTF). The imaging system being studied is based on a high-sensitivity CCD neutron camera (2×10⁻⁵ lx at f/1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution to the MTF determination is an accurate edge identification method and a procedure for resolving the line spread function undersampling problem. These methods and procedures are integrated into a MATLAB code. The methods, procedures and approaches proposed in this work are applicable to any other neutron imaging system and allow one to judge the ability of a neutron imaging system to resolve the spatial properties (internal details) of any object under examination. PMID:22014891
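
For orientation, the core of an edge-based MTF measurement can be written in a few lines. The following Python sketch is a generic reconstruction, not the authors' MATLAB code; it assumes a vertical edge roughly centred in the crop and omits the sub-pixel oversampling step introduced in the paper to deal with LSF undersampling.

import numpy as np

def mtf_from_edge(edge_image):
    """Estimate the MTF from an image of a sharp vertical edge:
    average rows to get the edge spread function (ESF), differentiate to
    get the line spread function (LSF), and take the normalised Fourier
    magnitude as the MTF. Returns (frequencies in cycles/pixel, MTF)."""
    esf = np.asarray(edge_image, dtype=float).mean(axis=0)  # collapse rows across the edge
    lsf = np.gradient(esf)                                  # LSF is the derivative of the ESF
    lsf = lsf * np.hanning(lsf.size)                        # mild window to limit truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                                      # normalise to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)
    return freqs, mtf

# The spatial resolution limit is then read off as the frequency at which
# the MTF falls below a chosen contrast level (for example 10%).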

Kharfi, F; Denden, O; Bourenane, A; Bitam, T; Ali, A

2012-01-01

194

Video surveillance using a time-of-flight camera  

E-print Network

… systems for observing humans and understanding their appearance and activities. The aim of this work is to implement … to measure both the grayscale image of the scene and the depth for each pixel. With these two types of information … in the video. Besides, the interest will be pointed not only to the results that can be obtained, but also to the analysis

195

Temperature monitoring of Nd:YAG laser cladding (CW and PP) by advanced pyrometry and CCD-camera-based diagnostic tool  

NASA Astrophysics Data System (ADS)

A set of original pyrometers and a special diagnostic CCD camera were applied for monitoring of Nd:YAG laser cladding (pulsed-periodic and continuous wave) with coaxial powder injection and on-line measurement of the cladded layer temperature. The experiments were carried out in the course of developing wear-resistant coatings using various powder blends (WC-Co, CuSn, Mo, Stellite grade 12, etc.) while varying different process parameters: laser power, cladding velocity, powder feeding rate, etc. The surface temperature distribution along the cladding seam and the overall temperature mapping were registered. The CCD-camera-based diagnostic tool was applied for: (1) monitoring of the flux of hot particles and its instability; (2) measurement of particle-in-flight size and velocity; (3) monitoring of particle collision with the clad in the interaction zone.
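
One common way to obtain in-flight particle size and velocity from a single long-exposure frame (a generic streak-based estimate, not necessarily the procedure used with this diagnostic tool; the symbols are illustrative) is

v \approx \frac{\ell_{\mathrm{streak}} \cdot s}{t_{\mathrm{exp}}}, \qquad d_{\mathrm{particle}} \approx w_{\mathrm{streak}} \cdot s,

where ℓ_streak and w_streak are the streak length and width in pixels, s is the image scale in mm per pixel and t_exp is the exposure time.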

Doubenskaia, M.; Bertrand, Ph.; Smurov, Igor Y.

2004-04-01

196

A research on controlling three-dimensional effect of the video frames obtained from stereo camera  

Microsoft Academic Search

In recent years, 3D TVs and 3D video content such as movies, commercial films, and dramas have become widely available. Due to these developments, many people will want to make 3D content with 3D cameras and share it by uploading it to UCC websites. However, some people can experience visual discomfort and fatigue

Jeehong Lee; Kyu-yeol Chae; Simon Ji

2011-01-01

197

Digital Video Cameras for Brainstorming and Outlining: The Process and Potential  

ERIC Educational Resources Information Center

This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

Unger, John A.; Scullion, Vicki A.

2013-01-01

198

Evaluation of Lightning Location accuracy of JLDN with a lightning video camera system  

Microsoft Academic Search

This paper discusses the evaluation of the lightning location accuracy of the Japan Lightning Detection Network (JLDN). The JLDN has observed lightning activities since 1998. The nominal location accuracy of the JLDN is said to be less than 500 meters in most areas of Japan, but its actual location accuracy has not been evaluated. Sankosha Corporation installed a lightning video camera system

Michihiro Matsui; Nobuyoshi Takano

2010-01-01

199

Novel insights into green sea turtle behaviour using animal-borne video cameras  

E-print Network

Animal-borne video cameras were used to collect behavioural data on green (Chelonia mydas) and loggerhead (Caretta caretta) turtles in Western … In sea grass habitats, only two green turtles consumed sea grass, with each turtle grazing for less than two …

Dill, Lawrence M.

200

Onboard video cameras and instruments to measure the flight behavior of birds  

Microsoft Academic Search

Summary We have recently developed several novel techniques to measure flight kinematic parameters on free-flying birds of prey using onboard wireless video cameras and inertial measurement systems (1). Work to date has involved captive trained raptors including a Steppe Eagle (Aquila nipalensis), Peregrine falcon (Falco peregrinus) and Gyrfalcon (Falco rusticolus). We aim to describe mathematically the dynamics of the relationship

J. A. Gillies; M. Bacic; A. L. R. Thomas; G. K. Taylor

2008-01-01

201

Learning to Segment a Video to Clips Based on Scene and Camera Motion  

E-print Network

Movie list (excerpt): Peeping Tom; Sound of Music; Spartacus; Swiss Family Robinson; Time Machine; Scary Movie; The Perfect Storm; What Women Want; X-Men.

Chen, Tsuhan

202

Object Tracking and Activity Recognition in Video Acquired Using Mobile Cameras  

E-print Network

… representation. The first approach tracks the centroids of the objects in Forward Looking Infrared (FLIR) imagery. … In recent years, object tracking and activity recognition have received considerable attention in the research community

Central Florida, University of

203

Object Tracking and QOS Control Using Infrared Sensor and Video Cameras  

Microsoft Academic Search

Object tracking and the quality of service (QOS) control is one of key issues in sensor networks. In order to track objects and provide information about the location, trajectory and identity for each object, we propose a new object tracking scheme using both infrared sensors and video cameras, where infrared sensors are used to detect and locate moving objects, while

Can Zhang; Jiankang Wu; Guofang Tu

2006-01-01

204

Composite video and graphics display for multiple camera viewing system in robotics and teleoperation  

NASA Technical Reports Server (NTRS)

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

Diner, Daniel B. (inventor); Venema, Steven C. (inventor)

1991-01-01

205

Composite video and graphics display for camera viewing systems in robotics and teleoperation  

NASA Technical Reports Server (NTRS)

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

Diner, Daniel B. (inventor); Venema, Steven C. (inventor)

1993-01-01

206

A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis  

PubMed Central

This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single CCD web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

Porteiro, Jacobo; Riveiro, Belen; Granada, Enrique; Armesto, Julia; Eguia, Pablo; Collazo, Joaquin

2011-01-01

207

High-speed CCD image processing at 75 MSPS and 10-bit range  

Microsoft Academic Search

High frame rate CCD video cameras require high clock frequencies for running photocharge transport registers, especially when the sensor has only a single video port. It was found that some sensors of the interline transfer architecture allow for horizontal charge transport clock rates in excess of 75 MHz while still producing images of acceptable quality. We describe a relatively inexpensive

Bojan T. Turko; George J. Yates; Kevin L. Albright; Nicholas S. King

1993-01-01

208

Spectral-based colorimetric calibration of a 3CCD color camera for fast and accurate characterization and calibration of LCD displays  

NASA Astrophysics Data System (ADS)

LCD displays exhibit a significant amount of variability in their tone responses, color responses and backlight-modulation responses. LCD display characterization and calibration using a spectrometer or a color meter, however, leads to two basic deficiencies: (a) it can only generate calibration data based on a single spot on the display (usually at panel center); and (b) it generally takes a significant amount of time to do the required measurements. As a result, a fast and efficient system for full LCD display characterization and calibration is required. Herein, a system based on a 3CCD colorimetrically calibrated camera is presented which can be used for full characterization and calibration of LCD displays. The camera can provide full tristimulus measurements in real time. To achieve a high degree of accuracy, colorimetric calibration of the camera is carried out based on a spectral method.
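
For illustration, the mapping from camera RGB to CIE XYZ tristimulus values can be sketched with a simple least-squares matrix fit. This is a simpler matrix-based calibration than the spectral method the paper describes, shown only to make the idea concrete; the function names and data layout are assumptions.

import numpy as np

def fit_rgb_to_xyz(rgb_samples, xyz_samples):
    """Fit a 3x3 matrix M such that xyz ~= M @ rgb, from training patches
    measured both with the camera (linear RGB) and with a reference
    instrument (CIE XYZ). Inputs are (N, 3) arrays."""
    X, *_ = np.linalg.lstsq(rgb_samples, xyz_samples, rcond=None)
    return X.T                                 # column-vector convention: xyz = M @ rgb

def camera_to_xyz(rgb, M):
    """Convert one linear camera RGB triple to estimated CIE XYZ."""
    return M @ np.asarray(rgb, dtype=float)

With such a matrix in hand, every pixel of the camera image yields a tristimulus estimate, which is what allows full-screen, real-time display characterization rather than single-spot spectrometer readings.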

Safaee-Rad, Reza; Aleksic, Milivoje

2011-03-01

209

Video and acoustic camera techniques for studying fish under ice: a review and comparison  

SciTech Connect

Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed and successful applications of previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.
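
The paired-laser sizing mentioned above reduces to a simple proportion (a simplified illustration, not the authors' exact procedure; it assumes the fish is roughly broadside in the plane of the two laser dots):

L_{\mathrm{fish}} \approx \ell_{\mathrm{fish}} \times \frac{D_{\mathrm{lasers}}}{d_{\mathrm{dots}}},

where ℓ_fish is the apparent fish length in pixels, D_lasers is the known physical separation of the parallel lasers and d_dots is the pixel separation of the two laser dots projected onto the fish.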

Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

2006-09-05

210

Highly flexible and Internet-programmable CCD camera with a frequency-selectable read-out for imaging and spectroscopy applications  

NASA Astrophysics Data System (ADS)

A new-concept CCD camera is currently being realized at the XUV Lab of the Department of Astronomy and Space Science of the University of Florence. The main features we aim for are a high level of versatility and a fast pixel rate. Within this project, a versatile CCD sequencer has been realized with interesting and innovative features. Based on a microcontroller and complex programmable logic devices (CPLDs), it allows the selection of all the parameters related to charge transfer and CCD readout (number, duration and overlapping of serial and parallel transfer clocks, number of output nodes, pixel transfer rate) and therefore it allows the use of virtually any CCD sensor. Compared to a common DSP-based sequencer, it is immune to jitter noise and it can also reach pixel rates greater than 40 MHz. The software interface is based on LabVIEW 6i and will allow both local and remote control and display. Furthermore, it will be possible to debug the system remotely and to upgrade the LabVIEW interface itself as well as the microcontroller resident program and the CPLD-implemented schemes.

Gori, Luca; Pace, Emanuele; Tommasi, Leonardo; Sarocchi, D.; Bagnoli, V.; Sozzi, M.; Puri, S.

2001-12-01

211

A digital underwater video camera system for aquatic research in regulated rivers  

USGS Publications Warehouse

We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

Martin, Benjamin M.; Irwin, Elise R.

2010-01-01

212

Gated video signal baseline restoration with a voltage comparator  

Microsoft Academic Search

Video baseline drift in CCD cameras with variable frame formats is often difficult to correct. Conventional baseline restorers generally do not work well at very high frame rates, since frequently only a short, few-nanosecond-wide point in each video line is available for reference. AC-coupled video levels drift as the signal average changes with illumination. Also, low frequency

Bojan T. Turko; George J. Yates; Kevin L. Albright

1994-01-01

213

Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases  

E-print Network

Our research focuses on analysing human activities according to a known behaviorist scenario, in case of noisy and high dimensional collected data. The data come from the monitoring of patients with dementia diseases by wearable cameras. We define a structural model of video recordings based on a Hidden Markov Model. New spatio-temporal features, color features and localization features are proposed as observations. First results in recognition of activities are promising.

Karaman, Svebor; Mégret, Rémi; Dovgalecs, Vladislavs; Dartigues, Jean-François; Gaëstel, Yann

2010-01-01

214

Analysis of Head-Mounted Wireless Camera Videos for Early Diagnosis of Autism  

Microsoft Academic Search

Summary. In this paper we present a computer based approach to analysis of social interaction experiments for the diagnosis of autism spectrum disorders in young children of 6-18 months of age. We apply face detection on videos from a head-mounted wireless camera to measure the time a child spends looking at people. In-Plane rotation invariant Face Detection is used to

Basilio Noris; Karim Benmachiche; Julien Meynet; Jean-philippe Thiran; Aude G. Billard

2008-01-01

215

Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras  

NASA Technical Reports Server (NTRS)

The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

1973-01-01

216

Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)  

SciTech Connect

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

Strehlow, J.P.

1994-08-24

217

Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy  

NASA Technical Reports Server (NTRS)

Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

1984-01-01

218

High speed cooled CCD experiments  

Microsoft Academic Search

Experiments were conducted using cooled and intensified CCD cameras. Two different cameras were identically tested using different optical test stimulus variables. Camera gain and dynamic range were measured by varying microchannel plate (MCP) voltages and controlling light flux using neutral density (ND) filters to yield analog digitized units (ADU) which are digitized values of the CCD pixel's analog charge. A

C. R. Pena; K. L. Albright; G. J. Yates

1998-01-01

219

Geometry compensation using depth and camera parameters for three-dimensional video coding  

NASA Astrophysics Data System (ADS)

One of the important issues for a next generation broadcasting system is how to compress a massive amount of threedimensional (3D) video efficiently. In this paper, we propose a geometry compensation method for 3D video coding exploiting color videos, depth videos and camera parameters. In the proposed method, we first generate a compensated view, which is located at the geometrically same position with the current view, using depth and camera parameters of neighboring views. Then, the compensated view is used as a reference picture to reduce the inter-view redundancies such as disparity and motion vectors. Furthermore, considering the direction of hole-regions, we propose a hole-filling method for picture of P-view to fill up the holes based on the neighboring background pixels. The experimental results show that the proposed algorithm increases BD-PSNRs up to 0.22dB and 0.63dB for P- and B-views, respectively. Meanwhile, we achieved up to 6.28% and 18.32% BD bit-rates gain for P- and B- views, respectively.
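
A minimal Python/numpy sketch of the underlying warping step, assuming pinhole intrinsics K_ref and K_tgt and a rotation R and translation t between the views (all names are chosen here for illustration only; the paper's full compensation and hole-filling method is more involved and is not reproduced):

    import numpy as np

    def warp_pixel(u, v, depth, K_ref, K_tgt, R, t):
        # Back-project pixel (u, v) with its depth through the reference camera,
        # move the 3-D point into the target camera frame, and re-project it.
        X_ref = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
        X_tgt = R @ X_ref + t
        uvw = K_tgt @ X_tgt
        return uvw[0] / uvw[2], uvw[1] / uvw[2]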

Kim, Dong Hyun; Seo, Jungdong; Ryu, Seungchul; Lee, Jin Young; Wey, Ho-Cheon; Sohn, Kwanghoon

2012-03-01

220

A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere  

NASA Technical Reports Server (NTRS)

A portable remote sensing system for precision column measurements of methane has been developed, built and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera and has a potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared Solutions which combines an uncooled InGaAs detector array working up to 1.7 micrometers. Custom software was developed in addition to the basic graphical user interface X-Control provided by the company to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes of the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.

Georgieva, E. M.; Huang, W.; Heaps, W. S.

2012-01-01

221

Photon-number distributions of twin beams generated in spontaneous parametric down-conversion and measured by an intensified CCD camera  

NASA Astrophysics Data System (ADS)

The measurement of photon-number statistics of fields composed of photon pairs, generated in spontaneous parametric down-conversion and detected by an intensified charge-coupled device (iCCD) camera, is described. Final quantum detection efficiencies, electronic noises, finite numbers of detector pixels, transverse intensity spatial profiles of the detected beams, as well as losses of single photons from a pair are taken into account in a developed general theory of photon-number detection. The measured data provided by an iCCD camera with single-photon detection sensitivity are analyzed along the developed theory. Joint signal-idler photon-number distributions are recovered using the reconstruction method based on the principle of maximum likelihood. The range of applicability of the method is discussed. The reconstructed joint signal-idler photon-number distribution is compared with that obtained by a method that uses superposition of signal and noise and minimizes photoelectron entropy. Statistics of the reconstructed fields are identified to be multimode Gaussian. Elements of the measured as well as the reconstructed joint signal-idler photon-number distributions violate classical inequalities. Sub-shot-noise correlations in the difference of the signal and idler photon numbers as well as partial suppression of odd elements in the distribution of the sum of signal and idler photon numbers are observed.
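
A minimal numpy sketch of the sub-shot-noise criterion mentioned above: the variance of the signal-idler photon-number difference is compared with the shot-noise level, the sum of the mean photon numbers. The array names are hypothetical and the paper's full reconstruction procedure is not reproduced here:

    import numpy as np

    def noise_reduction_factor(n_signal, n_idler):
        # Values below 1 indicate sub-shot-noise (nonclassical) correlations
        # between the signal and idler photon numbers.
        n_signal = np.asarray(n_signal, dtype=float)
        n_idler = np.asarray(n_idler, dtype=float)
        var_diff = np.var(n_signal - n_idler)
        shot_noise = n_signal.mean() + n_idler.mean()
        return var_diff / shot_noise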

Peřina, Jan, Jr.; Hamar, Martin; Michálek, Václav; Haderka, Ondřej

2012-02-01

222

Photon-number distributions of twin beams generated in spontaneous parametric down-conversion and measured by an intensified CCD camera  

E-print Network

The measurement of photon-number statistics of fields composed of photon pairs, generated in spontaneous parametric down-conversion and detected by an intensified CCD camera is described. Final quantum detection efficiencies, electronic noises, finite numbers of detector pixels, transverse intensity spatial profiles of the detected beams as well as losses of single photons from a pair are taken into account in a developed general theory of photon-number detection. The measured data provided by an iCCD camera with single-photon detection sensitivity are analyzed along the developed theory. Joint signal-idler photon-number distributions are recovered using the reconstruction method based on the principle of maximum likelihood. The range of applicability of the method is discussed. The reconstructed joint signal-idler photon-number distribution is compared with that obtained by a method that uses superposition of signal and noise and minimizes photoelectron entropy. Statistics of the reconstructed fields are identified to be multi-mode Gaussian. Elements of the measured as well as the reconstructed joint signal-idler photon-number distributions violate classical inequalities. Sub-shot-noise correlations in the difference of the signal and idler photon numbers as well as partial suppression of odd elements in the distribution of the sum of signal and idler photon numbers are observed.

Jan Perina Jr; Ondrej Haderka; Martin Hamar; Vaclav Michalek

2012-02-07

223

Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera  

PubMed Central

In order to estimate the speed of a moving vehicle from side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system and the displacements are measured in pixel units. The magnitudes of the computed vectors must therefore be transformed from image space to object space to obtain their absolute values. This transformation requires image-to-object-space information, which is obtained from the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for these problems of using side view camera images. PMID:22399909
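
A minimal numpy sketch of the second step, converting image-space displacements into an object-space speed, assuming a single scale factor in metres per pixel obtained from the calibration; the function and parameter names are illustrative only, not the authors' implementation:

    import numpy as np

    def estimate_speed(points_t0, points_t1, frame_interval_s, metres_per_pixel):
        # points_t0 / points_t1: (N, 2) pixel coordinates of the tracked points
        # in two successive frames; frame_interval_s: time between the frames.
        d_px = np.linalg.norm(np.asarray(points_t1) - np.asarray(points_t0), axis=1)
        speeds = d_px * metres_per_pixel / frame_interval_s   # m/s per point
        return speeds.mean()

    # e.g. for 25 fps video: estimate_speed(p0, p1, 1 / 25.0, scale)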

Dogan, Sedat; Temiz, Mahir Serhan; Kulur, Sıtkı

2010-01-01

224

Gain, Level, And Exposure Control For A Television Camera  

NASA Technical Reports Server (NTRS)

An automatic-level-control/automatic-gain-control (ALC/AGC) system for a charge-coupled-device (CCD) color television camera prevents overloading in bright scenes by measuring the brightness of the scene from the red, green, and blue output signals and processing these into adjustments of the video amplifiers and of the iris on the camera lens. The system is faster, does not distort video brightness signals, and is built with smaller components.
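
A minimal sketch of this kind of brightness-based control loop, assuming normalised mean R, G, B levels and a multiplicative video gain; the luma weights, target level and step size below are assumptions for illustration, not the values used in the NASA design:

    def agc_step(r_mean, g_mean, b_mean, gain, target=0.5, k=0.1,
                 gain_min=1.0, gain_max=16.0):
        # Scene brightness as a luma-like weighted sum of the mean R, G, B
        # levels (all normalised to 0..1); gain is nudged toward the target.
        brightness = 0.299 * r_mean + 0.587 * g_mean + 0.114 * b_mean
        error = target - brightness
        gain *= (1.0 + k * error)
        return max(gain_min, min(gain_max, gain))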

Major, Geoffrey J.; Hetherington, Rolfe W.

1992-01-01

225

Fire Surveillance System Using an Omnidirectional Camera for Remote Monitoring  

Microsoft Academic Search

This paper proposes new video-based fire surveillance and remote monitoring system for real-life application. Most previous video-based fire detection systems using color information and temporal variations of pixels produce frequent false alarms due to the use of many heuristic features. Plus, they need several cameras to overcome the dead angle problem of a normal CCD camera. Thus, to overcome these

ByoungChul Ko; Hyun-Jae Hwang; In-Gyu Lee; Jae-Yeal Nam

2008-01-01

226

CCD camera systems and support electronics for a White Light Coronagraph and X-ray XUV solar telescope  

NASA Technical Reports Server (NTRS)

Two instruments, a White Light Coronagraph and an X-ray XUV telescope built into the same housing, share several electronic functions. Each instrument uses a CCD as an imaging detector, but due to different spectral requirements, each uses a different type. Hardware reduction, required by the stringent weight and volume allocations of the interplanetary mission, is made possible by the use of a microprocessor. Most instrument functions are software controlled with the end use circuits treated as peripherals to the microprocessor. The instruments are being developed for the International Solar Polar Mission.

Harrison, D. C.; Kubierschky, K.; Staples, M. H.; Carpenter, C. H.

1980-01-01

227

Angle-of-arrival anemometry by means of a large-aperture Schmidt-Cassegrain telescope equipped with a CCD camera.  

PubMed

The frequency spectrum of angle-of-arrival (AOA) fluctuations of optical waves propagating through atmospheric turbulence carries information of wind speed transverse to the propagation path. We present the retrievals of the transverse wind speed, υ_b, from the AOA spectra measured with a Schmidt-Cassegrain telescope equipped with a CCD camera by estimating the "knee frequency," the intersection of two power laws of the AOA spectrum. The rms difference between 30 s estimates of υ_b retrieved from the measured AOA spectra and 30 s averages of the transverse horizontal wind speed measured with an ultrasonic anemometer was 11 cm s⁻¹ for a 1 h period, during which the transverse horizontal wind speed varied between 0 and 80 cm s⁻¹. Potential and limitations of angle-of-arrival anemometry are discussed. PMID:17975575
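
A minimal numpy sketch of the knee-frequency estimate, found as the intersection of two power laws fitted to the measured AOA spectrum. The split frequency separating the two fitting ranges is a user-supplied assumption, and the aperture-dependent relation between the knee frequency and υ_b given in the paper is not reproduced here:

    import numpy as np

    def knee_frequency(freqs, spectrum, split_freq):
        # freqs and spectrum must be positive; fit one power law below and
        # one above split_freq (straight lines in log-log space), then return
        # the frequency at which the two fitted lines intersect.
        logf, logS = np.log10(freqs), np.log10(spectrum)
        lo = logf < np.log10(split_freq)
        hi = ~lo
        a1, b1 = np.polyfit(logf[lo], logS[lo], 1)   # low-frequency power law
        a2, b2 = np.polyfit(logf[hi], logS[hi], 1)   # high-frequency power law
        log_fk = (b2 - b1) / (a1 - a2)               # line intersection
        return 10.0 ** log_fk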

Cheon, Yonghun; Hohreiter, Vincent; Behn, Mario; Muschinski, Andreas

2007-11-01

228

Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras  

USGS Publications Warehouse

GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m² of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
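
A minimal numpy sketch of the spike criterion described above (samples elevated more than two standard deviations above the mean of the radiance time series); function and argument names are illustrative:

    import numpy as np

    def radiance_spikes(radiance, n_sigma=2.0):
        # Return the indices of samples more than n_sigma standard deviations
        # above the mean of the radiance time series.
        radiance = np.asarray(radiance, dtype=float)
        threshold = radiance.mean() + n_sigma * radiance.std()
        return np.where(radiance > threshold)[0]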

Harris, A.J.L.; Thornber, C.R.

1999-01-01

229

Fusion of video cameras with laser range scanners- the coastal monitoring system of the future?  

NASA Astrophysics Data System (ADS)

Coastal video monitoring systems have proven to be the most efficient way to follow the multiple scales of coastal hydro- and morpho-dynamic processes, and have resulted in important scientific contributions during the past 3 decades. The present contribution reports on recent developments in optical monitoring techniques, using sensor arrays which combine digital video cameras with laser range scanners; an approach which can improve the performance of several field and laboratory applications for nearshore measurements. Extensive testing during large-scale experiments, simulating highly erosive storm events and the consecutive post-storm recovery, has shown that the hybrid approach can reduce geo-rectification errors by an order of magnitude and in several cases, can facilitate the extraction of quantitative information from coastal imagery. The system provided wave-by-wave, water and beach surface elevation measurements in the swash zone, and has great potential for several other applications, such as detailed monitoring of wave breaking and other complex, three-dimensional wave propagation processes, as well as of complex morphologies without many of the artefacts of monoscopic video systems. Finally, apart from the laboratory, stationary version, it has been successfully implemented on a mobile platform, suitable for field application and capable of monitoring coastal areas of several km.

Vousdoukas, Michalis

2014-05-01

230

Multiformat video and laser cameras: history, design considerations, acceptance testing, and quality control. Report of AAPM Diagnostic X-Ray Imaging Committee Task Group No. 1.  

PubMed

Acceptance testing and quality control of video and laser cameras is relatively simple, especially with the use of the SMPTE test pattern. Photographic quality control is essential if one wishes to be able to maintain the quality of video and laser cameras. In addition, photographic quality control must be carried out with the film used clinically in the video and laser cameras, and with a sensitometer producing a light spectrum similar to that of the video or laser camera. Before the end of the warranty period a second acceptance test should be carried out. At this time the camera should produce the same results as noted during the initial acceptance test. With the appropriate acceptance and quality control the video and laser cameras should produce quality images throughout the life of the equipment. PMID:8497235

Gray, J E; Anderson, W F; Shaw, C C; Shepard, S J; Zeremba, L A; Lin, P J

1993-01-01

231

Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.  

PubMed

In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed the hand motion trajectories in different swimming styles and qualitatively compared this with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm - 2D plate: 0.73 mm) was comparable to out of water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in the style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in terms of the motion patterns and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards the quantitative 3D underwater motion analysis. PMID:22435960
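
For reference, a minimal numpy sketch of the classical 11-parameter DLT used here as the baseline calibration: with N >= 6 control points of known object-space coordinates and measured image coordinates, the parameters follow from a linear least-squares solve. This is the textbook formulation, not the authors' code:

    import numpy as np

    def dlt_calibrate(object_pts, image_pts):
        # object_pts: (N, 3) object-space coordinates; image_pts: (N, 2) pixel
        # coordinates; N >= 6.  Returns the DLT parameter vector L1..L11.
        A, b = [], []
        for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
            b.extend([u, v])
        L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
        return L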

Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

2013-01-01

232

Passive Submillimeter-wave Stand-off Video Camera for Security Applications  

NASA Astrophysics Data System (ADS)

We present the concept and experimental set-up of a passive submillimeter-wave stand-off imaging system for security applications. Our ambition is the design of an application-ready and user-friendly camera providing high sensitivity and high spatial resolution at video frame rates. As an intermediate step towards this goal, the current prototype already achieves a frame rate of 10 frames per second and a spatial resolution below 2 cm at 8 m distance. The camera is the result of a continuous development and a unique concept that yielded first high-resolution passive submillimeter-wave images provided by cryogenic sensors in May et al. (2007). It is based on an array of 20 superconducting transition-edge sensors operated at a temperature of 450 mK, a closed-cycle cooling system, a Cassegrain-type optics with a 50 cm main mirror, and an opto-mechanical scanner. Its outstanding features are the scanning solution allowing for high frame rates and the compact and integrated system design.

Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Anders, Solveig; Thorwirth, Günter; Zakosarenko, Viatcheslav; Schubert, Marco; Krause, Torsten; Starkloff, Michael; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

2010-11-01

233

Spatial correlations of spontaneously down-converted photon pairs detected with a single-photon-sensitive CCD camera.  

PubMed

A single-photon-sensitive intensified charge-coupled-device (ICCD) camera has been used to simultaneously detect, over a broad area, degenerate and nondegenerate photon pairs generated by the quantum-optical process of spontaneous parametric down-conversion. We have developed a new method for determining the quantum fourth-order correlations in spatially extended detection systems such as this one. Our technique reveals the expected phase-matching-induced spatial correlations in a 2-f Fourier-transform system. PMID:19381242

Jost, B; Sergienko, A; Abouraddy, A; Saleh, B; Teich, M

1998-07-20

234

Designing and testing a calibrating procedure for combining the coordination systems of a handling robot and a stationed video camera  

Microsoft Academic Search

The paper discusses some of the results of designing and testing a calibrating procedure with important implications in contemporary robotics. This procedure is also mentioned in the literature as the “procedure for calibrating by matching the coordination systems of a robot and a stationed video camera”. The procedure is tested by a training robotechnic system, which consists of an anthropomorphic

David Avishay; Veselin Pavlov; Ivan Avramov

2011-01-01

235

DIGITAL CAMERA WORK FOR SOCCER VIDEO PRODUCTION WITH EVENT RECOGNITION AND ACCURATE BALL TRACKING BY SWITCHING SEARCH METHOD  

E-print Network

In this paper, we propose a method of digital zooming by automatically recognizing the soccer game events such as penalty kick and free kick based on player and ball tracking. We also propose an ef

Takiguchi, Tetsuya

236

A unified and efficient framework for court-net sports video analysis using 3D camera modeling  

NASA Astrophysics Data System (ADS)

The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real-speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

Han, Jungong; de With, Peter H. N.

2007-01-01

237

A versatile digital video engine for safeguards and security applications  

SciTech Connect

The capture and storage of video images have been major engineering challenges for safeguards and security applications since the video camera provided a method to observe remote operations. The problems of designing reliable video cameras were solved in the early 1980s with the introduction of the CCD (charge-coupled device) camera. The first CCD cameras cost in the thousands of dollars but have now been replaced by cameras costing in the hundreds. The remaining problem of storing and viewing video images in both attended and unattended video surveillance systems and remote monitoring systems is being solved by sophisticated digital compression systems. One such system is the PC-104 three-card set, which is literally a "video engine" that can provide power for video storage systems. The use of digital images in surveillance systems makes it possible to develop remote monitoring systems, portable video surveillance units, image review stations, and authenticated camera modules. This paper discusses the video card set and how it can be used in many applications.

Hale, W.R.; Johnson, C.S. [Sandia National Labs., Albuquerque, NM (United States); DeKeyser, P. [Fast Forward Video, Irvine, CA (United States)

1996-08-01

238

The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera  

NASA Astrophysics Data System (ADS)

To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-$500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Figure caption: Camera station set up at Badger Canyon, Arizona. Inset: view into box; clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.
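
A MicroPython-flavoured sketch of the trigger idea: the microcontroller reads a sensor and emits a short pulse on the line feeding the camera's USB trigger input. The pin numbers, ADC threshold and timing are hypothetical (written here as for an RP2040-style port), and driving a true 5 V level from a 3.3 V GPIO would additionally need a level shifter, which is omitted:

    # MicroPython sketch; pins, ADC scaling and threshold are assumptions.
    import time
    from machine import Pin, ADC

    trigger = Pin(2, Pin.OUT)      # drives the pulse toward the camera USB line
    sensor = ADC(Pin(26))          # e.g. a stage or vibration sensor

    THRESHOLD = 30000              # raw ADC counts that indicate an event

    while True:
        if sensor.read_u16() > THRESHOLD:
            trigger.value(1)       # short pulse triggers a photo or starts video
            time.sleep_ms(100)
            trigger.value(0)
            time.sleep(5)          # simple lockout between shots
        time.sleep_ms(200)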

Wickert, A. D.

2010-12-01

239

Single-Camera Panoramic-Imaging Systems  

NASA Technical Reports Server (NTRS)

Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled-device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

Lindner, Jeffrey L.; Gilbert, John

2007-01-01

240

Nyquist sampling theorem: understanding the illusion of a spinning wheel captured with a video camera  

NASA Astrophysics Data System (ADS)

Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the sampling time is chosen judiciously, then it is possible to accurately determine the frequency of a signal varying periodically with time. This paper is of educational value as it presents the principles of sampling during data acquisition. The concept of the Nyquist sampling theorem is usually introduced very briefly in the literature, with very few practical examples to grasp its importance during data acquisition. Through a series of carefully chosen examples, we attempt to present data sampling from the elementary conceptual idea and try to lead the reader naturally to the Nyquist sampling theorem so we may more clearly understand why a signal can be interpreted incorrectly during a data acquisition procedure in the case of undersampling.
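
A small Python example of the spinning-wheel illusion discussed above: the apparent rotation rate is the true rate folded back around multiples of the frame rate. The specific numbers are illustrative only:

    def apparent_frequency(f_true, f_sample):
        # Aliased (apparent) frequency of a periodic signal of frequency f_true
        # sampled at f_sample; a negative result means the wheel appears to
        # spin backwards.
        return f_true - round(f_true / f_sample) * f_sample

    # A wheel spinning at 23 Hz filmed at 24 frames per second appears to
    # rotate backwards at about 1 Hz:
    print(apparent_frequency(23.0, 24.0))   # -> -1.0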

Lévesque, Luc

2014-11-01

241

Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment  

NASA Astrophysics Data System (ADS)

Long-term soil erosion studies require substantial effort, particularly when continuous measurements need to be maintained. High costs are associated with keeping field equipment maintained and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, reducing their reach in many cases. In this work, a system of three video-cameras, a recorder and a transmission modem (3G technology) has been set up at a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of an olive orchard catchment of 6.4 ha. Rainfall is measured with an automatic raingauge that records intensity at one-minute intervals. The discharge is measured by a critical-depth flume, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined level, the automatic sampler turns on and fills a bottle at different intervals according to a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video-camera system is to improve the quality of the dataset by i) visual analysis of the measurement conditions of flow into the flume; ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and the throat of the flume. To cross-check the values of the ultrasonic sensor, a third camera records the flow level next to a measuring tape. The system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric intensity level. Thus, the video-cameras record the event only when there is enough flow. This simplifies post-processing and reduces the cost of downloading recordings. The preliminary cross-check analysis will be presented, as well as the main improvements in the sampling program.

Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

2014-05-01

242

Camera Animation  

NSDL National Science Digital Library

A general discussion of the use of cameras in computer animation. This section includes principles of traditional film techniques and suggestions for the use of a camera during an architectural walkthrough. This section includes html pages, images and one video.

2011-01-30

243

Technical assessment of low light color camera technology  

Microsoft Academic Search

In nighttime overcast conditions with a new moon (near-total darkness), typical light levels may only reach 10⁻²-10⁻⁴ lux. As such, standard CCD/CMOS video cameras have insufficient sensitivity to capture useful images. Third generation night vision cameras (Gen III NV) are the state-of-the-art in terms of imaging clarity and resolution at this light level, but rely on green or green/yellow phosphors

Scott A. Ramsey; Joseph Peak; Brian Setlik

2010-01-01

244

Lights, Camera, Action! A Guide to Using Video Production and Instruction in the Classroom.  

ERIC Educational Resources Information Center

This instructional guide offers practical ideas for incorporating video production in the classroom. Aspects of video production are presented sequentially. Strategies and suggestions are given for using video production to reinforce traditional subject content and provide interdisciplinary connections. The book is organized in two parts. After…

Limpus, Bruce

245

Application: Surveillance Data-Stream Compression. Need: Continuous monitoring of scene with video camera

E-print Network

Slide extract (recoverable headings only): only these "interesting" video frames; Embedded System Strategy: Model-Based Design; Captured Video Frames; System Design & Simulation; Steps to Target the TI C6416 DSK; Design Verification; Automating embedded software.

Kepner, Jeremy

246

High speed cooled CCD experiments  

SciTech Connect

Experiments were conducted using cooled and intensified CCD cameras. Two different cameras were identically tested using different optical test stimulus variables. Camera gain and dynamic range were measured by varying microchannel plate (MCP) voltages and controlling light flux using neutral density (ND) filters to yield analog digitized units (ADU) which are digitized values of the CCD pixel's analog charge. A Xenon strobe (5 µs FWHM, blue light, 430 nm) and a doubled Nd:YAG laser (10 ns FWHM, green light, 532 nm) were both used as pulsed illumination sources for the cameras. Images were captured on a PC desktop computer system using commercial software. Camera gain and integration time values were adjusted using camera software. Mean values of camera volts versus input flux were also obtained by performing line scans through regions of interest. Experiments and results will be discussed.

Pena, C.R.; Albright, K.L.; Yates, G.J.

1998-12-31

247

The MMT all-sky camera  

NASA Astrophysics Data System (ADS)

The MMT all-sky camera is a low-cost, wide-angle camera system that takes images of the sky every 10 seconds, day and night. It is based on an Adirondack Video Astronomy StellaCam II video camera and utilizes an auto-iris fish-eye lens to allow safe operation under all lighting conditions, even direct sunlight. This combined with the anti-blooming characteristics of the StellaCam's detector allows useful images to be obtained during sunny days as well as brightly moonlit nights. Under dark skies the system can detect stars as faint as 6th magnitude as well as very thin cirrus and low surface brightness zodiacal features such as gegenschein. The total hardware cost of the system was less than $3500 including computer and framegrabber card, a fraction of the cost of comparable systems utilizing traditional CCD cameras.

Pickering, T. E.

2006-06-01

248

An IEEE 1394-FireWire-Based Embedded Video System for Surveillance Applications  

Microsoft Academic Search

In order to address the need for portable inexpensive systems for video surveillance, we built a computer vision system that provides digital video data transfer from a CCD camera using embedded software/hardware via the IEEE 1394 protocol, also known as FireWire or i.Link, and Ethernet TCP/IP interfaces. Controlled by an extended version of the IEEE 1394-based digital camera specification (DCAM),

Ashraf Saad; Donnie Smith

2003-01-01

249

The imaging system design of three-line LMCCD mapping camera  

NASA Astrophysics Data System (ADS)

This paper first introduces the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. It then describes several pivotal designs of the imaging system: the focal plane module, the video signal processing, the imaging system controller, the synchronous photography of the forward, nadir and backward cameras, and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are reported: the precision of synchronous photography among the forward, nadir and backward cameras, and with the line-matrix CCD of the nadir camera, is better than 4 ns; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR measured in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is held below 30 °C over a 15 minute working period. These results satisfy the requirements on synchronous photography, focal plane module temperature control and SNR, and guarantee the precision needed for satellite photogrammetry.

Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

2011-08-01

250

Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera  

Microsoft Academic Search

This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the Internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is

Shinji Morita; Kazumasa Yamazawa; Naokazu Yokoya

2003-01-01

251

241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report  

SciTech Connect

This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

WERRY, S.M.

2000-03-23

252

Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments  

ERIC Educational Resources Information Center

In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

Schultz, Patrick L.; Quinn, Andrew S.

2014-01-01

253

Architecture of PAU survey camera readout electronics  

NASA Astrophysics Data System (ADS)

PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to connect to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

2012-07-01

254

A LOW-COST VIDEO CAMERA FOR MACHINE VISION AND CONSUMER USE  

E-print Network

This paper describes the development of a low-cost video camera for machine vision and consumer use, initially for toys such as radio-controlled cars, with a target manufacture price of $20. The camera also had to have good enough resolution so it could be used in a number of other applications.

Bradbeer, Robin Sarah

255

Video-based realtime IMU-camera calibration for robot navigation  

NASA Astrophysics Data System (ADS)

This paper introduces a new method for fast calibration of inertial measurement units (IMU) with cameras being rigidly coupled. That is, the relative rotation and translation between the IMU and the camera is estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in realtime, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity of optimizing the used trajectory online, increasing the quality and minimizing the time effort for calibration. Except for a marker pattern, used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time. Depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup. By this, deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependency of the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on the applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.

Petersen, Arne; Koch, Reinhard

2012-06-01

256

Lights! Camera! Action! Producing Library Instruction Video Tutorials Using Camtasia Studio  

ERIC Educational Resources Information Center

From Web guides to online tutorials, academic librarians are increasingly experimenting with many different technologies in order to meet the needs of today's growing distance education populations. In this article, the author discusses one librarian's experience using Camtasia Studio to create subject specific video tutorials. Benefits, as well…

Charnigo, Laurie

2009-01-01

257

A simple, inexpensive video camera setup for the study of avian nest activity  

Microsoft Academic Search

Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus),

John B. Sabine; J. Michael Meyers; Sara H. Schweitzer

258

MULTIPLE BACKGROUND SPRITE GENERATION USING CAMERA MOTION CHARACTERIZATION FOR OBJECT-BASED VIDEO CODING  

E-print Network

-based video coding can provide higher coding gain than common H.264/AVC for single-view and the MVC standard based on H.264 for multi-view (MVC). The use of background sprites outperforms the AVC/MVC especially

Wichmann, Felix

259

Speedup the Multi-camera Video-Surveillance System for Elder Falling Detection  

Microsoft Academic Search

Nowadays, all countries have to face growing populations of elders. For most elders, unpredictable falling accidents may occur at the corner of stairs or in a long corridor due to declining bodily function. If the rescue of a fallen elder, who may have fainted, is delayed, more serious injury may follow. Traditional security or video surveillance systems need someone to

Wann-Yun Shieh; Ju-Chin Huang

2009-01-01

260

A multi-camera approach to sensor evaluation in video surveillance  

Microsoft Academic Search

In this paper, we provide an objective sensor performance evaluation approach to automatically carry out the sensor evaluation task in a multi-sensor system for video surveillance. That is, given a set of sensors monitoring the same area, the proposed technique estimates the accuracy of each sensor in detecting a given target. This measure depends on the conditions under which the

Lauro Snidaro; Gian Luca Foresti

2005-01-01

261

Technical Note: Determining regions of interest for CCD camera-based fiber optic luminescence dosimetry by examining signal-to-noise ratio  

PubMed Central

Purpose: The goal of this work was to develop a method for determining regions of interest (ROIs) based on signal-to-noise ratio (SNR) for the analysis of charge-coupled device (CCD) images used in luminescence-based radiation dosimetry. Methods: The ROI determination method was developed using images containing high- and low-intensity signals taken with a CCD-based, fiber optic plastic scintillation detector system. A series of threshold intensity values was defined for each signal, and ROIs were fitted around the pixels that exceeded each threshold. The SNR for each ROI was calculated and the relationship between SNR and ROI area was examined. Results: The SNR was found to increase rapidly over small ROIs for both signal levels. After reaching a maximum, the SNR of the low-intensity signal decreased steadily over larger ROIs, but the high-intensity SNR did not decrease appreciably over the ROI sizes studied. The spatial extent of the normalized images showed intensity independence, suggesting that a fixed ROI is useful for varying signal levels. Conclusions: The method described here constitutes a simple yet effective method for defining ROIs based on SNR that could enhance the low-level detection capabilities of CCD-based luminescence dosimetry systems. PMID:21520848
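
A minimal numpy sketch of the threshold-and-fit idea described in the Methods: pixels above each threshold form an ROI and an SNR is computed for it. The simple SNR definition below (summed signal over the expected background noise in the ROI) is an assumption for illustration; the paper's exact definition is not reproduced:

    import numpy as np

    def snr_vs_threshold(image, thresholds, bg_sigma):
        # For each threshold, form an ROI from pixels above it and compute a
        # simple SNR (bg_sigma = per-pixel background standard deviation).
        results = []
        for thr in thresholds:
            roi = image > thr
            n_pix = int(roi.sum())
            if n_pix == 0:
                continue
            signal = image[roi].sum()
            snr = signal / (bg_sigma * np.sqrt(n_pix))
            results.append((thr, n_pix, snr))
        return results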

Klein, David M.; Therriault-Proulx, Francois; Archambault, Louis; Briere, Tina M.; Beaulieu, Luc; Beddar, A. Sam

2011-01-01

262

Combined video and laser camera for inspection of old mine shafts L. Cauvin (INERIS, Institut National de l'Environnement industriel et des RISques)  

E-print Network

Mining sites may indeed generate consequences that may affect people and goods located in the vicinity. This is notably due to the fact that shallow works often correspond to very old mining periods with very high

Boyer, Edmond

263

Streak camera-TV system with improved S/N ratio incorporating a digital video recorder directly interfaced to a computer.  

PubMed

By incorporating a digital video recorder, a streak camera system previously developed for recording time-resolved transient absorption and emission spectra was greatly improved with respect to reproducibility and accuracy. The requirements for on-line computer time were also reduced. PMID:18699459

Schmidt, K H; Gordon, S

1979-12-01

264

Point Counts Underestimate the Importance of Arctic Foxes as Avian Nest Predators: Evidence from Remote Video Cameras in Arctic Alaskan Oil Fields  

Microsoft Academic Search

We used video cameras to identify nest predators at active shorebird and passerine nests and conducted point count surveys separately to determine species richness and detection frequency of potential nest predators in the Prudhoe Bay region of Alaska. From the surveys, we identified 16 potential nest predators, with glaucous gulls (Larus hyperboreus) and parasitic jaegers (Stercorarius parasiticus) making up more

JOSEPH R. LIEBEZEIT; STEVE ZACK

2008-01-01

265

The Advanced Camera for the Hubble Space Telescope  

Microsoft Academic Search

The Advanced Camera for the Hubble Space Telescope has three cameras. The first, the Wide Field Camera, will be a high-throughput, wide field, 4096 X 4096 pixel CCD optical and I-band camera that is half-critically sampled at 500 nm. The second, the High Resolution Camera (HRC), is a 1024 X 1024 pixel CCD camera that is critically sampled at

G. D. Illingworth; Paul D. Feldman; David A. Golimowski; Zlatan Tsvetanov; Christopher J. Burrows; James H. Crocker; Pierre Y. Bely; George F. Hartig; Randy A. Kimble; Michael P. Lesser; Richard L. White; Tom Broadhurst; William B. Sparks; Robert A. Woodruff; Pamela Sullivan; Carolyn A. Krebs; Douglas B. Leviton; William Burmester; Sherri Fike; Rich Johnson; Robert B. Slusher; Paul Volmer

1997-01-01

266

Endoscopic video texture mapping on pre-built 3-D anatomical objects without camera tracking.  

PubMed

Traditional minimally invasive surgeries use a view port provided by an endoscope or laparoscope. We argue that a useful addition to typical endoscopic imagery would be a global 3-D view providing a wider field of view with explicit depth information for both the exterior and interior of target anatomy. One technical challenge of implementing such a view is finding efficient and accurate means of registering texture images from the laparoscope on prebuilt 3-D surface models of target anatomy derived from magnetic resonance (MR) or computed tomography (CT) images. This paper presents a novel method for addressing this challenge that differs from previous approaches, which depend on tracking the position of the laparoscope. We take advantage of the fact that neighboring frames within a video sequence usually contain enough coherence to allow a 2-D-2-D registration, which is a much more tractable problem. The texturing process can be bootstrapped by an initial 2-D-3-D user-assisted registration of the first video frame followed by mostly-automatic texturing of subsequent frames. We perform experiments on phantom and real data, validate the algorithm against the ground truth, and compare it with the traditional tracking method by simulations. Experiments show that our method improves registration performance compared to the traditional tracking approach. PMID:19666333
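
A minimal OpenCV sketch of a 2-D-2-D registration between neighbouring frames (ORB features, brute-force matching and a RANSAC homography). This is a generic illustration of exploiting frame-to-frame coherence, not the authors' registration pipeline:

    import cv2
    import numpy as np

    def register_frames(prev_gray, curr_gray):
        # Estimate a homography mapping the previous frame onto the current
        # one from matched ORB features, with RANSAC for robustness.
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H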

Wang, Xianwang; Zhang, Qing; Han, Qiong; Yang, Ruigang; Carswell, Melody; Seales, Brent; Sutton, Erica

2010-06-01

267

Metrology of linear CCD arrays  

Microsoft Academic Search

In order to define the characteristic parameters of CCD linear sensors used for an electrooptic camera on the SPOT-CNES remote sensing satellite, an automatic measuring system was developed. Attention is given to the automatic acquisition and optical assembly of the test bench, and to methods of measuring sensitivity, linearity, uniformity and spectral response. Experimental results relative to the modulation transfer

D. Bize

1980-01-01

268

Implementation of insect-vision-based motion detection models using a video camera  

NASA Astrophysics Data System (ADS)

Despite their limited information processing capabilities, insects (with brains smaller than a pinhead) are able to manoeuvre with precision through environments that are highly-crowded and contain moving objects. Their ability to avoid collisions using limited computing power forms the basis for this project, in which we attempt to simulate the motion detection ability of insects using two models - the Horridge Template Model and the Reichardt Correlation Model. In this project, the direction of motion of a moving object and its angular speed are determined by capturing visual data using a web camera focussed on a moving pattern generated by VisionEgg software. The performance of both the models is quantitatively compared and various error-reducing techniques are investigated.
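
A minimal numpy sketch of the Reichardt correlation model named above: the signals of two neighbouring photoreceptors are multiplied after delaying one of them, and the two mirror-symmetric products are subtracted. The delay (in samples) and the input arrays are assumptions for illustration:

    import numpy as np

    def reichardt_correlator(a, b, delay):
        # a, b: 1-D time series from two neighbouring photoreceptors.
        # A positive mean output indicates motion from a towards b,
        # a negative mean output the opposite direction.
        a = np.asarray(a, float)
        b = np.asarray(b, float)
        a_d = np.roll(a, delay)          # delayed copy of a
        b_d = np.roll(b, delay)          # delayed copy of b
        out = a_d * b - b_d * a          # opponent subtraction
        return out[delay:]               # drop samples affected by the wrap-around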

Budimir, Andrew; Correll, Sean; Rajesh, Sreeja; Abbott, Derek

2005-02-01

269

A simple, inexpensive video camera setup for the study of avian nest activity  

USGS Publications Warehouse

Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

Sabine, J.B.; Meyers, J.M.; Schweitzer, S.H.

2005-01-01

270

Determining the Alignment of HiRes Optics Using a CCD Camera (Proceedings of ICRC 2001: 639, © Copernicus Gesellschaft 2001).  

E-print Network

Figure caption (fragment): The stars on the screen come from the constellation Orion. At the upper-right are the three stars of the "belt". At the center-left is Orionis, the "left shoulder". The holes in the screen are clearly visible

271

BLINC: a 640x480 CMOS active pixel video camera with adaptive digital processing, extended optical dynamic range, and miniature form factor  

NASA Astrophysics Data System (ADS)

A miniaturized camera utilizing advanced extended dynamic range CMOS APS imager technology and employing real-time histogram equalization has been developed for capturing scenes having high intra-scenic dynamic range. The camera adapts to changes in scene brightness and contrast in two frame periods, and acquires fully processed images in less than 100 milliseconds after power is applied. The BLINC camera contains an automatic exposure time control and is capable of capturing over 8 equivalent f-stops of optical dynamic range. This exposure time control along with programmable extended dynamic range and built-in 12-bit analog to digital converter allows the Sarnoff APS75 CMOS VGA image sensor to accommodate up to 15 f-stops of intra- scenic dynamic range. The APS75 sensor was fabricated with standard CMOS-7 design rules in a 0.5 micron SPTM process. Progressive scan digital video is stored and processed in real-time by an application specific integrated circuit image processor to provide optimal image contrast and exposure. The processed video is then transformed to 10-bits with a proprietary adaptive non-linear mapper before being converted to standard RS-170 analog video. Small size, light weight and low energy consumption make this camera well suited for UAV, and automotive applications.
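
A minimal numpy sketch of global histogram equalization of the kind the camera's image processor performs in real time, written here offline for clarity; the bin count and output depth are assumptions, and the camera's adaptive non-linear mapper is not reproduced:

    import numpy as np

    def histogram_equalize(image, levels=1024):
        # Map a wide-dynamic-range image (e.g. 12-bit data) onto 'levels'
        # output codes so that the output histogram is roughly flat.
        flat = image.ravel().astype(float)
        hist, bin_edges = np.histogram(flat, bins=levels,
                                       range=(flat.min(), flat.max() + 1))
        cdf = hist.cumsum().astype(float)
        cdf /= cdf[-1]                                   # normalise to 0..1
        mapped = np.interp(flat, bin_edges[:-1], cdf * (levels - 1))
        return mapped.reshape(image.shape).astype(np.uint16)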

Smith, Scott T.; Zalud, Peter; Kalinowski, John; McCaffrey, Nathaniel J.; Levine, Peter A.; Lin, Min-Long

2001-05-01

272

Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.  

PubMed

The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ·d⁻¹ but were as high as 167,797 kJ·d⁻¹ corresponding to turtles consuming an average of 330 kg wet mass·d⁻¹ (up to 840 kg·d⁻¹) or approximately 261 (up to 664) jellyfish·d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass·d⁻¹ equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to southward migration. PMID:22438906

Heaslip, Susan G; Iverson, Sara J; Bowen, W Don; James, Michael C

2012-01-01

273

Jellyfish Support High Energy Intake of Leatherback Sea Turtles (Dermochelys coriacea): Video Evidence from Animal-Borne Cameras  

PubMed Central

The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08–3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83–100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ • d(-1) but were as high as 167,797 kJ • d(-1), corresponding to turtles consuming an average of 330 kg wet mass • d(-1) (up to 840 kg • d(-1)) or approximately 261 (up to 664) jellyfish • d(-1). Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass • d(-1), equating to an average energy intake of 3–7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to southward migration. PMID:22438906

Heaslip, Susan G.; Iverson, Sara J.; Bowen, W. Don; James, Michael C.

2012-01-01

274

Evaluation of a high dynamic range video camera with non-regular sensor  

NASA Astrophysics Data System (ADS)

Although there is steady progress in sensor technology, imaging with a high dynamic range (HDR) is still difficult for motion imaging with high image quality. This paper presents our new approach for video acquisition with high dynamic range. The principle is based on optical attenuation of some of the pixels of an existing image sensor. This well-known method traditionally trades spatial resolution for an increase in dynamic range. In contrast to existing work, we use a non-regular pattern of optical ND filters for attenuation. This allows for an image reconstruction that is able to recover high resolution images. The reconstruction is based on the assumption that natural images have nearly sparse representations in transform domains, which allows for recovery of scenes with high detail. The proposed combination of non-regular sampling and image reconstruction leads to a system with an increase in dynamic range without sacrificing spatial resolution. In this paper, a further evaluation of the achievable image quality is presented. In our prototype we found that crosstalk is present and significant. The discussion thus shows the limits of the proposed imaging system.

Schöberl, Michael; Keinert, Joachim; Ziegler, Matthias; Seiler, Jürgen; Niehaus, Marco; Schuller, Gerald; Kaup, André; Foessel, Siegfried

2013-01-01

275

Hardware environment for a retinal CCD visual sensor Fernando Pardo  

E-print Network

Hardware environment for a retinal CCD visual sensor. Fernando Pardo, Enrico Martinuzzi. A CCD retinal sensor has been developed recently. The main property of this device is its special structure with respect to the matrix-based CCD sensors used in normal cameras. This special structure requires ...

Valencia, Universidad de

276

AY 105 Lab Experiment #6: CCD Characteristics II: Image Analysis  

E-print Network

AY 105 Lab Experiment #6: CCD Characteristics II: Image Analysis. Purpose: In this lab, you will continue investigating the properties of CCDs that you started in Experiment #5. The CCD enables ... a CCD at the focus of a telescope or behind a camera lens allows it to detect and record images, ...

Hillenbrand, Lynne

277

Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries  

SciTech Connect

After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R²) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R² = 0.94, 0.82, and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.

Dierberg, F.E. [DB Environmental Labs., Inc., Rockledge, FL (United States); Zaitzeff, J. [National Oceanographic and Atmospheric Administration, Washington, DC (United States)

1997-08-01

278

Ground-based CCD astrometry with wide field imagers. IV. An improved geometric-distortion correction for the blue prime-focus camera at the LBT  

NASA Astrophysics Data System (ADS)

High precision astrometry requires an accurate geometric-distortion solution. In this work, we present an average correction for the blue camera of the Large Binocular Telescope which enables a relative astrometric precision of ~15 mas for the B-Bessel and V-Bessel broad-band filters. The result of this effort is used in two companion papers: the first to measure the absolute proper motion of the open cluster M 67 with respect to the background galaxies; the second to decontaminate the color-magnitude diagram of M 67 from field objects, enabling the study of the end of its white dwarf cooling sequence. Many other applications might find this distortion correction useful. Based on data acquired using the Large Binocular Telescope (LBT) at Mt. Graham, Arizona, during the commissioning of the Large Binocular Blue Camera. The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. Visiting Ph.D. Student at STScI under the "2008 graduate research assistantship" program.

Bellini, A.; Bedin, L. R.

2010-07-01

279

Video photographic considerations for measuring the proximity of a probe aircraft with a smoke seeded trailing vortex  

NASA Technical Reports Server (NTRS)

Considerations for acquiring and analyzing 30 Hz video frames from charge coupled device (CCD) cameras mounted in the wing tips of a Beech T-34 aircraft are described. Particular attention is given to the characterization and correction of optical distortions inherent in the data.

Childers, Brooks A.; Snow, Walter L.

1990-01-01

280

Embedded Smart Camera Performance Analysis  

Microsoft Academic Search

Increasingly powerful integrated circuits are making an entire range of new applications possible. Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. A smart camera combines video sensing, high-level video processing and

N. F. Kahar; R. B. Ahmad; Z. Hussin; A. N. C. Rosli

2009-01-01

281

Magellan Instant Camera testbed  

E-print Network

The Magellan Instant Camera (MagIC) is an optical CCD camera that was built at MIT and is currently used at Las Campanas Observatory (LCO) in La Serena, Chile. It is designed to be both simple and efficient with minimal ...

McEwen, Heather K. (Heather Kristine), 1982-

2004-01-01

282

Testing fully depleted CCD  

NASA Astrophysics Data System (ADS)

The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted and back illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s. This is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE in the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics and quantum efficiency. The X-ray images were also used for the analysis of the charge diffusion for different substrate voltages (VSUB). Regarding the cosmetics, and in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles at the external edges. The origin of this pattern may be related to the assembly process. A second one appears in the dark images, and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be the result of a defective coating process.
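
For readers unfamiliar with the photon transfer curve mentioned above, the standard way to extract gain and read noise from it uses one pair of flat-field frames and one pair of bias frames. The sketch below (plain NumPy on hypothetical array inputs, not the PAU calibration code) shows the usual relations: the gain in e-/ADU is the mean flat signal divided by its shot-noise variance, and the read noise follows from the bias-frame variance scaled by that gain.

    import numpy as np

    def ptc_gain_and_read_noise(flat1, flat2, bias1, bias2):
        # Mean illuminated signal above bias, in ADU.
        signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (bias1.mean() + bias2.mean())
        # The variance of the difference of a frame pair removes fixed-pattern
        # structure; dividing by 2 gives the per-frame variance.
        flat_var = np.var(flat1.astype(float) - flat2.astype(float)) / 2.0
        bias_var = np.var(bias1.astype(float) - bias2.astype(float)) / 2.0
        gain = signal / (flat_var - bias_var)     # e- per ADU
        read_noise = gain * np.sqrt(bias_var)     # e- rms
        return gain, read_noise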

Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan

2014-08-01

283

Vacuum Camera Cooler  

NASA Technical Reports Server (NTRS)

Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

Laugen, Geoffrey A.

2011-01-01

284

Are traditional methods of determining nest predators and nest fates reliable? An experiment with Wood Thrushes (Hylocichla mustelina) using miniature video cameras  

USGS Publications Warehouse

We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998-2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.

Williams, G.E.; Wood, P.B.

2002-01-01

285

CAMERA CONTEXT BASED ESTIMATION OF SPATIAL AND TEMPORAL ACTIVITY PARAMETERS FOR VIDEO QUALITY METRICS IN AUTOMOTIVE APPLICATIONS  

E-print Network

METRICS IN AUTOMOTIVE APPLICATIONS. Christian Lottermann (1,2), Alexander Machado (1), Damien Schroeder (2), Wolfgang Hintermaier (1), Eckehard Steinbach (2); (1) BMW Group, Munich, Germany; (2) Institute for Media ... the video. Besides the influence of the content-dependent parameters of the source video and the encoding ...

Steinbach, Eckehard

286

Digital readout for image converter cameras  

NASA Astrophysics Data System (ADS)

There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of the trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programmes offer different levels of image manipulation, including enhancement routines and parameter calculations with accuracy down to the pixel level. Hard copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.

Honour, Joseph

1991-04-01

287

Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic  

NASA Astrophysics Data System (ADS)

Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases. We must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

2008-11-01

288

Real-time user independent hand gesture recognition from time-of-flight camera video using static and dynamic models  

Microsoft Academic Search

The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to controlling windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely

Javier Molina; Marcos Escudero-Viñolo; Alessandro Signoriello; Montse Pardàs; Christian Ferrán; Jesús Bescós; Ferran Marqués; José M. Martínez

289

Upgrading a CCD camera for astronomical use  

E-print Network

[Extraction residue: the snippet contains a table of double-star measurements (Table 2, data from Menzel) followed by fragmentary observing notes.] ... At the small separation of 2.4 arc seconds, the double in Lyrae was an ovular blob. See Figure 16. When seen on the monitor or film, the double in Coronae Borealis had an ovular shape but its individual components were not distinguishable. One half of the blob was fainter than ...

Lamecker, James Frank

2012-06-07

290

Technical assessment of low light color camera technology  

NASA Astrophysics Data System (ADS)

In nighttime overcast conditions with a new moon (near-total darkness), typical light levels may only reach 10^-2 to 10^-4 lux. As such, standard CCD/CMOS video cameras have insufficient sensitivity to capture useful images. Third generation night vision cameras (Gen III NV) are the state-of-the-art in terms of imaging clarity and resolution at this light level, but rely on green or green/yellow phosphors to produce monochromatic images while true color information is lost. More recently, low-light color video cameras have become commercially available which are purportedly able to produce true-color images at rates of 15-30 frames per second (fps) in near-total darkness without loss in clarity. This study determined whether the sensitivities of two low-light color video cameras, Toshiba's IK-1000 EMCCD and Opto-Knowledge Systems' (OKSI) True Color Night Vision (TCNV) cameras, are comparable to current Gen II/III NV technology. NRL, in a joint effort with NSWC Carderock Division, quantified the effectiveness of these cameras in terms of objective laboratory characterization and subjective field testing. Laboratory tests included signal-to-noise (S/N), spectral response, and imaging quality at 2, 15, and 30 frames per second (fps). Field tests were performed at 8, 15, and 30 fps to determine clarity and color composition of camouflaged human subjects and stationary objects from a set number of standoff distances under near-total darkness (measured at 10^-8 to 10^-10 W/cm^2·sr @ 650 nm). Low-light camera video was qualitatively compared to imagery taken by Stanford Photonics Mega-10 Gen III Night Vision Scientific and Tactical Imagers under identical conditions.

Ramsey, Scott A.; Peak, Joseph; Setlik, Brian

2010-04-01

291

A video precipitation sensor for imaging and velocimetry of hydrometeors  

NASA Astrophysics Data System (ADS)

A new method to determine the shape and fall velocity of hydrometeors by using a single CCD camera is proposed in this paper, and a prototype Video Precipitation Sensor (VPS) is developed. The instrument consists of an optical unit (collimated light source with multi-mode fiber cluster), an imaging unit (planar array CCD sensor), an acquisition and control unit, and a data processing unit; the cylindrical space between the optical unit and imaging unit is the sampling volume (300 mm × 40 mm × 30 mm). As precipitation particles fall through the sampling volume, the CCD camera is exposed twice within a single frame, by which a double-exposure image of the particles is obtained. The size and shape can be obtained from the particle images; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; and the drop size distribution and velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. The innovation of the VPS is that the shape, size, and velocity of precipitation particles can be measured by only one planar array CCD sensor, which addresses the disadvantages of linear-scan CCD disdrometers and impact disdrometers. Field measurements of rainfall demonstrate the VPS's capability to measure micro-physical properties of single particles and integral parameters of precipitation.
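
The velocity estimate described above reduces to displacement divided by the known inter-exposure time, after converting the pixel displacement into object-space units. A minimal sketch follows; the pixel pitch, optical magnification, and exposure interval are illustrative placeholders, not the VPS's actual design values.

    def fall_velocity_m_per_s(displacement_px, pixel_pitch_um, magnification, interval_ms):
        # Convert the measured pixel displacement between the two exposures into
        # object-space metres, then divide by the inter-exposure time.
        displacement_m = displacement_px * (pixel_pitch_um * 1e-6) / magnification
        return displacement_m / (interval_ms * 1e-3)

    # Example with placeholder values: a 120-pixel shift, 10 um pixels,
    # 0.2x magnification, 2 ms between exposures -> 3.0 m/s.
    print(fall_velocity_m_per_s(120, 10, 0.2, 2))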

Liu, X. C.; Gao, T. C.; Liu, L.

2013-11-01

292

Mapping herbage biomass and nitrogen status in an Italian ryegrass (Lolium multiflorum L.) field using a digital video camera with balloon system  

NASA Astrophysics Data System (ADS)

Improving current precision nutrient management requires practical tools to aid the collection of site specific data. Recent technological developments in commercial digital video cameras and the miniaturization of systems on board low-altitude platforms offer cost effective, real time applications for efficient nutrient management. We tested the potential use of commercial digital video camera imagery acquired by a balloon system for mapping herbage biomass (BM), nitrogen (N) concentration, and herbage mass of N (Nmass) in an Italian ryegrass (Lolium multiflorum L.) meadow. The field measurements were made at the Setouchi Field Science Center, Hiroshima University, Japan on June 5 and 6, 2009. The field consists of two 1.0 ha Italian ryegrass meadows, which are located in an east-facing slope area (230 to 240 m above sea level). Plant samples were obtained at 20 sites in the field. A captive balloon was used for obtaining digital video data from a height of approximately 50 m (approximately 15 cm spatial resolution). We tested several statistical methods, including simple and multivariate regressions, using forage parameters (BM, N, and Nmass) and three visible color bands or color indices based on ratio vegetation index and normalized difference vegetation index. Of the various investigations, a multiple linear regression (MLR) model showed the best cross validated coefficients of determination (R2) and minimum root-mean-squared error (RMSECV) values between observed and predicted herbage BM (R2 = 0.56, RMSECV = 51.54), Nmass (R2 = 0.65, RMSECV = 0.93), and N concentration (R2 = 0.33, RMSECV = 0.24). Applying these MLR models on mosaic images, the spatial distributions of the herbage BM and N status within the Italian ryegrass field were successfully displayed at a high resolution. Such fine-scale maps showed higher values of BM and N status at the bottom area of the slope, with lower values at the top of the slope.
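
A multiple linear regression with leave-one-out cross-validation of the kind described above can be sketched in a few lines; the snippet below uses scikit-learn with placeholder arrays standing in for the 20 field sites and the band/index predictors (none of the values come from the study), and reports the cross-validated R2 and RMSECV analogous to the statistics quoted in the abstract.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.random((20, 5))                             # placeholder: per-site colour bands / indices
    y = 100 + 200 * X[:, 0] + rng.normal(0, 20, 20)     # placeholder: herbage biomass

    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    print("cross-validated R2:", round(r2_score(y, pred), 2))
    print("RMSECV            :", round(float(np.sqrt(mean_squared_error(y, pred))), 2))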

Kawamura, Kensuke; Sakuno, Yuji; Tanaka, Yoshikazu; Lee, Hyo-Jin; Lim, Jihyun; Kurokawa, Yuzo; Watanabe, Nariyasu

2011-01-01

293

Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.  

PubMed

The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants age 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable; the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited. PMID:25230893

Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

2015-01-01

294

AN ACTIVE CAMERA SYSTEM FOR ACQUIRING MULTI-VIEW VIDEO Robert T. Collins, Omead Amidi, and Takeo Kanade  

E-print Network

recognition, 3D reconstruction, entertainment and sports, it is often desirable to capture a set ... effects in the movie The Matrix, where playing back frames from a single time step, across all cameras ... synchronized views of a moving person from multiple viewpoints ... independently with respect to a 3D scene

Collins, Robert

295

EVENT-DRIVEN VIDEO CODING FOR OUTDOOR WIRELESS MONITORING CAMERAS Zichong Chen, Guillermo Barrenetxea and Martin Vetterli  

E-print Network

such as H.264 are inefficient as they ignore the "meaning" of video content and thus waste many bits ... of interest periodically (1-200 images/hour in our case), and transfer the image sequences back to a base station (BS). A user at the BS can query the recorded images to determine if there is any particular

Vetterli, Martin

296

Evaluating intensified camera systems  

SciTech Connect

This paper describes image evaluation techniques used to standardize camera system characterizations. The authors' group is involved with building and fielding several types of camera systems. Camera types include gated intensified cameras, multi-frame cameras, and streak cameras. Applications range from X-ray radiography to visible and infrared imaging. Key areas of performance include sensitivity, noise, and resolution. This team has developed an analysis tool, in the form of image processing software, to aid an experimenter in measuring a set of performance metrics for their camera system. These performance parameters are used to identify a camera system's capabilities and limitations while establishing a means for camera system comparisons. The analysis tool is used to evaluate digital images normally recorded with CCD cameras. Electro-optical components provide fast shuttering and/or optical gain to camera systems. Camera systems incorporate a variety of electro-optical components such as microchannel plate (MCP) or proximity focused diode (PFD) image intensifiers; electro-static image tubes; or electron-bombarded (EB) CCDs. It is often valuable to evaluate the performance of an intensified camera in order to determine whether a particular system meets experimental requirements.

S. A. Baker

2000-06-30

297

Time and Frequency Domain CCD-Based Thermoreflectance Techniques for High-Resolution Transient Thermal Imaging  

E-print Network

Time and Frequency Domain CCD-Based Thermoreflectance Techniques for High-Resolution Transient Thermal Imaging ... ultra-fast, high-resolution, CCD ... 1. Introduction. Thermoreflectance imaging offers an interesting tool for ... a microscope, while a charge coupled device (CCD) camera captures the reflected light. From the difference ...

298

Baltic Astronomy, vol.8, XXX--XXX, 1999. WISE OBSERVATORY SYSTEM OF FAST CCD PHOTOMETRY

E-print Network

Baltic Astronomy, vol.8, XXX--XXX, 1999. WISE OBSERVATORY SYSTEM OF FAST CCD PHOTOMETRY ... with the Wise Observatory CCD camera. The method is based on successively collecting frames, each one a mere small fraction of the entire CCD array. If necessary, the observer is able to place the object star

Ofek, Eran

299

CCD-aided steering system for Cherenkov telescopes  

Microsoft Academic Search

Cherenkov Telescopes (dubbed CT) usually suffer from small steering errors due to tiny misalignments of the two CT axes. Pointing errors can also appear due to slight bending, over the years, of the masts supporting the CT camera. In this note, we present the results of a new method to correct these errors by use of a CCD camera. The

I. Sevilla; J. A. Barrio; V. Fonseca

2001-01-01

300

Lights, Camera...Citizen Science: Assessing the Effectiveness of Smartphone-Based Video Training in Invasive Plant Identification  

PubMed Central

The rapid growth and increasing popularity of smartphone technology is putting sophisticated data-collection tools in the hands of more and more citizens. This has exciting implications for the expanding field of citizen science. With smartphone-based applications (apps), it is now increasingly practical to remotely acquire high quality citizen-submitted data at a fraction of the cost of a traditional study. Yet, one impediment to citizen science projects is the question of how to train participants. The traditional “in-person” training model, while effective, can be cost prohibitive as the spatial scale of a project increases. To explore possible solutions, we analyze three training models: 1) in-person, 2) app-based video, and 3) app-based text/images in the context of invasive plant identification in Massachusetts. Encouragingly, we find that participants who received video training were as successful at invasive plant identification as those trained in-person, while those receiving just text/images were less successful. This finding has implications for a variety of citizen science projects that need alternative methods to effectively train participants when in-person training is impractical. PMID:25372597

Starr, Jared; Schweik, Charles M.; Bush, Nathan; Fletcher, Lena; Finn, Jack; Fish, Jennifer; Bargeron, Charles T.

2014-01-01

301

Video Relay Services  

MedlinePLUS

... video camera device and a broadband (high speed) Internet connection, contacts a VRS CA, who is a ... CA can be reached through the VRS provider’s Internet site, or through video equipment attached to a ...

302

A CCD-BASED WAVEFORM GENERATOR FOR DRIVING CCD CIRCUITS  

E-print Network

A CCD-BASED WAVEFORM GENERATOR FOR DRIVING CCD CIRCUITS. S.E. Kemeny and E.R. Fossum, Center for ... charge-coupled device (CCD) circuits is described. The method utilizes four parallel-in, serial-out CCD ... in CCD circuits ... generating complex waveforms on-chip with minimum power and maximum flexibility.

Fossum, Eric R.

303

The Video Book.  

ERIC Educational Resources Information Center

This book provides a comprehensive step-by-step learning guide to video production. It begins with camera equipment, both still and video. It then describes how to reassemble the video and build a final product out of "video blocks," and discusses multiple-source configurations, which are required for professional level productions of live shows.…

Clendenin, Bruce

304

Advisory Surveillance Cameras Page 1 of 2  

E-print Network

ADVISORY -- USE OF CAMERAS/VIDEO SURVEILLANCE, May 2008. [Extraction residue; recoverable fragments of the advisory's checklist:] ... how will it be produced and how will it be secured, who will have access to the tape? 7. At what will the camera ... to ensure the cameras' presence doesn't create a false sense of security.

Liebling, Michael

305

Benefits of image deconvolution in CCD imagery  

NASA Astrophysics Data System (ADS)

We show how wavelet-based image deconvolution can provide wide-field CCD telescopes with an increase in limiting magnitude of ΔR ≈ 0.6 and significant deblending improvement. Astrometric accuracy is not distorted; therefore, the feasibility of the technique for astrometric projects is validated. We apply the deconvolution process to a set of images from the recently refurbished Baker-Nunn camera at Rothney Astrophysical Observatory.
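
As background for readers who want to experiment with PSF deconvolution of CCD frames, the sketch below uses the generic Richardson-Lucy algorithm from scikit-image on a synthetic star field; it is only a stand-in to illustrate the idea, not the wavelet-based method used in the paper, and every value in it (PSF width, image size, noise level) is an arbitrary placeholder.

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import restoration

    def gaussian_psf(size=21, fwhm_px=3.0):
        # Simple Gaussian PSF standing in for the measured seeing profile.
        y, x = np.mgrid[:size, :size] - size // 2
        sigma = fwhm_px / 2.355
        psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    rng = np.random.default_rng(0)
    truth = np.zeros((128, 128))
    truth[tuple(rng.integers(10, 118, size=(2, 25)))] = rng.uniform(50.0, 500.0, 25)

    psf = gaussian_psf()
    blurred = fftconvolve(truth, psf, mode="same") + rng.normal(0.0, 0.5, truth.shape)
    blurred = np.clip(blurred, 0.0, None)

    # 30 Richardson-Lucy iterations; clip=False keeps the natural flux scale.
    deconvolved = restoration.richardson_lucy(blurred, psf, 30, clip=False)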

Fors, O.; Merino, M.; Otazu, X.; Cardinal, R.; Núñez, J.; Hildebrand, A.

2006-01-01

306

CCD Detection System User Manual  

E-print Network

SYNAPSE CCD Detection System User Manual, Part Number 81100 - Revision 2. [Extraction residue: copyright page and table-of-contents entries, including "CCD Focus and Alignment on the Spectrograph".]

Rubloff, Gary W.

307

Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel  

NASA Technical Reports Server (NTRS)

A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
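
Template matching of the kind used for the vortex-core tracing step can be illustrated with OpenCV in a few lines; the sketch below locates the best match for a small template inside a search window of one frame, leaving out the adaptive template update that the paper describes. Function and variable names here are placeholders, not the system's actual code.

    import cv2

    def locate_feature(frame_gray, template, search_roi):
        # search_roi = (x0, y0, width, height) restricts the search to a window
        # around the previous match, as a tracker typically would.
        x0, y0, w, h = search_roi
        window = frame_gray[y0:y0 + h, x0:x0 + w]
        scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_xy = cv2.minMaxLoc(scores)
        return (x0 + best_xy[0], y0 + best_xy[1]), best_score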

Meyn, Larry A.; Bennett, Mark S.

1993-01-01

308

Surveillance camera scheduling: a virtual vision approach  

Microsoft Academic Search

ABSTRACT We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan\\/tilt\\/zoom (PTZ) active cameras, which automatically captures and labels high-resolution videos of pedestrians as they move,through a designated area. A wide-FOV stationary camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of a single pedestrian at a time. We propose a multi-camera

Faisal Z. Qureshi; Demetri Terzopoulos

2006-01-01

309

Surveillance camera scheduling: a virtual vision approach  

Microsoft Academic Search

We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures and labels high-resolution videos of pedestrians as they move through a designated area. A wide-FOV stationary camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of a single pedestrian at a time. We propose a multi-camera

Faisal Z. Qureshi; Demetri Terzopoulos

2005-01-01

310

Video Object Tracking and Analysis for Computer Assisted Surgery  

E-print Network

Pedicle screw insertion has revolutionized the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, it carries a risk of prolonged exposure to X-ray radiation. Systems that have lower radiation risk are generally quite expensive. The position and orientation of the drill is clinically very important in pedicle screw fixation. In this paper, the position and orientation of the marker on the drill are determined using pattern recognition based methods on geometric features obtained from the input video sequence taken by a CCD camera. A search is then performed on the video frames after preprocessing to obtain the exact position and orientation of the drill. Animated graphics showing the instantaneous position and orientation of the drill are then overlaid on the processed video for real-time drill control and navigation.

Pallath, Nobert Thomas

2012-01-01

311

A CMOS interface IC for CCD imagers  

Microsoft Academic Search

A 2-μm CMOS IC interfaces directly to the output of a CCD, level shifts the signal, amplifies it by a 4-bit programmable gain of up to 20 dB, and corrects the offset per pixel with a 3-bit word. A non-reset video output is obtained with an internal time-interleaved architecture. The total harmonic distortion (THD) of -50 dB is

K. Y. Kim; A. A. Abidi

1993-01-01

312

Vision Sensors and Cameras  

NASA Astrophysics Data System (ADS)

Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

Hoefflinger, Bernd

313

Megapixel imaging camera for expanded H{sup {minus}} beam measurements  

SciTech Connect

A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm², with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10^5 pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10^5 electron charges.
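
The double-correlated (correlated double) sampling step mentioned above amounts to subtracting, for every pixel, the sampled video level from the sampled reset/reference level, which cancels reset (kTC) switching noise and slow offsets. A toy sketch follows, with made-up array values standing in for the actual analog processor's samples.

    import numpy as np

    def correlated_double_sample(reset_levels, video_levels):
        # The signal charge pulls the output node below its reset reference, so the
        # per-pixel signal is (reset sample - video sample); common-mode reset noise
        # and offsets cancel in the subtraction.
        return np.asarray(reset_levels, dtype=float) - np.asarray(video_levels, dtype=float)

    # Placeholder example: one reset and one video sample for a 4-pixel line.
    reset = np.array([1002.0, 998.0, 1005.0, 1001.0])
    video = np.array([ 852.0, 748.0,  905.0,  601.0])
    print(correlated_double_sample(reset, video))   # net signal per pixel, in ADU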

Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H. [Los Alamos National Lab., NM (United States); McCurnin, T.W.; Sanchez, P.G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States). Los Alamos Operations

1994-02-01

314

The Digital Interactive Video  

E-print Network

The Digital Interactive Video Exploration and Reflection (Diver) system lets users create virtual pathways through existing video content using a virtual camera and an annotation window for commentary, repurposing, and discussion. With the inexorable growth of low-cost consumer video electronics

Paris-Sud XI, Université de

315

Video Parsing and Browsing Using Compressed Data  

Microsoft Academic Search

Parsing video content is an important first step in the video indexing process. This paper presents algorithms to automate the video parsing task, including partitioning a source video into clips and classifying those clips according to camera operations, using compressed video data. We have developed two algorithms and a hybrid approach to partitioning video data compressed according to the JPEG

Hongjiang Zhang; Chien Yong Low; Stephen W. Smoliar

1995-01-01

316

VLSI-distributed architectures for smart cameras  

Microsoft Academic Search

Smart cameras use video/image processing algorithms to capture images as objects, not as pixels. This paper describes architectures for smart cameras that take advantage of VLSI to improve the capabilities and performance of smart camera systems. Advances in VLSI technology aid in the development of smart cameras in two ways. First, VLSI allows us to integrate large amounts of processing

Wayne H. Wolf

2001-01-01

317

Smart Camera Networks in Virtual Reality  

Microsoft Academic Search

We present smart camera network research in the context of a unique new synthesis of advanced computer graphics and vision simulation technologies. We design and experiment with simulated camera networks within visually and behaviorally realistic virtual environments. Specifically, we demonstrate a smart camera network comprising static and active simulated video surveillance cameras that provides perceptive coverage of a large virtual

Faisal Qureshi; Demetri Terzopoulos

2007-01-01

318

Virtual Vision and Smart Camera Networks  

Microsoft Academic Search

This paper presents camera network research in the context of a unique synthesis of advanced computer graphics and vision simulation technologies. In particular, we propose the design of and experimentation with simulated camera networks within visually and behaviorally realistic virtual environments. Specifically, we demonstrate a smart camera network comprising static and active simulated video surveillance cameras that provides perceptive

Faisal Qureshi; Demetri Terzopoulos

319

Shutter-free flatfielding for CCD detectors  

NASA Astrophysics Data System (ADS)

We introduce a computational method to deconvolve the intrinsic CCD flatfield and the camera's 2D shutter function from a series of twilight flatfields of different exposure times. This makes it possible to measure the shutter timing and to assess the significance of the shutter's shading effect for a given application. Using the deconvolved 2D functions it is possible to precisely flatfield object exposures of any given exposure time, without interference from the shutter, which may otherwise introduce systematic errors as large as several percent. This is especially important for accurate stellar CCD photometry (e.g. of calibration standard stars), as well as for direct imaging work with the new generation of large-format CCDs and future wide-field imaging telescopes.
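
One simple way to see the idea (not necessarily the paper's algorithm) is to model each pixel's twilight-flat counts as F(x,y) · I · (t + dt(x,y)), where t is the commanded exposure time and dt is the local shutter-shading offset; a per-pixel straight-line fit of counts against t then separates the flatfield (the slope, up to normalization) from the shutter map (intercept divided by slope). The sketch below assumes, purely for illustration, a constant illumination level I across the series; a real twilight sequence would also need the varying sky level handled.

    import numpy as np

    def separate_flat_and_shutter(frames, exposure_times):
        # frames: array of shape (n_frames, ny, nx); exposure_times: length n_frames.
        # Least-squares line per pixel: counts ~ slope * t + intercept.
        t = np.asarray(exposure_times, dtype=float)
        cube = np.asarray(frames, dtype=float)
        t_mean = t.mean()
        dt = t - t_mean
        slope = (dt[:, None, None] * (cube - cube.mean(axis=0))).sum(axis=0) / (dt ** 2).sum()
        intercept = cube.mean(axis=0) - slope * t_mean
        flat = slope / np.median(slope)      # intrinsic flatfield, normalized
        shading = intercept / slope          # shutter-shading offset map, in seconds
        return flat, shading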

Surma, P.

1993-11-01

320

Observational Astronomy CCD Imaging of Planetary Nebulae or H II Regions  

E-print Network

, and the nebula is located almost exactly between two naked-eye stars. The nebula itself looks like a faint "smoke ... out to observe. You will observe a second planetary nebula in addition to the Ring Nebula. You ... the field of view covered by the CCD camera. You will find the field of view of the CCD in the hand

Harrington, J. Patrick

321

Observational Astronomy CCD Imaging of Planetary Nebulae or H II Regions  

E-print Network

, and the nebula is located almost exactly between two naked-eye stars. The nebula itself looks like a faint "smoke ... out to observe. You will observe a second planetary nebula in addition to the Ring Nebula. You ... the field of view covered by the CCD camera. You will find the field of view of the CCD in the hand

Veilleux, Sylvain

322

A CCD processing system design implemented by an embedded DSP in the OPM  

NASA Astrophysics Data System (ADS)

A CCD signal acquisition system featuring a flexible and compact charge-coupled device (CCD) driving scheme based on a DSP device is designed for a newly developed optical performance monitor (OPM) module. The design has prominent advantages, such as lower power consumption, fewer and cheaper attached components, and flexibility. The DSP-based system aims to implement the high-accuracy conversion of the successive video output signals of a CCD for the DSP processor with the fewest additional circuit components.

Peng, Dingmin; Yu, Jiekui; Cheng, Xiaohu; Hu, Qianggao

2004-05-01

323

CCD high-speed videography system with new concepts and techniques  

NASA Astrophysics Data System (ADS)

A novel CCD high-speed videography system based on new concepts and techniques has recently been developed at Zhejiang University. The system can send a series of short flash pulses to the moving object. All of the parameters, such as flash numbers, flash durations, flash intervals, flash intensities and flash colors, can be controlled as needed by the computer. A series of images of the moving object, frozen by the flash pulses and carrying information about it, is recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals, or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied to wide fields of energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.

Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

1997-05-01

324

Solid State Television Camera (CID)  

NASA Technical Reports Server (NTRS)

The design, development, and testing of a charge injection device (CID) camera using a 244 x 248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.

Steele, D. W.; Green, W. T.

1976-01-01

325

Electronic still camera  

NASA Astrophysics Data System (ADS)

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

Holland, S. Douglas

1992-09-01

326

Electronic Still Camera  

NASA Technical Reports Server (NTRS)

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

Holland, S. Douglas (inventor)

1992-01-01

327

Camera Obscura  

NSDL National Science Digital Library

Before photography was invented there was the camera obscura, useful for studying the sun, as an aid to artists, and for general entertainment. What is a camera obscura and how does it work? Camera = Latin for room; Obscura = Latin for dark. But what is a Camera Obscura? The Magic Mirror of Life: What is a camera obscura? [Page images: "A French drawing camera with supplies"; "Drawing Camera Obscuras with Lens at the top".] Read the first three paragraphs of this article. Under the portion Early Observations and Use in Astronomy you will find the answers to the ...

Engelman, Mr.

2008-10-28

328

A CCD search for geosynchronous debris  

NASA Technical Reports Server (NTRS)

Using the Spacewatch Camera, a search was conducted for objects in geosynchronous earth orbit. The system is equipped with a CCD camera cooled with dry ice; the image scale is 1.344 arcsec/pixel. The telescope drive was off so that during integrations the stars were trailed while geostationary objects appeared as round images. The technique should detect geostationary objects to a limiting apparent visual magnitude of 19. A sky area of 8.8 square degrees was searched for geostationary objects while geosynchronous debris passing through was 16.4 square degrees. Ten objects were found of which seven are probably geostationary satellites having apparent visual magnitudes brighter than 13.1. Three objects having magnitudes equal to or fainter than 13.7 showed motion in the north-south direction. The absence of fainter stationary objects suggests that a gap in debris size exists between satellites and particles having diameters in the millimeter range.

Gehrels, Tom; Vilas, Faith

1986-01-01

329

Cameras in the courtroom  

Microsoft Academic Search

The present experiment examined some of the key psychological issues associated with electronic media coverage (EMC) of courtroom trials. Undergraduate student subjects served as either witnesses or jurors in one of three types of trials: EMC, in which a video camera was present; conventional media coverage (CMC), in which a journalist was present; or a no-media control, in which no media representative or equipment

Eugene Borgida; Kenneth G. DeBono; Lee A. Buckmant

1990-01-01

330

THE APPLYING APPRAISEMENT OF CHINA-BRAZIL EARTH RESOURCE SATELLITE 1 CCD DATA FOR RECONNAISSANCE OF METALLIC ORE IN ZHONGDIAN AREA, YUNNAN, CHINA  

Microsoft Academic Search

CBERS-1 is a new satellite developed and built jointly by China and Brazil. It has three sensors, one of which is a CCD multispectral camera. The authors have appraised the quality of one scene of CCD data and its effectiveness when applied to reconnaissance of metallic ore. The result shows that CCD data have great potential in minerals exploration.

Xiaohong WANG; Ruijiang ZHANG; Hong HU

331

Formation mechanism and a universal period formula for the CCD moiré.  

PubMed

The moiré technique is often used to measure surface morphology and deformation fields. CCD moiré is a special kind of moiré that is produced when a digital camera is used to capture periodic grid structures, such as gratings. Unlike ordinary moiré setups with two gratings, however, CCD moiré requires only one grating. But the formation mechanism is not fully understood, and a high-quality CCD moiré pattern is hard to achieve. In this paper, the formation mechanism of a CCD moiré pattern, based on the imaging principle of a digital camera, is analyzed and a way of simulating the pattern is proposed. A universal period formula is also proposed, and the validity of the simulation and formula is verified by experiments. The proposed model is shown to be an efficient guide for obtaining high-quality CCD moiré patterns. PMID:25321292
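
For orientation only, the classical beat-period relation for two superposed periodic structures gives the familiar first-order estimate of the fringe spacing; here p denotes the grating pitch as imaged onto the sensor plane and s the CCD pixel pitch. This is standard sampling/beat reasoning, not the paper's universal formula, which additionally accounts for the camera imaging geometry.

    % Classical beat-period estimate (background only; not the paper's formula).
    % p : grating pitch imaged onto the sensor plane
    % s : CCD pixel pitch
    % T : resulting moire fringe period on the sensor
    T = \frac{p\, s}{\lvert p - s \rvert}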

Junfei, Li; Youqi, Zhang; Jianglong, Wang; Yang, Xiang; Zhipei, Wu; Qinwei, Ma; Shaopeng, Ma

2014-08-25

332

Using a digital camera to study motion  

Microsoft Academic Search

A digital camera can easily be used to make a video record of a range of motions and interactions of objects - shm, free-fall and collisions, both elastic and inelastic. The video record allows measurements of displacement and time, and hence calculation of velocities, and practice with the standard formulas for motions and collisions. The camera extends the range of

Andrew J. McNeil; Steven Daniel

333

Panorama video server system  

NASA Astrophysics Data System (ADS)

A panorama video server system has been developed. This system produces a continuous panoramic view of the entire surrounding area in real time and allows multiple users to select and view visual fields independently. A significant feature of the system is that each user can select the visual field he or she wants to see at the same time. This new system is composed of video cameras, video signal conversion units, video busses, and visual field selection units. It can be equipped with up to 24 video cameras. The most appropriate camera arrangement can be decided by considering both the objects to be captured and the viewing angle of the cameras. The visual field selection unit picks up the required image data from the video busses, on which all of the video data is provided. The number of users who can access the system simultaneously depends only on the number of visual field selection units. To smoothly connect two images captured by different cameras, a luminance-compensating function and a geometry-compensating function are included. This system has many interesting applications, such as in the distribution of beautiful scenery, sports, and monitoring.

Okimura, Takayuki; Kimura, Kazuo; Nakazawa, Kenji; Nakajima, Hideki

1998-04-01

334

Automated video tracking of contact lens motion  

NASA Astrophysics Data System (ADS)

Successful extended contact lens wear requires lens motion that provides adequate tear mixing to remove ocular debris. Proper lens motion of rigid contact lenses is also important for proper fitting. Moreover, a factor in final lens comfort and optical quality for contact lens fitting is lens centration. Calculation of the post lens volume of rigid contact lenses at different corneal surface locations can be used to produce a volume map. Such maps often reveal channels of minimum volume in which lenses may be expected to move, or local minima, where lenses may be expected to settle. To evaluate the utility of our volume map technology and evaluate other models of contact lens performance we have developed an automated video-based lens tracking system that provides detailed information about lens translation and rotation. The system uses standard video capture technology with a CCD camera attached to an ophthalmic slit lamp biomicroscope. The subject wears a specially marked contact lens for tracking purposes. Several seconds of video data are collected in real-time as the patient blinks naturally. The data are processed off-line, with the experimenter providing initial location estimates of the pupil and lens marks. The technique provides a fast and accurate method of quantifying lens motion. With better contact lens motion information we will gain a better understanding of the relationships between corneal shapes, lens design parameters, tear mixing, and patient comfort.
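
A minimal sketch of one way such offline, frame-by-frame tracking can be done, assuming grayscale frames and an experimenter-supplied starting position; the windowed intensity-weighted centroid is an illustrative stand-in for the authors' actual mark-detection algorithm.

```python
import numpy as np

def track_mark(frames, start_rc, win=15):
    """Follow a dark lens mark through 'frames' (iterable of 2-D arrays),
    starting from the (row, col) estimate supplied by the experimenter."""
    r, c = start_rc
    path = []
    for frame in frames:
        r0, r1 = max(r - win, 0), min(r + win, frame.shape[0])
        c0, c1 = max(c - win, 0), min(c + win, frame.shape[1])
        patch = frame[r0:r1, c0:c1].astype(float)
        weight = patch.max() - patch            # dark mark -> large weight
        total = weight.sum()
        if total > 0:
            rows, cols = np.mgrid[r0:r1, c0:c1]
            r = int(round((weight * rows).sum() / total))
            c = int(round((weight * cols).sum() / total))
        path.append((r, c))                     # mark position in this frame
    return path
```

Comparing the tracked mark positions against a separately tracked pupil centre then gives lens translation, and two marks give lens rotation.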

Carney, Thom; Dastmalchi, Shahram

2000-05-01

335

Video Visualization Gareth Daniel Min Chen  

E-print Network

Large amounts of video data are generated by the entertainment industry, security and traffic cameras, and video conferencing systems. In some countries, such as the United Kingdom, it is estimated that on average a citizen is caught on security and traffic cameras 300 times a day, and a major concern in the security industry is the ratio of surveillance cameras to security personnel. Imagine that security

Grant, P. W.

336

An advanced CCD emulator with 32MB image memory  

NASA Astrophysics Data System (ADS)

As part of the LSST sensor development program we have developed an advanced CCD emulator for testing new multichannel readout electronics. The emulator, based on an Altera Stratix II FPGA for timing and control, produces 4 channels of simulated video waveforms in response to an appropriate sequence of horizontal and vertical clocks. It features 40MHz, 16-bit DACs for reset and video generation, 32MB of image memory for storage of arbitrary grayscale bitmaps, and provision to simulate reset and clock feedthrough ("glitches") on the video channels. Clock inputs are qualified for proper sequences and levels before video output is generated. Binning, region of interest, and reverse clock sequences are correctly recognized and appropriate video output will be produced. Clock transitions are timestamped and can be played back to a control PC. A simplified user interface is provided via a daughter card having an ARM M3 Cortex microprocessor and miniature color LCD display and joystick. The user can select video modes from stored bitmap images, or flat, gradient, bar, chirp, or checkerboard test patterns; set clock thresholds and video output levels; and set row/column formats for image outputs. Multiple emulators can be operated in parallel to simulate complex CCDs or CCD arrays.
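
A minimal sketch of the kinds of grayscale test patterns listed in this record (flat, gradient, bar, chirp, checkerboard), generated as 16-bit arrays; the sizes, bar widths, and chirp law are assumptions, not the emulator's stored tables.

```python
import numpy as np

def test_pattern(kind, rows=512, cols=512, full_scale=65535):
    """Return a 16-bit grayscale test frame of the requested kind."""
    y, x = np.mgrid[0:rows, 0:cols]
    if kind == "flat":
        img = np.full((rows, cols), full_scale // 2)
    elif kind == "gradient":
        img = x / (cols - 1) * full_scale
    elif kind == "bars":
        img = ((x // 32) % 2) * full_scale
    elif kind == "checkerboard":
        img = (((x // 32) + (y // 32)) % 2) * full_scale
    elif kind == "chirp":                       # spatial frequency rises with x
        img = 0.5 * (1 + np.sin(2 * np.pi * 40 * (x / cols) ** 2)) * full_scale
    else:
        raise ValueError(kind)
    return np.asarray(img).astype(np.uint16)

frame = test_pattern("checkerboard")
print(frame.shape, frame.dtype, frame.max())    # (512, 512) uint16 65535
```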

O'Connor, P.; Fried, J.; Kotov, I.

2012-07-01

337

1 GHz CCD transient detector  

Microsoft Academic Search

A charge coupled-device (CCD) capable of acquiring 2 K samples of analog information at a 1-GHz rate, the fastest CCD on silicon, is reported. The prototype fast-in\\/slow-out (FISO) CCD is configured as single-input, single-output device. Subsequent to the front-end 1-GHz input sampler a novel three-stage charge-domain demultiplexer is implemented which distributes the samples from one input to 8 parallel CCDs

L. Sankaranarayanan; W. Hoekstra; L. G. M. Heldens; A. Kokshoorn

1991-01-01

338

Improved Optical Techniques for Studying Sonic and Supersonic Injection into MACH-3 Flow. Video Supplement E-10853-V  

NASA Technical Reports Server (NTRS)

This video supplements a report examining optical techniques for studying sonic and supersonic injection into Mach 3 flow. The study used an injection-seeded, frequency-doubled Nd:YAG pulsed laser to illuminate a transverse section of the injectant plume. Rayleigh-scattered light was passed through an iodine absorption cell to suppress stray laser light and was imaged onto a cooled CCD camera. The scattering was based on condensation of water vapor in the injectant flow. High-speed shadowgraph flow visualization images were obtained with several video camera systems. Roof and floor static pressure data are presented in several ways for the three configurations of injection designs, with and without helium and/or air injection into Mach 3 flow.

Buggele, A. E.; Seasholtz, R. G.

1997-01-01

339

New video pupillometer  

NASA Astrophysics Data System (ADS)

An instrument is developed to measure pupil diameter from both eyes in the dark. Each eye is monitored with a small IR video camera and pupil diameters are calculated from the video signal at a rate of 60 Hz. A processing circuit, designed around a video digitizer, a digital logic circuit, and a microcomputer, extracts pupil diameter from each video frame in real time. This circuit also highlights the detected outline of the pupil on a monitored video image of each eye. Diameters are exported to a host computer that displays, graphs, analyzes, and stores them as pupillograms. The host computer controls pupil measurements and can turn on a yellow light emitting diode mounted just above each video camera to excite the pupillary light reflex. We present examples of pupillograms to illustrate how this instrument is used to measure the pupillary light reflex and pupil motility in the dark.
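
A minimal per-frame sketch of extracting a pupil diameter from a dark-pupil IR image by thresholding, assuming the pupil is the darkest region in the frame; the threshold rule and the equivalent-circle diameter are illustrative, not the instrument's digitizer logic.

```python
import numpy as np

def pupil_diameter(frame, threshold=None):
    """frame: 2-D grayscale array. Returns the pupil diameter in pixels."""
    if threshold is None:
        # assume the pupil occupies the darkest ~20% of the intensity range
        threshold = frame.min() + 0.2 * (frame.max() - frame.min())
    area = np.count_nonzero(frame < threshold)   # pixels inside the pupil
    return 2.0 * np.sqrt(area / np.pi)           # diameter of the equivalent circle
```

Applying a calculation like this to every frame of the 60 Hz stream from each eye yields the pupillograms that the host computer graphs and stores.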

McLaren, Jay W.; Fjerstad, Wayne H.; Ness, Anders B.; Graham, Matthew D.; Brubaker, Richard F.

1995-03-01

340

Focalization in 3D Video Games  

Microsoft Academic Search

This paper investigates Bal's concept of focalization for 3D video games. First, the argument traces focalization in the historical development of camera strategies in 3D video games. It highlights the detachment of the camera into its own interactive operator. Then, it exemplifies visual focalization in video games using two case studies. In the following, it looks at possible problems

Michael Nitsche

341

Video monitoring system for car seat  

NASA Technical Reports Server (NTRS)

A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

2004-01-01

342

The Orthogonal Transfer CCD  

E-print Network

We have designed and built a new type of CCD that we call an orthogonal transfer CCD (OTCCD), which permits parallel clocking horizontally as well as vertically. The device has been used successfully to remove image motion caused by atmospheric turbulence at rates up to 100 Hz, and promises to be a better, cheaper way to carry out image motion correction for imaging than by using fast tip/tilt mirrors. We report on the device characteristics, and find that the large number of transfers needed to track image motion does not significantly degrade the image either because of charge transfer inefficiency or because of charge traps. For example, after 100 sec of tracking at 100 Hz approximately 3% of the charge would diffuse into a skirt around the point spread function. Four nights of data at the Michigan-Dartmouth-MIT (MDM) 2.4-m telescope also indicate that the atmosphere is surprisingly benign, in terms of both the speed and coherence angle of image motion. Image motion compensation improved image sharpness by about 0.5 arcsec in quadrature with no degradation over a field of at least 3 arcminutes.

John L. Tonry; Barry E. Burke; Paul L. Schechter

1997-05-21

343

CCD imaging sensors  

NASA Technical Reports Server (NTRS)

A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

1989-01-01

344

Guerrilla Video: A New Protocol for Producing Classroom Video  

ERIC Educational Resources Information Center

Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

Fadde, Peter; Rich, Peter

2010-01-01

345

Video Mosaicking for Inspection of Gas Pipelines  

NASA Technical Reports Server (NTRS)

A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to insure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
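
A minimal sketch of the kind of unwarping lookup table described here, under the stated assumption that the camera looks along the pipe axis; the equidistant fisheye model (rho = f*theta) and all numeric parameters are placeholders, since the real mapping is derived from calibration images.

```python
import numpy as np

def build_unwarp_lut(n_axial=200, n_azimuth=720, pipe_radius=150.0,
                     z_near=50.0, z_far=500.0, f_pix=300.0,
                     cx=512.0, cy=512.0):
    """Return (u, v) fisheye pixel coordinates for each (axial, azimuth)
    sample point on the cylindrical pipe wall."""
    z = np.linspace(z_near, z_far, n_axial)            # distance ahead along the pipe (mm)
    phi = np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False)
    zz, pp = np.meshgrid(z, phi, indexing="ij")
    theta = np.arctan2(pipe_radius, zz)                # angle off the optical axis
    rho = f_pix * theta                                # equidistant projection (assumed)
    u = cx + rho * np.cos(pp)                          # fisheye image coordinates
    v = cy + rho * np.sin(pp)
    return u, v

u_lut, v_lut = build_unwarp_lut()
print(u_lut.shape)    # (200, 720): one fisheye pixel location per unwarped wall sample
```

Sampling each captured fisheye frame at these precomputed coordinates produces the unwarped radial view without per-frame trigonometry, which is the point of compiling the table in advance.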

Magruder, Darby; Chien, Chiun-Hong

2005-01-01

346

The Dark Energy Camera  

NASA Astrophysics Data System (ADS)

The DES Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 megapixel CCD camera which is now mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. DECam comprises 74 fully depleted CCDs, each 250 microns thick: 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. It includes a filter set of u, g, r, i, z, and Y; a hexapod for focus and lateral alignment; and thermal management of the cage temperature. DECam will be used to perform the Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. An overview of the DECam design, construction and initial on-sky performance will be presented.

Flaugher, Brenna; DES Collaboration

2013-01-01

347

A HARDWARE PLATFORM FOR AN AUTOMATIC VIDEO TRACKING  

E-print Network

Video tracking can be used in many areas, especially in security-related areas such as airports. In this report, we proposed a hardware platform for a video tracking

Abidi, Mongi A.

348

Campus Security Camera Issued: April 2009  

E-print Network

As the campus moves to a centralized CCTV system, existing video monitoring equipment such as analog cameras and Digital Video Recorders (DVRs) may be utilized through the remaining usable life by connecting them to the Network Video Recorder (NVR) Server. The NVR Server is maintained by the Campus Computing, Communications

349

Optimization of precision localization microscopy using CMOS camera technology  

NASA Astrophysics Data System (ADS)

Light microscopy imaging is being transformed by the application of computational methods that permit the detection of spatial features below the optical diffraction limit. Successful localization microscopy (STORM, dSTORM, PALM, PhILM, etc.) relies on the precise position detection of fluorescence emitted by single molecules using highly sensitive cameras with rapid acquisition speeds. Electron multiplying CCD (EM-CCD) cameras are the current standard detector for these applications. Here, we challenge the notion that EM-CCD cameras are the best choice for precision localization microscopy and demonstrate, through simulated and experimental data, that certain CMOS detector technology achieves better localization precision of single molecule fluorophores. It is well-established that localization precision is limited by system noise. Our findings show that the two overlooked noise sources relevant for precision localization microscopy are the shot noise of the background light in the sample and the excess noise from electron multiplication in EM-CCD cameras. At low light conditions (< 200 photons/fluorophore) with no optical background, EM-CCD cameras are the preferred detector. However, in practical applications, optical background noise is significant, creating conditions where CMOS performs better than EM-CCD. Furthermore, the excess noise of EM-CCD is equivalent to reducing the information content of each photon detected which, in localization microscopy, reduces the precision of the localization. Thus, new CMOS technology with 100fps, <1.3 e- read noise and high QE is the best detector choice for super resolution precision localization microscopy.
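
A rough numerical sketch of the trade-off discussed here, using the Thompson-Larson-Webb localization-precision approximation with the EM multiplication modelled as an excess-noise factor F of about sqrt(2); the PSF width, pixel size, photon counts, and background level are assumed values, not the paper's measurements.

```python
import numpy as np

def loc_precision(N, bg_var_px, s_nm=130.0, a_nm=100.0, F=1.0):
    """Thompson-Larson-Webb localization precision (nm).
    F is the excess-noise factor (~sqrt(2) for EM-CCD, 1 for CMOS); it inflates
    both the signal shot noise and the per-pixel background variance."""
    term_shot = F**2 * (s_nm**2 + a_nm**2 / 12.0) / N
    term_bg = 8.0 * np.pi * s_nm**4 * (F**2 * bg_var_px) / (a_nm**2 * N**2)
    return np.sqrt(term_shot + term_bg)

# assumed scenario: 500 signal photons, 20 background photons per pixel
em = loc_precision(500, bg_var_px=20 + 0.1**2, F=np.sqrt(2.0))  # negligible read noise
cmos = loc_precision(500, bg_var_px=20 + 1.3**2, F=1.0)         # 1.3 e- read noise
print(f"EM-CCD ~{em:.1f} nm, CMOS ~{cmos:.1f} nm")
```

With no background the two detectors approach the same limit, but once the background term dominates, the excess-noise penalty of the EM-CCD outweighs its lower read noise, which matches the qualitative conclusion of the record above.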

Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

2012-02-01

350

STIS-01 CCD Functional  

NASA Astrophysics Data System (ADS)

This activity measures the baseline performance and commandability of the CCD subsystem. Only primary amplifier D is used. Bias, Dark, and Flat Field exposures are taken in order to measure read noise, dark current, CTE, and gain. Numerous bias frames are taken to permit construction of "superbias" frames in which the effects of read noise have been rendered negligible. Dark exposures are made outside the SAA. Full frame and binned observations are made, with binning factors of 1x1 and 2x2. Finally, tungsten lamp exposures are taken through narrow slits to confirm the slit positions in the current database. All exposures are internals. This is a reincarnation of SM3A proposal 8502 with some unnecessary tests removed from the program.

Valenti, Jeff

2001-07-01

351

CCD Hot Pixel Annealing  

NASA Astrophysics Data System (ADS)

Hot pixel annealing will continue to be performed once every 4 weeks. The CCD TECs will be turned off and heaters will be activated to bring the detector temperatures to about +20C. This state will be held for approximately 6 hours, after which the heaters are turned off, the TECs turned on, and the CCDs returned to normal operating condition. To assess the effectiveness of the annealing, a bias and four dark images will be taken before and after the annealing procedure for both WFC and HRC. The HRC darks are taken in parallel with the WFC darks. The charge transfer efficiency {CTE} of the ACS CCD detectors declines as damage due to on-orbit radiation exposure accumulates. This degradation has been closely monitored at regular intervals, because it is likely to determine the useful lifetime of the CCDs. We combine the annealing activity with the charge transfer efficiency monitoring and also merge it into the routine dark image collection. To this end, the CTE monitoring exposures have been moved into this proposal. All the data for this program is acquired using internal targets {lamps} only, so all of the exposures should be taken during Earth occultation time {but not during SAA passages}. This program emulates the ACS pre-flight ground calibration and post-launch SMOV testing {program 8948}, so that results from each epoch can be directly compared. Extended Pixel Edge Response {EPER} and First Pixel Response {FPR} data will be obtained over a range of signal levels for both the Wide Field Channel {WFC} and the High Resolution Channel {HRC}.

Cox, Colin

2005-07-01

352

CCD Hot Pixel Annealing  

NASA Astrophysics Data System (ADS)

Hot pixel annealing will continue to be performed once every 4 weeks. The CCD TECs will be turned off and heaters will be activated to bring the detector temperatures to about +20C. This state will be held for approximately 6 hours, after which the heaters are turned off, the TECs turned on, and the CCDs returned to normal operating condition. To assess the effectiveness of the annealing, a bias and four dark images will be taken before and after the annealing procedure for both WFC and HRC. The HRC darks are taken in parallel with the WFC darks. The charge transfer efficiency {CTE} of the ACS CCD detectors declines as damage due to on-orbit radiation exposure accumulates. This degradation has been closely monitored at regular intervals, because it is likely to determine the useful lifetime of the CCDs. We combine the annealing activity with the charge transfer efficiency monitoring and also merge it into the routine dark image collection. To this end, the CTE monitoring exposures have been moved into this proposal. All the data for this program is acquired using internal targets {lamps} only, so all of the exposures should be taken during Earth occultation time {but not during SAA passages}. This program emulates the ACS pre-flight ground calibration and post-launch SMOV testing {program 8948}, so that results from each epoch can be directly compared. Extended Pixel Edge Response {EPER} and First Pixel Response {FPR} data will be obtained over a range of signal levels for both the Wide Field Channel {WFC} and the High Resolution Channel {HRC}.

Cox, Colin

2006-07-01

353

CCD Hot Pixel Annealing  

NASA Astrophysics Data System (ADS)

Hot pixel annealing will continue to be performed once every 4 weeks. The CCD TECs will be turned off and heaters will be activated to bring the detector temperatures to about +20C. This state will be held for approximately 12 hours, after which the heaters are turned off, the TECs turned on, and the CCDs returned to normal operating condition. To assess the effectiveness of the annealing, a bias and four dark images will be taken before and after the annealing procedure for both WFC and HRC. The HRC darks are taken in parallel with the WFC darks. The charge transfer efficiency {CTE} of the ACS CCD detectors declines as damage due to on-orbit radiation exposure accumulates. This degradation has been closely monitored at regular intervals, because it is likely to determine the useful lifetime of the CCDs. We will now combine the annealing activity with the charge transfer efficiency monitoring and also merge it into the routine dark image collection. To this end, the CTE monitoring exposures have been moved into this proposal. All the data for this program is acquired using internal targets {lamps} only, so all of the exposures should be taken during Earth occultation time {but not during SAA passages}. This program emulates the ACS pre-flight ground calibration and post-launch SMOV testing {program 8948}, so that results from each epoch can be directly compared. Extended Pixel Edge Response {EPER} and First Pixel Response {FPR} data will be obtained over a range of signal levels for both the Wide Field Channel {WFC} and the High Resolution Channel {HRC}.

Cox, Colin

2004-07-01

354

CCD filters for communication systems  

Microsoft Academic Search

CCD filters implement the concept of digital, sample-data nonrecursive filters in an analog manner using the analog delay line properties of charge-coupled semiconductor devices, thereby bringing the main features of monolithic ICs into the realm of analog filtering. The paper examines the CCD filter technology, its promises and problems, from the viewpoint of the user and designer of communication systems.
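
The tapped-delay-line behaviour described here is exactly a sampled-data FIR (nonrecursive) filter, which the following digital sketch mimics; the tap weights and clock rate are arbitrary examples, not values from the paper.

```python
import numpy as np

taps = np.array([0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05])  # tap weights (assumed low-pass)

def ccd_fir(samples, taps):
    """y[n] = sum_k taps[k] * x[n - k]: each sample is delayed one clock per
    CCD stage and the weighted taps are summed."""
    return np.convolve(samples, taps, mode="full")[:len(samples)]

fs = 1e6                                     # clock rate (assumed 1 MHz)
t = np.arange(0, 1e-3, 1.0 / fs)
x = np.sin(2 * np.pi * 1e3 * t) + 0.5 * np.sin(2 * np.pi * 3e5 * t)
y = ccd_fir(x, taps)                         # the 300 kHz component is attenuated
```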

O. Mueller

1975-01-01

355

Toying with obsolescence : Pixelvision filmmakers and the Fisher Price PXL 2000 camera  

E-print Network

This thesis is a study of the Fisher Price PXL 2000 camera and the artists and amateurs who make films and videos with this technology. The Pixelvision camera records video onto an audiocassette; its image is low-resolution, ...

McCarty, Andrea Nina

2005-01-01

356

PAU camera: detectors characterization  

NASA Astrophysics Data System (ADS)

The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), of which 40 are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is being performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity vs. light stimulus, the full-well capacity and the cosmetic defects; and measurements of the read-out noise, the dark current, the stability vs. temperature and the light remanence.
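
A minimal sketch of the flat-pair / bias-pair form of the photon transfer method used for the gain and read-noise numbers mentioned above; the frame arrays are placeholders for real exposures taken over a common region of one detector.

```python
import numpy as np

def ptc_gain(flat1, flat2, bias1, bias2):
    """Gain in e-/ADU from two equal flats and two biases over the same region.
    Differencing each pair cancels fixed pattern; the variance of a difference
    is twice the per-frame random variance."""
    signal = flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()
    noise_var = np.var(flat1.astype(float) - flat2) - np.var(bias1.astype(float) - bias2)
    return signal / noise_var

def read_noise_e(bias1, bias2, gain):
    """Read noise in electrons from the bias-pair difference."""
    return gain * np.std(bias1.astype(float) - bias2) / np.sqrt(2.0)
```

Repeating the flat-pair measurement over a range of exposure levels traces out the full photon transfer curve, whose turnover also indicates the full-well capacity.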

Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

2012-07-01

357

Video sensor with range measurement capability  

NASA Technical Reports Server (NTRS)

A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
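
A minimal sketch of the triangulation behind such a rangefinder, assuming a single laser spot, a beam parallel to the camera's optical axis at a known baseline, and a pinhole camera model; the numbers are illustrative, not the patented design's parameters.

```python
import math

def range_from_spot(spot_px, center_px, focal_px, baseline_m):
    """Range to the surface hit by the laser spot, from its image column."""
    offset_px = spot_px - center_px
    if offset_px == 0:
        return float("inf")                  # spot on the optical axis: target at infinity
    alpha = math.atan(offset_px / focal_px)  # bearing of the spot from the optical axis
    return baseline_m / math.tan(alpha)      # parallel-beam triangulation

# example: 0.10 m baseline, 1200 px focal length, spot 24 px off-centre
print(f"{range_from_spot(624, 600, 1200.0, 0.10):.2f} m")   # ~5.00 m
```

With a diffractive element producing several spots, the same calculation applied to each spot yields multiple range samples across the target from a single video frame.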

Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Howard, Richard T. (Inventor); Broderick, David J. (Inventor)

2008-01-01

358

Efficient Subframe Video Alignment Using Short Descriptors  

E-print Network

[Extraction residue from the paper's first page: an author-affiliation footnote (Georgios D. Evangelidis, Perception Team) and a figure/table of alignment scenarios contrasting the same event vs. temporally different events, static vs. jointly or freely moving cameras, and coherent scene appearance and/or motion, including cases with no spatial overlap for the duration of the videos.]

Paris-Sud XI, Université de

359

On the Development of a Digital Video Motion Detection Test Set  

SciTech Connect

This paper describes the current effort to develop a standardized data set, or suite of digital video sequences, that can be used for test and evaluation of digital video motion detectors (VMDs) for exterior applications. We have drawn from an extensive video database of typical application scenarios to assemble a comprehensive data set. These data, some existing for many years on analog videotape, have been converted to a reproducible digital format and edited to generate test sequences several minutes long for many scenarios. Sequences include non-alarm video, intrusions and nuisance alarm sources, taken with a variety of imaging sensors including monochrome CCD cameras and infrared (thermal) imaging cameras, under a variety of daytime and nighttime conditions. The paper presents an analysis of the variables and estimates the complexity of a thorough data set. Some of this video test data has been digitized for CD-ROM storage and playback. We are considering developing a DVD disk for possible use in screening and testing VMDs prior to government testing and deployment. In addition, this digital video data may be used by VMD developers for further refinement or customization of their product to meet specific requirements. These application scenarios may also be used to define the testing parameters for future procurement qualification. A personal computer may be used to play back either the CD-ROM or the DVD video data. A consumer electronics-style DVD player may be used to replay the DVD disk. This paper also discusses various aspects of digital video storage including formats, resolution, CD-ROM and DVD storage capacity, editing and playback.

Pritchard, Daniel A.; Vigil, Jose T.

1999-06-07

360

Mobile Phones Digital Cameras  

E-print Network


Suslick, Kenneth S.

361

Image responses to x-ray radiation in ICCD camera  

NASA Astrophysics Data System (ADS)

When used in digital radiography, an ICCD camera will inevitably be irradiated by x-rays and the output image will degrade. In this research, we separated the ICCD camera into two optical-electric parts, the CCD camera and the MCP image intensifier, and irradiated them separately with a Co-60 gamma ray source and a pulsed x-ray source. By changing the timing between the radiation and the shutter of the CCD camera, and the state of the power supply of the MCP image intensifier, significant differences were observed in the output images. A further analysis revealed the influence of the CCD chip and readout circuit in the CCD camera, and of the photocathode, microchannel plate and fluorescent screen in the MCP image intensifier, on the image quality of an irradiated ICCD camera. The study demonstrated that, compared with the other parts, the irradiation response of the readout circuit is very slight and in most cases negligible. The interaction of x-rays with the CCD chip usually appears as bright spots or a rough background in output images, depending on the x-ray dose. As for the MCP image intensifier, the photocathode and the microchannel plate are the two main stages that degrade output images. When irradiated by x-rays, the microchannel plate tends to contribute a bright background to output images, while the background caused by the photocathode is brighter and more fluctuant. Image responses of the fluorescent screen in the MCP image intensifier and of a coupling fiber bundle are also evaluated in this presentation.

Ma, Jiming; Duan, Baojun; Song, Yan; Song, Guzhou; Han, Changcai; Zhou, Ming; Du, Jiye; Wang, Qunshu; Zhang, Jianqi

2013-08-01

362

The Use of Video Technology in Education  

Microsoft Academic Search

This article underlines the significance of video usage in educational institutes for imparting knowledge in various fields including media, history, arts and medical sciences, and gives detailed information about numerous video formats. It also describes the popularity of video cameras, specifically in cultural settings, for producing videos of wedding ceremonies and other social occasions. Moreover, the usefulness of videodisc educational

Nazir Ahmad

1990-01-01

363

Digest Generation Using Surveillance Video in Kindergarten  

Microsoft Academic Search

In this paper, we address the task of automatically generating digests of video data taken from kindergarten surveillance cameras. Our objective is to extract and merge video segments to record children's daily life. In order to deal with mass video data efficiently, we jointly utilize location information and visual features to segment raw material videos. Our proposed method involves two steps.

Yu Wang; Jien Kato

2010-01-01

364

The Sloan Digital Sky Survey Photometric Camera  

Microsoft Academic Search

We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays, a photometric array that uses 30 2048 x 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm^2 and an astrometric array that uses 24 400 x 2048 CCDs with the same pixel size, which will

J. E. Gunn; M. Carr; C. Rockosi; M. Sekiguchi; K. Berry; B. Elms; E. de Haas; Z. Ivezic; G. Knapp; R. Lupton; G. Pauls; R. Simcoe; R. Hirsch; D. Sanford; S. Wang; D. York; F. Harris; J. Annis; L. Bartozek; W. Boroski; J. Bakken; M. Haldeman; S. Kent; S. Holm; D. Holmgren; D. Petravick; A. Prosapio; R. Rechenmacher; M. Doi; M. Fukugita; K. Shimasaku; N. Okada; C. Hull; W. Siegmund; E. Mannery; M. Blouke; D. Heidtman; D. Schneider; R. Lucinio; J. Brinkman

1998-01-01

365

Timing of satellite observations for telescope with TV CCD camera  

NASA Astrophysics Data System (ADS)

The time reference systems used for linking satellite position and brightness measurements to the universal time scale UTC are described. These systems are used at the Odessa astronomical observatory and provide a stable error that does not exceed 0.1 ms in absolute value. The achieved timing accuracy allows us to study very short-term satellite brightness variations and the actual unevenness of the orbital motion.

Dragomiretskoy, V. V.; Koshkin, N. I.; Korobeinikova, E. A.; Melikyants, C. M.; Ryabov, A. V.; Strahova, S. L.; Terpan, S. S.; Shakun, L. S.

2013-12-01

366

Automatic fire detection system using CCD camera and Bayesian network  

Microsoft Academic Search

This paper proposes a new vision-based fire detection method for real-life application. Most previous vision-based methods using color information and temporal variations of pixels produce frequent false alarms due to the use of many heuristic features. Plus, there is usually a computation delay for accurate fire detection. Thus, to overcome these problems, candidate fire regions are first detected using a

Kwang-Ho Cheong; Byoung-Chul Ko; Jae-Yeal Nam

2008-01-01

367

Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities  

NASA Technical Reports Server (NTRS)

The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

Schwartz, D. A.

1981-01-01

368

CCD Spectroscopy of the optical transient PNV J03093063+2638031  

NASA Astrophysics Data System (ADS)

M.M.M. Santangelo and S. Gambogi obtained long-slit CCD spectroscopy of the recently discovered optical transient PNV J03093063+2638031. The observations were performed on October 30.8 UT 2014 by means of an SBIG SGS spectrometer + ST-7 XME CCD camera attached to the 0.60 m f/9.83 Cassegrain reflector of the OAC Astronomical Observatory of Capannori, Italy.

Santangelo, M. M. M.; Gambogi, S.

2014-11-01

369

Study on the relative radiometric correction of CBERS satellite CCD image  

Microsoft Academic Search

The main payload on CBERS-01/02 of the China-Brazil Earth Resources Satellite (CBERS) program is a push-broom CCD camera with moderate spatial and radiometric resolution. Because the data for calibration at the satellite assembly stage could not be collected in the lab, and also because the onboard calibrator after launch was in a different state from imaging, the calibration of the CCD image got a

Jianning Guo; Jin Yu; Yong Zeng; Jianyan Xu; Zhiqiang Pan; Minghui Hou

2005-01-01

370

Colorimetric phosphorescence measurements with a color camera for oxygen determination  

NASA Astrophysics Data System (ADS)

We developed a simple oxygen imaging platform with phosphorescent oxygen sensor films to demonstrate a quantitative oxygen determination method utilizing a color CCD camera. Phosphorescence quenching of a luminophore, the Pt(II) meso-tetrakis(pentafluorophenyl)porphyrin complex (PtTFPP) immobilized in a poly(dimethylsiloxane) (PDMS) matrix, is the principal detection mechanism. This sensor material was cast to form a film on the bottom surface of a transparent Petri dish. As levels of dissolved oxygen increased, phosphorescence of the complex decreased, allowing measurement of the oxygen levels that developed in the sensor film. A charge-coupled device (CCD) camera was used in conjunction with processing software to quantify oxygen levels colorimetrically. Microscopic images were collected using the CCD camera and stored as a set of red/green/blue (RGB) images. The phosphorescence excitation (390 nm peak) falls within the blue (B) pixels of the CCD chip, and these values were discarded, while the oxygen-responsive phosphorescence emission (645 nm peak), which almost exactly matches the response range of the red (R) pixels, was retained. Red pixel intensity analysis effectively extracts color intensity information, which can in turn be directly related to oxygen content. Color CCD cameras allow simultaneous acquisition of many types of chemical information by combining the merits of digital imaging with the attributes of spectroscopic measurement. Therefore, the use of color CCD cameras is considered an inexpensive alternative to time-resolved imaging for relatively short-term monitoring.
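
A minimal sketch of the red-pixel analysis described above, assuming the quenching follows the Stern-Volmer relation I0/I = 1 + Ksv*[O2]; the calibration constants I0 and Ksv are placeholders, not the authors' values.

```python
import numpy as np

def oxygen_from_rgb(rgb_image, I0, Ksv):
    """rgb_image: HxWx3 array; I0: mean red intensity at zero oxygen;
    Ksv: Stern-Volmer quenching constant (per oxygen concentration unit)."""
    red = rgb_image[..., 0].astype(float)    # keep only the red channel (645 nm emission)
    I = red.mean()                           # blue excitation pixels are ignored entirely
    return (I0 / I - 1.0) / Ksv              # oxygen from I0/I = 1 + Ksv*[O2]
```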

Bhagwat, Prajakta; Achanta, Gowthami Satya; Henthorn, David; Kim, Chang-Soo

2011-05-01

371

Pixel-to-Pixel Correspondence Adjustment in DMD Camera by Moiré Methodology  

Microsoft Academic Search

The DMD (Digital Micro-mirror Device) is a new device which has hundreds of thousands of micro-mirrors on one chip. We previously developed a DMD reflection-type CCD camera that we call the 'DMD camera'. In the optical system of the DMD camera, each mirror of the DMD corresponds to one pixel of the CCD. As a result, each DMD mirror works as

S. Ri; M. Fujigaki; T. Matui; Y. Morimoto

2006-01-01

372

Subaru Prime Focus Camera -- Suprime-Cam  

Microsoft Academic Search

We have built an 80-megapixel (10240 × 8192) mosaic CCD camera, called Suprime-Cam, for the wide-field prime focus of the 8.2 m Subaru telescope. Suprime-Cam covers a field of view of 34' × 27', a unique facility among the 8-10 m class telescopes, with a resolution of 0

Satoshi Miyazaki; Yutaka Komiyama; Maki Sekiguchi; Sadanori Okamura; Mamoru Doi; Hisanori Furusawa; Masaru Hamabe; Katsumi Imi; Masahiko Kimura; Fumiaki Nakata; Norio Okada; Masami Ouchi; Kazuhiro Shimasaku; Masafumi Yagi; Naoki Yasuda

2002-01-01

373

High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage  

NASA Astrophysics Data System (ADS)

A fully solid state high-speed video analysis system is presented. It is based on the use of several independent charge-coupled device (CCD) imagers, each shuttered by a liquid crystal light valve. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording from exposure timing to image capture and storage to playback on a standard video monitor. A prototype with three CCD imagers and shutters has been built. It has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. For slower phenomena, recordings in continuous light are also possible by using the shutters themselves to control the exposure time. The system records full-screen black and white images with spatial resolution approaching that of standard television, at rates up to 5000 images per second.

Racca, Roberto G.; Stephenson, Owen; Clements, Reginald M.

1992-06-01

374

Single camera based spectral domain polarization sensitive optical coherence tomography  

Microsoft Academic Search

We developed a new spectral domain polarization sensitive optical coherence tomography (SD PS-OCT) system that requires only a single spectrometer CCD camera. The spectra of the horizontal and vertical polarization channels are imaged adjacent to each other on a 2048 pixel line scan camera, using 1024 pixels for each channel. Advantages of the system are reduced costs and complexity, lower
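
A minimal processing sketch for such a single-camera arrangement, assuming k-linearized spectra: the 2048-pixel line is split into the two 1024-pixel polarization channels, each is Fourier transformed into a depth profile, and the commonly used PS-OCT expressions give reflectivity and retardation. This simplifies, and is not identical to, the authors' pipeline.

```python
import numpy as np

def process_spectrum(line, dc_h=None, dc_v=None):
    """line: 2048-sample spectrum; returns (reflectivity, retardation) vs depth."""
    spec_h = line[:1024].astype(float)            # horizontal polarization channel
    spec_v = line[1024:].astype(float)            # vertical polarization channel
    if dc_h is not None and dc_v is not None:
        spec_h, spec_v = spec_h - dc_h, spec_v - dc_v   # remove the reference (DC) spectra
    a_h = np.abs(np.fft.rfft(spec_h))             # depth-resolved amplitudes
    a_v = np.abs(np.fft.rfft(spec_v))
    reflectivity = a_h**2 + a_v**2                # polarization-independent reflectivity
    retardation = np.arctan2(a_v, a_h)            # 0 .. pi/2 per depth pixel
    return reflectivity, retardation
```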

Bernhard Baumann; Erich Götzinger; Michael Pircher; Christoph K. Hitzenberger

2007-01-01

375

Video Toroid Cavity Imager  

DOEpatents

A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

2004-08-10

376

STS-135 Fused Launch Video  

NASA Video Gallery

Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video of the STS-135 launch by merging images taken by a set of six cameras capturing fi...

377

Space solar EUV and x-ray imaging camera  

NASA Astrophysics Data System (ADS)

The space solar EUV and X-ray imaging camera is the core of the solar EUV and X-ray imaging telescope, which is designed to monitor and predict solar activities such as CMEs, flares and coronal holes. This paper presents a comprehensive description of the camera, from detector selection to electronics system design, and gives some experimental results. The camera adopts a back-illuminated X-ray-sensitive CCD as the detector, and the support electronics are designed around it; the electronics system must be designed according to the CCD structure. This article describes the three modules and the software in the electronics system, and offers possible solutions to some common problems in CCD camera design. Detailed descriptions of the drivers and of the signal amplifying and processing module are given.

Peng, Ji-Long; Li, Bao-Quan; Wei, Fei; Zhang, Xin; Liu, Xin; Zeng, Zhi-Rong

2008-10-01

378

Modular integrated video system  

SciTech Connect

The Modular Integrated Video System (MIVS) is intended to provide a simple, highly reliable closed circuit television (CCTV) system capable of replacing the IAEA Twin Minolta Film Camera Systems in those safeguards facilities where mains power is readily available, and situations where it is desired to have the CCTV camera separated from the CCTV recording console. This paper describes the MIVS and the Program Plan which is presently being followed for the development, testing, and implementation of the system.

Gaertner, K.J.; Heaysman, B.; Holt, R.; Sonnier, C.

1986-01-01

379

CCD readout electronics for the Subaru Prime Focus Spectrograph  

NASA Astrophysics Data System (ADS)

The following paper details the design for the CCD readout electronics for the Subaru Telescope Prime Focus Spectrograph (PFS). PFS is designed to gather spectra from 2394 objects simultaneously, covering wavelengths that extend from 380 nm to 1260 nm. The spectrograph is comprised of four identical spectrograph modules, each collecting roughly 600 spectra. The spectrograph modules provide simultaneous wavelength coverage over the entire band through the use of three separate optical channels: blue, red, and near infrared (NIR). A camera in each channel images the multi-object spectra onto a 4k × 4k, 15 μm pixel, detector format. The two visible cameras use a pair of Hamamatsu 2k × 4k CCDs with readout provided by custom electronics, while the NIR camera uses a single Teledyne HgCdTe 4k × 4k detector and Teledyne's ASIC Sidecar to read the device. The CCD readout system is a custom design comprised of three electrical subsystems - the Back End Electronics (BEE), the Front End Electronics (FEE), and a Pre-amplifier. The BEE is an off-the-shelf PC104 computer, with an auxiliary Xilinx FPGA module. The computer serves as the main interface to the Subaru messaging hub and controls other peripheral devices associated with the camera, while the FPGA is used to generate the necessary clocks and transfer image data from the CCDs. The FEE board sets clock biases, substrate bias, and CDS offsets. It also monitors bias voltages, offset voltages, power rail voltage, substrate voltage and CCD temperature. The board translates LVDS clock signals to biased clocks and returns digitized analog data via LVDS. Monitoring and control messages are sent from the BEE to the FEE using a standard serial interface. The Pre-amplifier board resides behind the detectors and acts as an interface to the two Hamamatsu CCDs. The Pre-amplifier passes clocks and biases to the CCDs, and analog CCD data is buffered and amplified prior to being returned to the FEE. In this paper we describe the detailed design of the PFS CCD readout electronics and discuss current status of the design, preliminary performance, and proposed enhancements.
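
As a software illustration of the "CDS offsets" mentioned above, the following sketch shows the correlated double sampling arithmetic performed per pixel: the video waveform is sampled once at the reset level and once at the signal level, and their difference (plus a programmable offset) is what gets digitized. The voltage-to-ADU scale and clipping are assumptions, not the FEE's actual implementation.

```python
import numpy as np

def cds(reset_samples, signal_samples, offset_adu=0, gain_adu_per_volt=16384.0):
    """Correlated double sampling: per-pixel reset and signal voltage samples
    in, 16-bit counts out. Transferred charge lowers the output level, so the
    difference reset - signal is proportional to the collected charge."""
    diff_v = np.asarray(reset_samples, dtype=float) - np.asarray(signal_samples, dtype=float)
    counts = diff_v * gain_adu_per_volt + offset_adu
    return np.clip(np.round(counts), 0, 65535).astype(np.uint16)

# example: three pixels with increasing signal, a small fixed CDS offset
print(cds([1.00, 1.00, 1.00], [0.999, 0.990, 0.900], offset_adu=100))
```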

Hope, Stephen C.; Gunn, James E.; Loomis, Craig P.; Fitzgerald, Roger E.; Peacock, Grant O.

2013-07-01

380

CCD readout electronics for the Subaru Prime Focus Spectrograph  

NASA Astrophysics Data System (ADS)

The following paper details the design for the CCD readout electronics for the Subaru Telescope Prime Focus Spectrograph (PFS). PFS is designed to gather spectra from 2394 objects simultaneously, covering wavelengths that extend from 380 nm to 1260 nm. The spectrograph is comprised of four identical spectrograph modules, each collecting roughly 600 spectra. The spectrograph modules provide simultaneous wavelength coverage over the entire band through the use of three separate optical channels: blue, red, and near infrared (NIR). A camera in each channel images the multi-object spectra onto a 4k × 4k, 15 μm pixel, detector format. The two visible cameras use a pair of Hamamatsu 2k × 4k CCDs with readout provided by custom electronics, while the NIR camera uses a single Teledyne HgCdTe 4k × 4k detector and Teledyne's ASIC Sidecar to read the device. The CCD readout system is a custom design comprised of three electrical subsystems - the Back End Electronics (BEE), the Front End Electronics (FEE), and a Pre-amplifier. The BEE is an off-the-shelf PC104 computer, with an auxiliary Xilinx FPGA module. The computer serves as the main interface to the Subaru messaging hub and controls other peripheral devices associated with the camera, while the FPGA is used to generate the necessary clocks and transfer image data from the CCDs. The FEE board sets clock biases, substrate bias, and CDS offsets. It also monitors bias voltages, offset voltages, power rail voltage, substrate voltage and CCD temperature. The board translates LVDS clock signals to biased clocks and returns digitized analog data via LVDS. Monitoring and control messages are sent from the BEE to the FEE using a standard serial interface. The Pre-amplifier board resides behind the detectors and acts as an interface to the two Hamamatsu CCDs. The Pre-amplifier passes clocks and biases to the CCDs, and analog CCD data is buffered and amplified prior to being returned to the FEE. In this paper we describe the detailed design of the PFS CCD readout electronics and discuss current status of the design, preliminary performance, and proposed enhancements.

Hope, Stephen C.; Gunn, James E.; Loomis, Craig P.; Fitzgerald, Roger E.; Peacock, Grant O.

2014-07-01

381

The Crimean CCD telescope for the asteroid observations  

NASA Astrophysics Data System (ADS)

The old 64-cm Richter-Slefogt telescope (F=90 cm) of the Crimean Astrophysical Observatory was reconstructed and equipped with an ST-8 CCD camera supplied by the Planetary Society under the Eugene Shoemaker Near Earth Object Grant. The first observations of minor planets and comets were made with the telescope in 2000. The CCD matrix of the ST-8 camera in the focus of our telescope covers a field of 52'.7×35'.1. With a 120-second exposure we obtain images of stars down to a limiting magnitude of 20.5 mag at S/N=3. The first phase of automation of the telescope was completed in May 2002. According to our estimates, the telescope will be able to cover a sky area of 20 square degrees with threefold overlapping during a night. Software for object localization, determination of image parameters, star identification, astrometric reduction, and identification and cataloguing of asteroids has been developed. The first observational results obtained with the 64-cm CCD telescope are discussed.

Chernykh, Nikolaj; Rumyantsev, Vasilij

2002-11-01

382

The Crimean CCD Telescope for the asteroid observations  

NASA Astrophysics Data System (ADS)

The old 64-cm Richter-Slefogt telescope (F=90 cm) of the Crimean Astrophysical Observatory was reconstructed and equipped with the SBIG ST-8 CCD camera received from the Planetary Society under the Eugene Shoemaker Near Earth Object Grant. The first observations of minor planets and comets were made with it. The CCD matrix of the ST-8 camera in the focus of our telescope covers a field of 52'.7 x 35'.1. A 120-second exposure yields stars down to a limiting magnitude of 20.5 at S/N=3. According to preliminary estimates, the telescope in its present state enables us to cover, during the year, a sky area of not more than 600 sq. deg. with threefold overlaps. Automation of the telescope could increase the productivity up to 20000 sq. deg. per year. Software for object localization, determination of image parameters, star identification, astrometric reduction, and identification and cataloguing of asteroids has been developed. The first results obtained with the Crimean 64-cm CCD telescope are discussed.

Chernykh, N. S.; Rumyantsev, V. V.

2002-09-01

383

Camera designs for the Keck Observatory LRIS and HIRES spectrometers  

Microsoft Academic Search

The optical designs of three CCD cameras for the Keck Observatory spectrometers are described and illustrated with drawings and diagrams. The 8-element camera for the Low-Resolution Imaging Spectrometer (LRIS) has focal length 12.0 inches, entrance aperture 8.9 inches, and flat FOV diameter 2.93 inches; the camera is color corrected without refocusing over the spectral range 390-1000 nm. For the High-Resolution

Harland W. Epps

1990-01-01

384

Cameras Monitor Spacecraft Integrity to Prevent Failures  

NASA Technical Reports Server (NTRS)

The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

2014-01-01

385

Optimising Camera Traps for Monitoring Small Mammals  

PubMed Central

Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

2013-01-01

386

Case Study of Live View System using Union-Camera  

Microsoft Academic Search

Multimedia communications such as video streaming have recently come into use. Watching video from remotely located cameras is one way of using streaming. In keeping with this situation, we gave a demonstration of live streaming between the Korea Advanced Institute of Science and Technology (KAIST) and Kyushu University. We set cameras at Fukuoka Tower that

Hiroto HARADA; Koji OKAMURA

387

Snowfall Retrievals Using a Video Disdrometer  

NASA Astrophysics Data System (ADS)

A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in Eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a grayscale CCD video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, variability of particle size distributions for the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship from the storm. These results along with several other cases will be shown during the presentation.

Newman, A. J.; Kucera, P. A.

2004-12-01

388

CCD photometry of bright stars using objective wire mesh  

E-print Network

Obtaining accurate photometry of bright stars from the ground remains tricky because of the danger of overexposing the target and/or the lack of a suitable nearby comparison star. The century-old method of an objective wire mesh used to produce multiple stellar images seems attractive for precision CCD photometry of such stars. Our tests on beta Cep and its comparison star, differing by 5 magnitudes, prove very encouraging. Using a CCD camera and a 20 cm telescope with the objective covered with a plastic wire mesh, located at a site with poor weather conditions, we obtained differential photometry with a precision of 4.5 mmag per 2 min exposure. Our technique is flexible and may be tuned to cover a magnitude range as large as 6-8 magnitudes. We discuss the possibility of installing a wire mesh directly in the filter wheel.

Kamiński, Krzysztof; Zgórz, Marika

2014-01-01

389

Detection of moving objects in airborne thermal videos  

Microsoft Academic Search

Thermal infrared cameras have the capability to operate day and night and to display moving objects in image sequences sampled at video frame rate or better. However, compared to standard video in the visual domain, these cameras have disadvantages concerning geometric resolution. In this paper, a method for the detection of moving objects in airborne thermal videos is presented.

M. Kirchhof; U. Stilla

2006-01-01

390

Exposing Digital Forgeries in Interlaced and De-Interlaced Video  

E-print Network

… tampering can disturb this relationship. Both algorithms are then adapted slightly to detect frame rate … number of video surveillance cameras are also giving rise to an enormous amount of video data … de-interlaced video, we quantify the correlations introduced by the camera or software de-interlacing algorithms …

Farid, Hany

391

CCD photometry of 625 Xenia  

NASA Astrophysics Data System (ADS)

625 Xenia was observed for eight nights using CCD photometry during April and May 1998. The period of rotation was 21.101 ± 0.032 hours, and the lightcurve had an amplitude of 0.50 ± 0.05 magnitude.

Worman, W. E.; Fieber, Sherry; Creason, Mary A.

392

CCD Readout Electronics for the Subaru Prime Focus Spectrograph  

E-print Network

We present details of the design for the CCD readout electronics for the Subaru Telescope Prime Focus Spectrograph (PFS). The spectrograph is comprised of four identical spectrograph modules, each collecting roughly 600 spectra. The spectrograph modules provide simultaneous wavelength coverage over the entire band from 380 nm to 1260 nm through the use of three separate optical channels: blue, red, and near infrared (NIR). A camera in each channel images the multi-object spectra onto a 4k x 4k, 15 um pixel, detector format. The two visible cameras use a pair of Hamamatsu 2k x 4k CCDs with readout provided by custom electronics, while the NIR camera uses a single Teledyne HgCdTe 4k x 4k detector and ASIC Sidecar to read the device. The CCD readout system is a custom design comprised of three electrical subsystems: the Back End Electronics (BEE), the Front End Electronics (FEE), and a Pre-amplifier. The BEE is an off-the-shelf PC104 computer, with an auxiliary Xilinx FPGA module. The computer serves as the main...

Hope, Stephen C; Loomis, Craig P; Fitzgerald, Roger E; Peacock, Grant O

2014-01-01

393

Flame recognition in video  

Microsoft Academic Search

This paper presents an automatic system for fire detection in video sequences. There are many previous methods to detect fire; however, all except two use spectroscopy or particle sensors. The two that use visual information suffer from the inability to cope with a moving camera or a moving scene. One of these is not able to work on general data,

Walter Phillips; M. Shah; N. Da Vitoria Lobo

2000-01-01

394

Optical stereo video signal processor  

NASA Technical Reports Server (NTRS)

An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

Craig, G. D. (inventor)

1985-01-01

395

Robotic CCD Microscope for Enhanced Crystal Recognition.  

National Technical Information Service (NTIS)

A robotic CCD microscope and procedures to automate crystal recognition are studied. The robotic CCD microscope and procedures enable more accurate crystal recognition, leading to fewer false negatives and fewer false positives, and enable detection of sm...

B. W. Segeike, D. Toppani

2006-01-01

396

Movable Cameras And Monitors For Viewing Telemanipulator  

NASA Technical Reports Server (NTRS)

Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

Diner, Daniel B.; Venema, Steven C.

1993-01-01

397

Parallelization Techniques for Spatial-Temporal Occupancy Maps from Multiple Video Streams  

E-print Network

environments. … For this work, a network of video cameras resembling a security … video cameras to create a spatial-temporal occupancy map. Instead of tracking objects, the algorithm operates … network is assumed. The cameras are all connected to a single computer that processes the video feeds

Jones, William Michael

398

Innovative camera system developed for Sprint vehicle  

SciTech Connect

A new inspection system for the Sprint 101 ROV eliminates parallax errors because all three camera modules use a single lens for viewing. Parallax is the apparent displacement of an object when it is viewed from two points not in the same line of sight. The central camera is a Pentax 35-mm single lens reflex with a 28-mm lens. It comes with 250-shot film cassettes, an automatic film wind-on, and a data chamber display. An optical transfer assembly on the stills camera viewfinder transmits the image to one of the two video camera modules. The video picture transmitted to the surface is exactly the same as the stills photo. The surface operator can adjust the focus by viewing the video display.

Not Available

1985-04-01

399

VIDEO-TO-3D Marc Pollefeys  

E-print Network

acquisition systems. This stimulates the use of consumer photo or video cameras. The approach presented … KEY WORDS: 3D modeling, video sequences, structure from motion, self-calibration, stereo matching

Pollefeys, Marc

400

Appendix E: Software Video Analysis of Motion  

E-print Network

using a computer and data acquisition software. This appendix will guide a person somewhat familiar … Using video to analyze motion is a two-step process. The first step is recording a video. This process uses the video software to record the images from the camera and compress the file. The second

Minnesota, University of

401

Design of high speed camera based on CMOS technology  

NASA Astrophysics Data System (ADS)

The capability of a high-speed camera to take high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can take images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet direction measurement in the military. The high-speed camera system built in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280*1024; an FPGA and DDR2 memory that control the image sensor and save images; a Camera Link module that transmits saved data to a PC; and an RS-422 communication function that enables control of the camera from a PC.

Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

2007-12-01

402

Performance of commercial CMOS cameras for high-speed multicolor photometry  

E-print Network

We present some results of testing of commercial color CMOS cameras for astronomical applications. CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in CMOS sensors is close to the Johnson BVR system. We demonstrate the transformation from the Bayer color system to the Johnson one. Our photometric measurements with color CMOS cameras coupled to small telescopes (11 - 30 inch) reveal that in video mode stars up to V ~ 9 can be shot at 24 frames per second. Using a high-speed CMOS camera with short exposure times (10 - 20 ms) we can perform an imaging mode called "lucky imaging". We can pick out high quality frames and combine them into a single image using the "shift-and-add" technique. This allows us to obtain an image with much higher resolution than would be possible shooting a single image with a long exposure. For image selection we use the Strehl-selection method. We demonstrate advan...
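
The record describes "lucky imaging" with a high-speed CMOS camera: selecting the sharpest short exposures and combining them with shift-and-add. The sketch below is a minimal NumPy version of that idea, using peak pixel brightness as a simple stand-in for the Strehl-based frame selection mentioned in the abstract; the synthetic data and the lucky_shift_and_add helper are illustrative assumptions, not the authors' pipeline.

```python
# Minimal shift-and-add sketch for "lucky imaging", assuming `frames` is an
# (N, H, W) array of short exposures of a bright star. Frame quality is
# ranked by peak pixel value as a simple stand-in for Strehl selection.
import numpy as np

def lucky_shift_and_add(frames: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    n_keep = max(1, int(len(frames) * keep_fraction))
    order = np.argsort([f.max() for f in frames])[::-1][:n_keep]  # sharpest frames first
    h, w = frames.shape[1:]
    ref = np.array([h // 2, w // 2])           # align every peak to the frame centre
    stacked = np.zeros((h, w))
    for idx in order:
        peak = np.array(np.unravel_index(np.argmax(frames[idx]), (h, w)))
        dy, dx = ref - peak                    # integer shift that centres the peak
        stacked += np.roll(np.roll(frames[idx], dy, axis=0), dx, axis=1)
    return stacked / n_keep

# Example with synthetic frames: background noise plus a "star" at a fixed pixel
rng = np.random.default_rng(0)
frames = rng.poisson(5.0, size=(100, 64, 64)).astype(float)
frames[:, 30, 33] += 50.0
result = lucky_shift_and_add(frames, keep_fraction=0.2)
```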

Pokhvala, S M; Reshetnyk, V M

2013-01-01

403

Real-time full-field photoacoustic imaging using an ultrasonic camera  

NASA Astrophysics Data System (ADS)

A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12×12 mm CCD chip with 120×120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified.

Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F.; Krishnaswamy, Sridhar

2010-03-01

404

Observational Astronomy Gain of a CCD  

E-print Network

Observational Astronomy ASTR 310 Fall 2010 Project 1 Gain of a CCD 1 Introduction The electronics associated with a CCD typically include clocking circuits to move the charge in each pixel over to a shift (both the shift register and the analog amplifier are usually part of the CCD chip itself

Harrington, J. Patrick

405

Observational Astronomy Gain of a CCD  

E-print Network

Observational Astronomy ASTR 310 Fall 2005 Project 1 Gain of a CCD 1 Introduction The electronics associated with a CCD typically include clocking circuits to move the charge in each pixel over to a shift (both the shift register and the analog amplifier are usually part of the CCD chip itself

Veilleux, Sylvain

406

CCD temperature control CTIO 60 inches Chiron  

E-print Network

CHI60HF4.1, La Serena, November 2009. … The goal of this brief report is to summarize the CCD temperature control of the Chiron at the 60 inch telescope

Tokovinin, Andrei A.

407

Observational Astronomy Gain of a CCD  

E-print Network

Observational Astronomy ASTR 310 Fall 2006 Project 1 Gain of a CCD 1 Introduction The electronics associated with a CCD typically include clocking circuits to move the charge in each pixel over to a shift (both the shift register and the analog amplifier are usually part of the CCD chip itself

Veilleux, Sylvain

408

CCD DEVELOPMENT PROGRESS AT LAWRENCE BERKELEY NATIONAL  

E-print Network

W. F. Kolbe, S. E. Holland and C. J. Bebek, Lawrence Berkeley National Laboratory. Abstract: P-channel CCD imagers, 200-300 µm thick. … We have developed fully depleted, back-illuminated CCD imagers fabricated on high-resistivity, n

409

Demosaicing: Image Reconstruction from Color CCD Samples  

E-print Network

Ron Kimmel, Computer Science Department … an algorithm for image reconstruction from CCD sensor samples. The proposed method involves two successive … color CCD sensors' information. We limit our discussion to the Bayer color filter array (CFA) pattern

Salvaggio, Carl

410

CCD Observing Manual 49 Bay State Road  

E-print Network

The AAVSO CCD Observing Manual, AAVSO, 49 Bay State Road, Cambridge, MA 02138 … to using CCDs to make variable star observations. The target audience is beginner to intermediate level CCD observers, although advanced CCD users who have not done any photometry will also find this helpful

Ellingson, Steven W.

411

Effects on beam alignment due to neutron-irradiated CCD images at the National Ignition Facility  

NASA Astrophysics Data System (ADS)

The 192 laser beams in the National Ignition Facility (NIF) are automatically aligned to the target-chamber center using images obtained through charge-coupled-device (CCD) cameras. Several of these cameras are in and around the target chamber during an experiment. Current experiments for the National Ignition Campaign are attempting to achieve nuclear fusion. Neutron yields from these high-energy fusion shots expose the alignment cameras to neutron radiation. The present work explores modeling and predicting laser alignment performance degradation due to neutron radiation effects, and introduces techniques to mitigate performance degradation. Camera performance models have been created based on the predicted camera noise from the cumulative neutron fluence at the camera location. We have found that the effect of the neutron-generated noise for all shots to date has been well within the alignment tolerance of half a pixel, and image processing techniques can be utilized to further reduce the effect of this noise on the alignment of the beam to the target.
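
The abstract notes that image processing can further reduce the effect of neutron-generated noise on beam-to-target alignment. One plausible approach, sketched below purely as an assumption and not as the NIF pipeline, is to suppress isolated radiation-induced bright pixels with a median filter before computing an intensity-weighted centroid of the alignment image.

```python
# Hedged sketch (not the NIF pipeline): suppress isolated radiation-induced
# bright pixels with a median filter, then locate the beam feature by an
# intensity-weighted centroid.
import numpy as np
from scipy.ndimage import median_filter

def beam_centroid(image: np.ndarray) -> tuple[float, float]:
    clean = median_filter(image, size=3)     # removes single-pixel radiation hits
    clean = clean - np.median(clean)         # crude background subtraction
    clean[clean < 0] = 0
    total = clean.sum()
    if total <= 0:
        raise ValueError("no signal above background")
    ys, xs = np.indices(clean.shape)
    return (float((ys * clean).sum() / total),
            float((xs * clean).sum() / total))
```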

Awwal, Abdul A. S.; Manuel, Anastacia; Datte, Philip; Eckart, Mark; Jackson, Mark; Azevedo, Steve; Burkhart, Scott

2011-09-01

412

Video compressive sensing using gaussian mixture models.  

PubMed

A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression. PMID:25095253
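
The abstract states that reconstruction under a GMM prior can be computed from analytic expressions. For a linear measurement y = Hx + n with Gaussian noise, the posterior is again a Gaussian mixture and the MMSE estimate is a responsibility-weighted sum of per-component Wiener estimates; the sketch below illustrates that closed form on a toy problem. The dimensions, the gmm_mmse_estimate name, and all numbers are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the analytic GMM-based inversion for y = H x + n,
# n ~ N(0, sigma2 * I), with a Gaussian mixture prior on the patch x.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse_estimate(y, H, weights, means, covs, sigma2):
    """Posterior-mean (MMSE) reconstruction of x from the compressed measurement y."""
    m = H.shape[0]
    post_means, log_resp = [], []
    for w, mu, S in zip(weights, means, covs):
        C = H @ S @ H.T + sigma2 * np.eye(m)          # marginal covariance of y
        gain = S @ H.T @ np.linalg.inv(C)             # Wiener-style gain for this component
        post_means.append(mu + gain @ (y - H @ mu))   # component posterior mean
        log_resp.append(np.log(w) + multivariate_normal.logpdf(y, mean=H @ mu, cov=C))
    log_resp = np.array(log_resp)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                                # component responsibilities
    return sum(r * pm for r, pm in zip(resp, post_means))

# Tiny synthetic example: an 8-dimensional patch, 3 measurements, 2 components
rng = np.random.default_rng(1)
n, m, K = 8, 3, 2
H = rng.standard_normal((m, n))
weights = [0.5, 0.5]
means = [rng.standard_normal(n) for _ in range(K)]
covs = [np.eye(n) * s for s in (0.5, 2.0)]
x_true = means[0] + 0.7 * rng.standard_normal(n)
y = H @ x_true + 0.01 * rng.standard_normal(m)
x_hat = gmm_mmse_estimate(y, H, weights, means, covs, sigma2=1e-4)
```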

Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

2014-11-01

413

Holistic video detection  

NASA Astrophysics Data System (ADS)

There are large numbers of CCTV cameras collecting colossal amounts of video data about people and their behaviour. However, this overwhelming amount of data also causes an overflow of information if its content is not analysed in a wider context to provide selective focus and automated alert triggering. To date, truly semantics-based video analytic systems do not exist. There is an urgent need for the development of automated systems to monitor holistically the behaviours of people, vehicles and the whereabouts of objects of interest in public space. In this work, we highlight the challenges and recent progress towards building computer vision systems for holistic video detection in a distributed network of multiple cameras based on object localisation, categorisation and tagging from different views in highly cluttered scenes.

Gong, Shaogang

2007-10-01

414

Global Trajectory Construction across Multi-cameras via Graph Matching  

Microsoft Academic Search

Behavior analysis across multiple cameras is becoming more and more popular with the rapid development of camera networks in video surveillance. In this paper, we propose a novel unsupervised graph matching framework to associate trajectories across partially overlapping cameras. Firstly, trajectory extraction is based on object extraction and tracking, followed by a homographic projection onto a mosaic plane. And we extract

Xiaobin Zhu; Jing Liu; Jinqiao Wang; Wei Fu; Hanqing Lu; Yikai Fang

2011-01-01

415

Football players and ball trajectories projection from single camera's image  

Microsoft Academic Search

In this paper, we propose a method to track multiple players in a football match video captured by a single camera. The camera pans to bring the players and the ball into view, enabling the whole pitch to be recorded. The players' trajectories in the frames and the camera movement are obtained to estimate the trajectories on the pitch. Moreover,

Hirokatsu Kataoka; Yoshimitsu Aoki

2011-01-01

416

High-resolution camera for SAM: instrument manual Author:A. Tokovinin  

E-print Network

of the detector, mechanical structure, filter wheel and optics. The optics forms an enlarged image on the CCD. … Components (vendor, cost in USD): Luca DL658 CCD camera, andor.com, 9000; filter wheel CFW10-SA, sbig.com, 995; UBVRI … At this temperature, some 540 "hot pixels" can be identified with high dark currents. About 80% of hot pixels have

Tokovinin, Andrei A.

417

The AXAF CCD imaging spectrometer  

NASA Technical Reports Server (NTRS)

The current status of the instrument design and the status of the CCDs being fabricated for the AXAF CCD Imaging Spectrometer (ACIS) are summarized. The instrument consists of an image recording array of CCDs and a linear arrangement of CCDs to record the spectra formed by the objective grating spectrometer. Both arrays employ CCDs with pixel dimensions which correspond to about 0.5 arcsec samples of the image. The CCDs provide moderate spectral resolution and good detection efficiency over the energy range 0.5 to 10 keV. Spectral resolution of 200 or more is achievable using the objective grating with the grating array. Radiation damage effects are shown to degrade the energy resolution of CCDs. Specially designed CCD pixel architecture is employed together with shielding and low temperature operation to slow the effects of radiation damage.

Garmire, G. P.; Ricker, G. R.; Bautz, M. W.; Burke, B.; Burrows, D. N.; Collins, S. A.; Doty, J. P.; Gendreau, K.; Lumb, D. H.; Nousek, J. A.

1992-01-01

418

Automated Meteor Fluxes with a Wide-Field Meteor Camera Network  

NASA Technical Reports Server (NTRS)

Within NASA, the Meteoroid Environment Office (MEO) is charged to monitor the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter-size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of #/sq km/hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux. As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First a weather algorithm indicates if sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes vs camera intensities. The stellar limiting magnitude is derived from this and easily converted to a limiting meteor magnitude for the active shower or sporadic source.
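
The abstract explains that the MEO flux algorithm divides the common camera volume into height intervals, computes a flux for each interval from that interval's own collecting area, and sums the per-interval fluxes. The sketch below expresses that bookkeeping in the stated units (meteors per square kilometer per hour); the counts, areas, and observing time are placeholders, not values from the MEO system.

```python
# Hedged sketch of the height-binned flux described in the abstract: each
# height interval contributes N_i detections over a collecting area A_i (km^2)
# during T hours, and the total flux is the sum of the per-interval fluxes.

def total_meteor_flux(counts, areas_km2, hours):
    """Flux in meteors per km^2 per hour, summed over height intervals."""
    return sum(n / (a * hours) for n, a in zip(counts, areas_km2))

# Illustrative example: three height intervals observed for 4 clear hours
counts = [12, 30, 18]            # detections per height interval (placeholder)
areas = [850.0, 920.0, 780.0]    # collecting area of each interval in km^2 (placeholder)
print(f"{total_meteor_flux(counts, areas, 4.0):.4f} meteors / km^2 / hour")
```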

Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.

2013-01-01

419

COMPRESSION INDEPENDENT OBJECT ENCRYPTION FOR ENSURING PRIVACY IN VIDEO SURVEILLANCE  

E-print Network

implications. The same privacy issues arise when surveillance cameras routinely record highway traffic. This allows the use of standard video encoders and decoders and also enables smart-cameras that output

Kalva, Hari

420

Synchronizing A Television Camera With An External Reference  

NASA Technical Reports Server (NTRS)

Improvement in genlock subsystem consists in incorporation of controllable delay circuit into path of composite synchronization signal obtained from external video source. Delay circuit helps to eliminate potential jitter in video display and ensures setup requirements for digital timing circuits of video camera satisfied.

Rentsch, Edward M.

1993-01-01

421

Recognizing physical activity from ego-motion of a camera  

Microsoft Academic Search

A new image based activity recognition method for a person wearing a video camera below the neck is presented in this paper. The wearable device is used to capture video data in front of the wearer. Although the wearer never appears in the video, his or her physical activity is analyzed and recognized using the recorded scene changes resulting from

Hong Zhang; Lu Li; Wenyan Jia; John D. Fernstrom; Robert J. Sclabassi; Mingui Sun

2010-01-01

422

Online camera-gyroscope autocalibration for cell phones.  

PubMed

The gyroscope is playing a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured, with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications, such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges fast to the ground truth values. PMID:25265608

Jia, Chao; Evans, Brian L

2014-12-01

423

Method to implement the CCD timing generator based on FPGA  

NASA Astrophysics Data System (ADS)

With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft core processor which is the controller of this generator. Some test results are presented in the end.

Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

2010-07-01

424

Innovative Solution to Video Enhancement  

NASA Technical Reports Server (NTRS)

Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering the underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

2001-01-01

425

An environmental change detection and analysis tool using terrestrial video  

E-print Network

We developed a prototype system to detect and flag changes between pairs of geo-tagged videos of the same scene with similar camera trajectories. The purpose of the system is to help human video analysts detect threats ...

Velez, Javier, M. Eng. Massachusetts Institute of Technology

2006-01-01

426

The Dark Energy Camera (DECam)  

E-print Network

In this paper we describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro-Tololo International Observatory (CTIO). It consists of a large mosaic CCD focal plane, a five element optical corrector, five filters (g,r,i,z,Y), a modern data acquisition and control system and the associated infrastructure for operation in the prime focus cage. The focal plane includes 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view and 12 smaller 2K x 2K CCDs for guiding, focus and alignment. The CCDs will be 250 micron thick fully-depleted CCDs that have been developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled for 2010.

K. Honscheid; D. L. DePoy; for the DES Collaboration

2008-10-20

427

HDTV camera using digital contour  

NASA Astrophysics Data System (ADS)

The authors have developed the HSC-100 solid-state high-definition TV camera. The camera promises a 6 dB S/N and +6 dB sensitivity far superior to conventional HDTV cameras due to its imaging device construction. It also improves picture quality through the use of a digital contour unit. To satisfy HDTV (SMPTE 240M) requirements, a photo-conductive layered semiconductor imaging device (PSID) with 2 million pixels has been developed. An amorphous silicon (a-Si) layer is added to the CCD scanner in this device. The a-Si layer carries out photoelectric conversion, then an interline-transfer CCD reads out the photo-induced electric charges. This configuration provides a pixel aperture ratio of 100%, thereby improving sensitivity compared with existing models. The layer structure also permits a wide dynamic range. A digital contour unit was developed to improve contour corrector characteristics. S/N and frequency response are improved by introducing digital signal processing. The 56 dB S/N value is achieved with an 8-bit A/D converter. This S/N is about 10 dB better than that for conventional ultrasonic delay lines. In addition, digital processing improves frequency response and delay time stability. A more natural contour correction characteristic has been attained with a contour correction signal derived from the luminance signal.
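
The record credits much of the picture-quality gain to a digital contour (aperture-correction) unit driven by the luminance signal. As a generic illustration only, and not the HSC-100 design, the sketch below derives a high-frequency detail signal from a luminance line with a 3-tap high-pass filter and adds a scaled copy back.

```python
# Illustrative contour correction on a luminance line: derive a high-frequency
# "contour" detail signal and add a scaled copy back. Generic sketch only.
import numpy as np

def contour_correct(luma: np.ndarray, gain: float = 0.5) -> np.ndarray:
    # Simple 3-tap high-pass (-1, 2, -1) as the contour detail signal
    detail = 2 * luma - np.roll(luma, 1) - np.roll(luma, -1)
    return luma + gain * detail

edge = np.concatenate([np.zeros(16), np.ones(16)])  # a soft luminance step
print(contour_correct(edge)[14:18])                 # shows pre/overshoot at the edge
```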

Sugiki, Tadashi; Nakao, Akria; Uchida, Tomoyuki

1992-08-01

428

Hazmat Cam Wireless Video System  

SciTech Connect

This paper describes the Hazmat Cam Wireless Video System and its application to emergency response involving chemical, biological or radiological contamination. The Idaho National Laboratory designed the Hazmat Cam Wireless Video System to assist the National Guard Weapons of Mass Destruction - Civil Support Teams during their mission of emergency response to incidents involving weapons of mass destruction. The lightweight, handheld camera transmits encrypted, real-time video from inside a contaminated area, or hot-zone, to a command post located a safe distance away. The system includes a small wireless video camera, a true-diversity receiver, viewing console, and an optional extension link that allows the command post to be placed up to five miles from danger. It can be fully deployed by one person in a standalone configuration in less than 10 minutes. The complete system is battery powered. Each rechargeable camera battery powers the camera for 3 hours with the receiver and video monitor battery lasting 22 hours on a single charge. The camera transmits encrypted, low frequency analog video signals to a true-diversity receiver with three antennas. This unique combination of encryption and transmission technologies delivers encrypted, interference-free images to the command post under conditions where other wireless systems fail. The lightweight camera is completely waterproof for quick and easy decontamination after use. The Hazmat Cam Wireless Video System is currently being used by several National Guard Teams, the US Army, and by fire fighters. The system has been proven to greatly enhance situational awareness during the crucial, initial phase of a hazardous response allowing commanders to make better, faster, safer decisions.

Kevin L. Young

2006-02-01

429

The Development of the Spanish Fireball Network Using a New All-Sky CCD System  

NASA Astrophysics Data System (ADS)

We have developed an all-sky charge-coupled device (CCD) automatic system for detecting meteors and fireballs that will be operative in four stations in Spain during 2005. The cameras were developed following the BOOTES-1 prototype installed at the El Arenosillo Observatory in 2002, which is based on a CCD detector of 4096 × 4096 pixels with a fish-eye lens that provides an all-sky image with enough resolution to make accurate astrometric measurements. Since late 2004, a couple of cameras at two of the four stations operate for 30 s in alternate exposures, allowing 100% time coverage. The stellar limiting magnitude of the images is +10 at the zenith, and +8 below ~65° of zenithal angle. As a result, the images provide enough comparison stars to make astrometric measurements of faint meteors and fireballs with an accuracy of ~2 arcminutes. Using this prototype, four automatic all-sky CCD stations have been developed, two in Andalusia and two in the Valencian Community, to start full operation of the Spanish Fireball Network. In addition to all-sky coverage, we are developing a fireball spectroscopy program using medium-field lenses with additional CCD cameras. Here we present the first images obtained from the El Arenosillo and La Mayora stations in Andalusia during their first months of activity. The detection of the Jan 27, 2003 superbolide of -17 ± 1 absolute magnitude that overflew Algeria and Morocco is an example of the detection capability of our prototype.

Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.; Llorca, J.; Fabregat, J.; Martínez, V. J.; Reglero, V.; Jelínek, M.; Kubánek, P.; Mateo, T.; Postigo, A. De Ugarte

2004-12-01

430

Video in Second Language Teaching: Using, Selecting, and Producing Video for the Classroom.  

ERIC Educational Resources Information Center

This text provides articles on the practical and principled uses of video cameras and VCRs in the English-as-a-Second-Language classrooms. Articles in this volume include: (1) "Teaching Communication Skills with Authentic Video" (Susan Stempleski); (2) "Using Video in Theme-Based Curricula" (Fredricka L. Stoller); (3) "Teaching Young Children with…

Stempleski, Susan, Ed.; Arcario, Paul, Ed.

431

Appendix D: Video Analysis of Motion Analyzing pictures (movies or videos) is a powerful tool for understanding how objects move.  

E-print Network

Analyzing pictures (movies or videos) is a powerful tool … on the desktop labeled VideoRECORDER. A window similar to the picture on the previous page should appear … on the video camera, you can alter both the magnification and the sharpness of the image until the picture

Minnesota, University of

432

InSb arrays: Astronomy with a 32x32 CCD/development of a 58x62 DRO  

NASA Technical Reports Server (NTRS)

Experience gained in operating infrared detector arrays for high sensitivity astronomical applications at the University of Rochester is summarized. Progress made in operating the 32 x 32 InSb array with bump-bonded silicon CCD readout is described. Astronomical work done with the 32 x 32 camera is also described. Plans for the future, including improvements to the 32 x 32 camera system as well as implementation of a new generation 58 x 62 InSb array using a switched-MOSFET direct readout multiplexing system in place of the older CCD technology, are discussed.

Forrest, W. J.; Pipher, J. L.

1986-01-01

433

Control of a remotely operated quadrotor aerial vehicle and camera unit using a fly-the-camera perspective  

Microsoft Academic Search

This paper presents a mission-centric approach to controlling the optical axis of a video camera mounted on a camera manipulator and fixed to a quadrotor remotely operated vehicle. A four-DOF quadrotor UAV model is combined with a two-DOF camera kinematic model to create a single system that provides full six-DOF actuation of the camera view. This work

DongBin Lee; Vilas Chitrakaran; Timothy Burg; Darren Dawson; Bin Xian

2007-01-01

434

Using a Web Cam CCD to do V Band Photometry  

NASA Astrophysics Data System (ADS)

With the plethora of cheap web cam based CCD cameras on the market today, it seemed expedient to find out if they can be used to do photometry. An experiment was planned to determine if it was possible to do this kind of exacting measurement. Arne Henden (AAVSO) believed it would be possible to do V band photometry to 0.05 mag accuracy with a web cam CCD. Using a 6" refractor, the heart of M42 was repeatedly imaged. Theta 2 and SAO 132322 were the comparison stars and V361 Orion was the target variable. Since the 1/4 HAD CCD chip only allows for a field of 10x7 arc minutes using the 6" refractor, the number of targets was limited. The RGB array on the chip itself provides the filters needed for photometry. The G band pass of the chip ranges from 425-650 nm with a peak at 540 nm; the V band pass is 475-645 nm with a peak at 525 nm. The results indicate that a web cam CCD can be used for V band photometry. With a 10 second calibrated exposure without the Peltier cooling being engaged, the results for the 2 target stars were within ± 0.18 mag. The star Theta 2 was 0.18 mag brighter in V than the actual measurement from the Tycho catalog. SAO 132322 was 0.012 mag dimmer than the listed Tycho measurement. Then, using SAO 132322 and Theta 2 as comparison stars, V361 Orion was estimated at 7.786 magnitudes. This is in line with visual estimates received before and after this date. With more estimates of known-magnitude comparison stars, a correction factor can be estimated and applied to the variable-star work to make it more accurate. This correction factor should bring it close to Arne Henden's estimate of 0.05 mag accuracy.
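
The measurements described here are standard differential photometry: the target magnitude follows from its flux ratio to a comparison star of known magnitude, m_var = m_comp - 2.5*log10(F_var/F_comp). The sketch below applies that relation; the flux values and the assumed catalog magnitude are placeholders, not numbers from the observations reported above.

```python
# Minimal differential-photometry sketch, assuming instrumental fluxes have
# already been measured from the web-cam frames (e.g. by aperture photometry).
# Star names follow the abstract; the numeric values are placeholders.
import math

def differential_magnitude(flux_target, flux_comp, mag_comp):
    """Magnitude of the target from a comparison star of known magnitude."""
    return mag_comp - 2.5 * math.log10(flux_target / flux_comp)

flux_v361_ori = 1.8e4      # instrumental flux of V361 Ori (placeholder counts)
flux_sao132322 = 6.1e4     # instrumental flux of SAO 132322 (placeholder counts)
mag_sao132322 = 6.38       # assumed catalog V magnitude (placeholder)
print(differential_magnitude(flux_v361_ori, flux_sao132322, mag_sao132322))
```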

Temple, Paul

2009-05-01

435

Development of a camera casing suited for cryogenic and vacuum applications  

E-print Network

We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

S. C. Delaquis; R. Gornea; S. Janos; M. Lüthi; Ch. Rudolf von Rohr; M. Schenk; J. -L. Vuilleumier

2013-10-24

436

Camera Projector  

NSDL National Science Digital Library

In this activity (posted on March 14, 2011), learners follow the steps to construct a camera projector to explore lenses and refraction. First, learners use relatively simple materials to construct the projector. Then, learners discover that lenses project images upside down and backwards. They explore this phenomenon by creating their own slides (must be drawn upside down and backwards to appear normally). Use this activity to also introduce learners to spherical aberration and chromatic aberration.

Center, Oakland D.

2011-01-01

437

Aerial Video Imaging  

NASA Technical Reports Server (NTRS)

When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo- Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

1991-01-01

438

Design of area array CCD image acquisition and display system based on FPGA  

NASA Astrophysics Data System (ADS)

With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; therefore, researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system have also been analyzed and solutions put forward. The FPGA works as the core processing unit in the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation has been used in the system. The FPGA drives the area array CCD; an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of the images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its performance meets the project requirements.

Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

2014-09-01

439

A low-cost, high-resolution, video-rate imaging optical radar  

SciTech Connect

Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F. [Sandia National Labs., Albuquerque, NM (United States); Grantham, J.W.; Monson, T. [Air Force Research Lab., Eglin AFB, FL (United States)

1998-04-01

440

Mobile Video in Everyday Social Interactions  

NASA Astrophysics Data System (ADS)

Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the wide reach that videos can have. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations); but also the other way around, the participants affect the video through their varying and evolving personal and communicational motivations for recording.

Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

441

Intelligent real-time CCD data processing system based on variable frame rate  

NASA Astrophysics Data System (ADS)

In order to meet the need for CCD imaging on unmanned aerial vehicles, a real-time high-resolution CCD data processing system based on a variable frame rate is designed. The system consists of three modules: a CCD control module, a data processing module and a data display module. In the CCD control module, real-time flight parameters (e.g. flight height, velocity and longitude) are received from GPS through a UART (Universal Asynchronous Receiver Transmitter), and the variable frame rate is calculated from the corresponding flight parameters. Based on the calculated variable frame rate, the CCD external synchronization control pulse signal is generated under FPGA control and the CCD data is then read out. In the data processing module, data segmentation is designed to extract the ROI (region of interest), whose resolution is equal to the valid data resolution of the HDTV standard conforming to SMPTE (1080i). On one hand, a ping-pong SRAM storage controller is designed in the FPGA to store ROI data in real time. On the other hand, according to the needs of intelligent observing, a changeable window position is designed, and a flexible area of interest is obtained. In the real-time display module, a special video encoder is used to accomplish data format conversion. Data after storage is packed into HDTV format by creating the corresponding format information in the FPGA. Through internal register configuration, a high-definition analog video signal is produced. The entire system has been implemented in an FPGA and validated. It has been used in various real-time CCD data processing situations.
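
The abstract describes computing a variable frame rate from GPS flight parameters such as height and velocity. A simple way to picture this, sketched below as an assumption rather than the paper's actual formula, is to choose the frame rate so that a nadir-looking camera keeps a fixed along-track overlap between successive frames.

```python
# Hedged sketch of the variable-frame-rate idea: pick the frame period so that
# consecutive frames of a nadir-looking camera keep a fixed along-track overlap.
# The geometry and parameter names are assumptions, not the paper's formula.
import math

def frame_rate_hz(ground_speed_mps, altitude_m, fov_deg, overlap=0.2):
    """Frames per second needed to retain `overlap` between successive frames."""
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    advance_per_frame = (1.0 - overlap) * footprint   # allowed ground advance per frame
    return ground_speed_mps / advance_per_frame

# Example: 60 m/s ground speed, 1500 m altitude, 30 degree along-track FOV
print(f"{frame_rate_hz(60.0, 1500.0, 30.0):.3f} fps")
```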

Chen, Su-ting

2009-07-01

442

Interventional video tomography  

NASA Astrophysics Data System (ADS)

Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery to visualize in real-time intraoperatively the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase accuracy and speed of the reconstruction the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure the cross sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross sectional imaging data with the live video image.

Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

1995-05-01

443

MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION  

EPA Science Inventory

Minirhizotrons in conjunction with a minirhizotron video camera system are becoming widely used tools for investigating root production and survival in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious, particularly when hundre...

444

Multi-Camera Activity Correlation Analysis Chen Change Loy, Tao Xiang and Shaogang Gong  

E-print Network

…camera activity analysis methods [6, 12] … Figure 1. Three consecutive frames from a typical public-space CCTV video … both assumptions are largely invalid for activities captured by CCTV cameras in public spaces …

Gong, Shaogang

445

Noise characterization of the linear CCD  

SciTech Connect

Work in evaluating the noise performance of linear charge-coupled devices (CCDs) is summarized. The noise determines the minimum amplitude signal the CCD is capable of capturing and therefore is one of the prime parameters in determining dynamic range. The noise after correlated double sampling and correction of some ground loops is shown to be equivalent to 150 rms electrons at the floating diffusion sensor on the CCD.
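
Correlated double sampling, mentioned in the abstract, subtracts a reset-level sample from the signal-level sample of each pixel so that the reset (kTC) noise common to both samples cancels, leaving only the uncorrelated read noise. The toy numerical illustration below demonstrates the cancellation; the noise figures are placeholders, not the 150 rms electron result reported.

```python
# Conceptual correlated-double-sampling sketch with synthetic placeholder noise.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 1024
reset_noise = rng.normal(0.0, 30.0, n_pixels)    # kTC reset noise, common to both samples
signal = rng.uniform(100.0, 5000.0, n_pixels)    # photo-charge in electrons

def read_noise():
    return rng.normal(0.0, 150.0, n_pixels)      # independent per-sample amplifier noise

sample_reset = reset_noise + read_noise()                # sample taken after pixel reset
sample_signal = reset_noise + signal + read_noise()      # sample taken after charge transfer
cds_output = sample_signal - sample_reset                # reset noise cancels exactly
print(f"residual noise after CDS: {np.std(cds_output - signal):.1f} e- rms")
```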

McConaghy, C.F.

1980-05-01

446

A CCD\\/CMOS image motion sensor  

Microsoft Academic Search

Presents a 1D image motion sensor with a 115-pixel linear image sensor and analog CCD/CMOS processors that correlates two image frames that are spatially shifted between -5 and +5 pixels, to estimate object motion over a range of ±1 to ±5000 pixels/s. The CCD/CMOS smart sensor chip is fabricated with a standard double poly, double metal, 2-µm CMOS/CCD process available
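
The sensor described here estimates motion by correlating two frames over spatial shifts of -5 to +5 pixels and taking the best-matching shift. The sketch below is a plain NumPy stand-in for that operation (which the chip performs in analog CCD/CMOS hardware); the array length and test signal are illustrative.

```python
# Software stand-in for the chip's operation: correlate two 1-D frames over
# shifts of -5..+5 pixels and return the shift with the highest correlation.
import numpy as np

def estimate_shift(frame_a: np.ndarray, frame_b: np.ndarray, max_shift: int = 5) -> int:
    shifts = list(range(-max_shift, max_shift + 1))
    valid = slice(max_shift, -max_shift)          # compare only the always-valid overlap
    scores = []
    for s in shifts:
        b = np.roll(frame_b, -s)
        scores.append(float(np.dot(frame_a[valid], b[valid])))
    return shifts[int(np.argmax(scores))]

# Example: a bright feature moved by 3 pixels between frames of 115 samples
a = np.zeros(115); a[50:55] = 1.0
b = np.roll(a, 3)
print(estimate_shift(a, b))   # expected: 3
```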

Massimo Gottardi; Woodward Yang

1993-01-01

447

A large imaging array CCD program  

NASA Technical Reports Server (NTRS)

Test results on charge coupled device (CCD) imaging arrays (100 x 160 pixel and 400 x 400 pixel) employed as imaging detectors are reported, along with expected low light level (LLL) performance. Reasons for selection of a thinned backside-illuminated buried-channel three-phase CCD variant are indicated. The LLL performance and long storage time capability of the CCD imaging array recommend it for stellar photometry, detection and tracking of faint objects, and other astronomical applications.

Vescelus, F. E.; Antcliffe, G. A.

1976-01-01

448