These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Machine vision: recent advances in CCD video camera technology  

Microsoft Academic Search

This paper describes four state-of-the-art digital video cameras, which provide advanced features that benefit computer image enhancement, manipulation, and analysis. These cameras were designed to reduce the complexity of imaging systems while increasing the accuracy, dynamic range, and detail enhancement of product inspections. Two cameras utilize progressive scan CCD sensors, enabling the capture of high-resolution images of moving objects

Richard A. Easton; Ronald J. Hamilton

1997-01-01

2

CCD Camera  

DOEpatents

A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

Roth, Roger R. (Minnetonka, MN)

1983-01-01

3

CCD Camera  

DOEpatents

A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

Roth, R.R.

1983-08-02

4

CCD Luminescence Camera  

NASA Technical Reports Server (NTRS)

New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed, where luminescence is typically found.

Janesick, James R.; Elliott, Tom

1987-01-01

5

Upgrading a CCD camera for astronomical use  

E-print Network

Existing charge-coupled device (CCD) video cameras have been modified to be used for astronomical imaging on telescopes in order to improve imaging times over those of photography. An astronomical CCD camera at the Texas A&M Observatory would...

Lamecker, James Frank

1993-01-01

6

Transmission electron microscope CCD camera  

DOEpatents

In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

Downing, Kenneth H. (Lafayette, CA)

1999-01-01

7

Biofeedback control analysis using a synchronized system of two CCD video cameras and a force-plate sensor  

NASA Astrophysics Data System (ADS)

The biofeedback control analysis of human movement has become increasingly important in rehabilitation, sports medicine and physical fitness. In this study, a synchronized system was developed for acquiring sequential data of a person's movement. The setup employs a video recorder system linked with two CCD video cameras and a force-plate sensor system, which are configured to stop and start simultaneously. The feedback-controlled movement of postural stability was selected as a subject for analysis. The person's center of body gravity (COG) was calculated from measured 3-D coordinates of major joints using videometry with bundle adjustment and self-calibration. The raw serial data of COG and of foot pressure measured by the force-plate sensor are difficult to analyze directly because of their complex fluctuations. Utilizing autoregressive modeling, the power spectrum and the impulse response of movement factors enable analysis of their dynamic relations. This new biomedical engineering approach provides efficient information for medical evaluation of a person's stability.

Tsuruoka, Masako; Shibasaki, Ryosuke; Murai, Shunji

1999-01-01
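The autoregressive step described in this abstract can be sketched in Python. This is an illustrative reconstruction under assumed details (AR order, Yule-Walker estimation, a synthetic test signal), not the authors' actual method:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(p) coefficients of a 1-D signal from the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])      # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])   # innovation variance
    return a, sigma2

def ar_power_spectrum(a, sigma2, freqs, fs=1.0):
    """AR model spectrum P(f) = sigma^2 / |1 - sum_k a_k exp(-2*pi*i*f*k/fs)|^2."""
    w = 2j * np.pi * np.asarray(freqs) / fs
    denom = 1 - sum(ak * np.exp(-w * (k + 1)) for k, ak in enumerate(a))
    return sigma2 / np.abs(denom) ** 2

# synthetic "sway" signal: an AR(2) process with a known spectral peak
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.9 * x[t - 2] + rng.standard_normal()

a, s2 = yule_walker(x, order=2)
freqs = np.linspace(0.0, 0.5, 256)
psd = ar_power_spectrum(a, s2, freqs)
```

On a real COG trace the estimated spectrum would expose the dominant sway frequencies; on this synthetic signal the recovered coefficients should be close to the true (1.5, -0.9).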

8

Interline transfer CCD camera  

SciTech Connect

An interline CCD sensing device for use in a camera system includes an imaging area, sensitive to impinging light, for generating charges corresponding to the intensity of the impinging light. Sixteen independent registers R1 - R16 sequentially receive the interline data from the imaging area, corresponding to the generated charges. Sixteen output amplifiers S1 - S16 and sixteen ports P1 - P16 sequentially transfer the interline data, one pixel at a time, in order to supply a desired image transfer speed. The imaging area is segmented into sixteen independent imaging segments A1 - A16, each of which corresponds to one register, one output amplifier, and one output port. Each one of the imaging segments A1 - A16 includes an array of rows and columns of pixels. Each pixel includes a photogate area, an interline CCD channel area, and an anti-blooming area. The anti-blooming area is, in turn, divided into an anti-blooming barrier and an anti-blooming drain.

Prokop, M.S.; McCurnin, T.W.; Stump, C.J.; Stradling, G.L.

1993-12-31

9

CCD Video Photography and Analysis of Comet Schwassman-Wachman 73P Fragments B & C Using the Low-Cost Meade DSI Camera  

NASA Astrophysics Data System (ADS)

Near Earth Object Comet Schwassman-Wachman 73P had its closest approach to Earth on May 12, 2006. A total of 874 8-second CCD exposures were collected for fragment B using the un-cooled, low-cost Meade DSI Camera Model 1. These images have been extensively processed and made into a dramatic video that captures over 2 hours of the comet's movement through a star field. This paper discusses image processing techniques such as image registration, automatic flat fielding, image restoration and estimation of the comet's position using tracking algorithms.

Gifford, S.

2007-05-01
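The abstract does not say which registration algorithm was used; one standard way to align star-field exposures before stacking is integer-pixel phase correlation, sketched here as an assumption:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame `a` to frame `b`
    using phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real           # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around to negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# toy frames: b is a copy of a rolled by (2, -3)
rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (2, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(a, b)   # dy, dx -> (2, -3)
```

Once per-frame shifts are known, the exposures can be de-rotated/shifted and co-added, or rendered in sequence as the video described above.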

10

Omnifocus video camera  

NASA Astrophysics Data System (ADS)

The omnifocus video camera takes videos in which objects at different distances are all in focus in a single video display. The omnifocus video camera consists of an array of color video cameras combined with a unique distance-mapping camera called the Divcam. The color video cameras are all aimed at the same scene, but each is focused at a different distance. The Divcam provides real-time distance information for every pixel in the scene. A pixel selection utility uses the distance information to select individual pixels from the multiple video outputs focused at different distances, in order to generate the final single video display that is everywhere in focus. This paper presents the principle of operation, design considerations, detailed construction, and overall performance of the omnifocus video camera. The major emphasis of the paper is the proof of concept, but the prototype has been developed enough to demonstrate the superiority of this video camera over a conventional video camera. The resolution of the prototype is high, capturing even fine details such as fingerprints in the image. Just as the movie camera was a significant advance over the still camera, the omnifocus video camera represents a significant advance over all-focus cameras for still images.

Iizuka, Keigo

2011-04-01
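The pixel selection utility described above can be sketched as follows; the array shapes and the nearest-focus-distance rule are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def omnifocus_composite(stack, focus_distances, depth_map):
    """For each pixel, take the value from the camera focused nearest to that
    pixel's measured distance. `stack` is (N, H, W), `focus_distances` (N,),
    `depth_map` (H, W) from the distance-mapping camera."""
    dists = np.asarray(focus_distances)[:, None, None]
    # index of the best-focused camera for every pixel
    idx = np.argmin(np.abs(depth_map[None, :, :] - dists), axis=0)
    return np.take_along_axis(stack, idx[None, :, :], axis=0)[0]

# toy example: two cameras focused at 1 m and 3 m
stack = np.stack([np.full((2, 2), 10.0), np.full((2, 2), 20.0)])
depth = np.array([[0.9, 1.1], [2.8, 3.2]])
out = omnifocus_composite(stack, [1.0, 3.0], depth)
# near pixels come from camera 0 (value 10), far pixels from camera 1 (value 20)
```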

11

3D - Laser Scanning: Integration of Point Cloud and CCD Camera Video Data for the Production of High Resolution and Precision RGB Textured Models: Archaeological Monuments Surveying Application in Ancient Ilida  

Microsoft Academic Search

SUMMARY In this project, techniques for the integration of 3D laser-scanning point cloud data and the video produced by the CCD camera are explored. This integration is employed in the production of high-accuracy, high-resolution RGB textured models and ortho-photo diagrams of archaeological monuments.

Vaios BALIS; Spyros KARAMITSOS; Ioannis KOTSIS; Christos LIAPAKIS

12

An auto-focusing CCD camera mount  

NASA Astrophysics Data System (ADS)

The traditional methods of focusing a CCD camera are either time-consuming, difficult or, more importantly, indecisive. This paper describes a device designed to allow the observer to be confident that the camera will always be properly focused, by sensing a selected star image and automatically adjusting the camera's focal position.

Arbour, R. W.

1994-08-01

13

Protocols conversion in remote controlling for CCD camera  

NASA Astrophysics Data System (ADS)

In the industrial and network monitoring fields, several protocols such as Pelco D/P are used for remote operation and are widely applied to control pan/tilt/zoom (PTZ) camera systems. But for universal CCD cameras, many incompatible communication protocols have been developed by different manufacturers. To extend these cameras' application in the remote monitoring field and improve their compatibility with controlling terminals, it is necessary to design a reliable protocol conversion module. This paper is aimed at realizing the recognition and conversion of different protocols for CCD cameras. The protocol conversion principle and algorithm are analyzed to implement instruction transformation for arbitrary camera protocols. An example is demonstrated by converting Protocol Pelco D/P into Protocol 54G30 using a Micro Controller Unit (MCU). High-performance hardware and a rapid software algorithm were designed for a highly efficient conversion process. By means of a serial communication assistant, a video server and a PTZ controlling keyboard, the stability and reliability of this module were finally validated.

Lin, Jiaming; Liu, Jinhua; Wang, Yanqin; Yang, Longrong

2008-03-01
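The Pelco-D side of such a conversion is well documented: 7-byte frames with a sync byte and a modulo-256 checksum. The target "54G30" format is not public, so the output side below is a purely hypothetical placeholder to show the conversion pattern:

```python
def pelco_d_frame(address, cmd1, cmd2, data1, data2):
    """Build a 7-byte Pelco-D frame: sync 0xFF, address, two command bytes,
    pan and tilt speed, and a modulo-256 checksum over bytes 2-6."""
    body = [address, cmd1, cmd2, data1, data2]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

def convert_pan_right(frame):
    """Hypothetical translation of a Pelco-D 'pan right' frame into a made-up
    ASCII target command (standing in for the non-public 54G30 format)."""
    # validate sync byte and checksum before converting
    assert frame[0] == 0xFF and sum(frame[1:6]) % 256 == frame[6]
    pan_speed = frame[4]
    return b"PAN R %03d\r" % pan_speed

frame = pelco_d_frame(0x01, 0x00, 0x02, 0x20, 0x00)  # pan right at speed 0x20
```

An MCU implementation follows the same shape: parse and checksum-verify the incoming frame, look up the command, and emit the equivalent frame in the target protocol.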

14

Vacuum compatible miniature CCD camera head  

DOEpatents

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

Conder, Alan D. (Tracy, CA)

2000-01-01

15

Some applications for amateur CCD cameras  

NASA Astrophysics Data System (ADS)

Many amateurs now have access to CCD cameras. Attached to a typical amateur telescope such devices can potentially form a very powerful scientific instrument. This paper reviews a number of areas where the suitably equipped amateur can contribute to professional programmes. It also contains a number of warnings, particularly in the field of photometry, and some new results on the use of the Hubble Guide Star Catalog for CCD based astrometry.

James, N. D.

1994-08-01

16

Security camera video authentication  

Microsoft Academic Search

The ability to authenticate images captured by a security camera, and localise any tampered areas, will increase the value of these images as evidence in a court of law. This paper outlines the challenges in security camera video authentication, and discusses the reasons why fingerprinting, a robust type of digital signature, provides a solution preferable to semi-fragile watermarking. A fingerprint

D. K. Roberts

2002-01-01

17

Jack & the Video Camera  

ERIC Educational Resources Information Center

This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

Charlan, Nathan

2010-01-01

18

Timing generator of scientific grade CCD camera and its implementation based on FPGA technology  

NASA Astrophysics Data System (ADS)

The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates various kinds of pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the synchronous coordinator of timing in the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor have been examined in detail, and the timing generator has been designed for the scientific-grade CCD camera. An FPGA is chosen as the hardware design platform, and the schedule generator is described in VHDL. The designed generator has successfully passed functional simulation with EDA software and has been fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the integration level of the system. High reliability, stability and low power consumption of the scientific-grade CCD camera system are achieved. At the same time, the period of design and experiment is sharply shortened.

Si, Guoliang; Li, Yunfei; Guo, Yongfei

2010-10-01

19

Design of the KMTNet large format CCD camera  

NASA Astrophysics Data System (ADS)

We present the design for the 340 Mpixel KMTNet CCD camera comprising four newly developed e2v CCD290-99 imaging sensors mounted to a common focal plane assembly. The high performance CCDs have 9k x 9k format, 10 micron pixels, and multiple outputs for rapid readout time. The camera Dewar is cooled using closed cycle coolers and vacuum is maintained with a cryosorption pump. The CCD controller electronics, the electronics cooling system, and the camera control software are also described.

Atwood, Bruce; O'Brien, Thomas P.; Colarosa, Christopher; Mason, Jerry; Johnson, Mark O.; Pappalardo, Dan; Derwent, Mark; Schaller, Skip; Lee, Chung-Uk; Kim, Seung-Lee; Park, Byeong-Gon; Cha, Sang-Mok; Jorden, Paul; Darby, Steve; Walker, Alex; Renshaw, Ryan

2012-09-01

20

The SXI: CCD camera onboard the NeXT mission  

E-print Network

The Soft X-ray Imager (SXI) is the X-ray CCD camera on board the NeXT mission that is to be launched around 2013. We are going to employ the CCD chips developed at Hamamatsu Photonics, K.K. We have been developing two types ...

Bautz, Marshall W.

21

Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD  

NASA Astrophysics Data System (ADS)

We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec, and small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots of up to 1,000 frames/second. In an experiment, video footage was captured at an athletics meet. Because of the high-speed shooting, even detailed movements of athletes' muscles were captured. This camera can capture clear slow-motion videos, so it enables previously impossible live footage to be imaged for various TV broadcasting programs.

Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

2007-01-01

22

Solid state, CCD-buried channel, television camera study and design  

NASA Technical Reports Server (NTRS)

An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

Hoagland, K. A.; Balopole, H.

1976-01-01

23

Printed circuit board for a CCD camera head  

DOEpatents

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

Conder, Alan D. (Tracy, CA)

2002-01-01

24

Feasibility study of CCD-based gamma camera  

NASA Astrophysics Data System (ADS)

Conventional gamma cameras, which use photomultiplier tubes (PMTs), are heavy, bulky, and expensive. In addition, their spatial resolution is low because of the geometrical limitations of PMTs. This low resolution and large size are not efficient for small animal imaging systems, which are useful in preclinical imaging applications. We have developed a small but high-spatial-resolution gamma-ray detector, based on a charge-coupled device (CCD), which is useful for developing a prototype model of a small animal gamma camera. Recently the sensitivity of CCDs has improved, and Peltier cooling helps to minimize the dark current of the CCD significantly. The enhanced sensitivity and high intrinsic resolution of CCDs enable researchers to develop a small gamma camera at low cost. In this study we used a Peltier-cooled CCD sensor which has about 70% quantum efficiency at 650 nm wavelength. A CsI(Tl) scintillator was also used to convert the gamma rays to visible light. The light photons from the scintillator were collected onto the CCD surface by a Nikorr macro lens to enhance the collection efficiency. The experimental results showed that the proposed CCD-based detection system is feasible for gamma-ray detection.

Lee, Hakjae; Jeong, Young-Jun; Yoon, Joochul; Kang, Jungwon; Lee, Sangjoon; Shin, Hyungsup; Lee, Kisung

2010-08-01

25

Low smear CCD camera for high frame rates  

SciTech Connect

A small versatile CCD camera is described. Frame readout times from one second to milliseconds can be changed continuously or at random by a programmable clock. A very large range of light intensity is thus covered without the need for a mechanical aperture control. The camera can be reset at any time and then restarted at a different frame rate after the pause of a selected duration. Sony, Inc. ICX016AL sensor was found especially suitable for clock rates up to 50 MHz. Image smear, blooming and noise are substantially lower than in earlier interline transfer CCD sensors. Critical circuits are also described and measured data presented. 5 refs., 8 figs.

Turko, B.T.; Yates, G.J.

1988-04-01

26

Low smear CCD camera for high frame rates  

SciTech Connect

A small versatile CCD camera is described. Frame readout times from one second to milliseconds can be changed continuously or at random by a programmable clock. A very large range of light intensity is thus covered without the need for a mechanical aperture control. The camera can be reset at any time and then restarted at a different frame rate after the pause of a selected duration. Sony, Inc. ICX016AL sensor was found especially suitable for clock rates up to 50 MHz. Image smear, blooming and noise are substantially lower than in earlier interline transfer CCD sensors. Critical circuits are also described and measured data presented.

Turko, B.T.; Yates, G.J.

1989-02-01

27

Portal imaging with flat-panel detector and CCD camera  

NASA Astrophysics Data System (ADS)

This paper provides a comparison of the imaging parameters of two portal imaging systems at 6 MV: a flat-panel detector and a CCD-camera based portal imaging system. Measurements were made of the signal and noise, and consequently of the signal-to-noise per pixel, as a function of the exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with the CCD-camera based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The paper also presents data on the screen's photon gain (the number of light photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuation in the screen's photon gain). Images of a Las Vegas-type aluminum contrast-detail phantom, located at the iso-center, were generated at an exposure of 1 MU. The CCD-camera based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth, while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank-order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper' type noise, which interferes severely with attempts to determine accurate estimates of the image noise.

Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

1997-07-01

28

Estimation of helicopter navigation parameters with digital CCD camera  

Microsoft Academic Search

The digital navigation concept is introduced to aircraft navigation parameter estimation for the first time. In this paper, an estimation method for navigation parameters using a digital CCD camera is presented, where the navigation parameters represent the position and attitude information of a helicopter for autonomous navigation. The proposed method is composed of relative position estimation and absolute position estimation. Multi-image space orientation

Tiejun Li; Zhe Chen; Renxiang Wang

2001-01-01

29

Observation of capillary flow in human skin during tissue compression using CCD video-microscopy.  

PubMed

Recent technological advances in CCD video cameras have made microscopes more compact and greatly improved their sensitivity. We designed a new compact capillaroscope composed of a CCD video-probe equipped with a contact-type objective lens and an illuminator. In the present study, we evaluated the usefulness of the instrument as a bedside human capillaroscope for observing capillary flow in various dermal regions. The influence of tissue compression on the dermal capillary blood flow was also investigated to confirm its utility for clinical applications. Our capillaroscope visualized the nutritional capillary blood flow in almost all parts of the skin surface. Our observations showed that a level of vertical stress similar to arterial pressure was required to stop the capillary flow. From these demonstrations, the present CCD video-probe based capillaroscope would be useful for clinical applications as a bedside human capillaroscope. PMID:21095817

Shibata, Masahiro; Yamakoshi, Takehiro; Yamakoshi, Ken-Ichi; Komeda, Takashi

2010-01-01

30

Developments in the EM-CCD camera for OGRE  

NASA Astrophysics Data System (ADS)

The Off-plane Grating Rocket Experiment (OGRE) is a sub-orbital rocket payload designed to advance the development of several emerging technologies for use on space missions. The payload consists of a high resolution soft X-ray spectrometer based around an optic made from precision cut and ground, single crystal silicon mirrors, a module of off-plane gratings and a camera array based around Electron Multiplying CCD (EM-CCD) technology. This paper gives an overview of OGRE with emphasis on the detector array; specifically this paper will address the reasons that EM-CCDs are the detector of choice and the advantages and disadvantages that this technology offers.

Tutt, James H.; McEntaffer, Randall L.; DeRoo, Casey; Schultz, Ted; Miles, Drew M.; Zhang, William; Murray, Neil J.; Holland, Andrew D.; Cash, Webster; Rogers, Thomas; O'Dell, Steve; Gaskin, Jessica; Kolodziejczak, Jeff; Evagora, Anthony M.; Holland, Karen; Colebrook, David

2014-07-01

31

Development of an all-in-one gamma camera/CCD system for safeguard verification  

NASA Astrophysics Data System (ADS)

For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm3 and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.

Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

2014-12-01

32

Design and application of TEC controller Using in CCD camera  

NASA Astrophysics Data System (ADS)

A thermoelectric cooler (TEC) is a kind of solid-state heat pump based on the Peltier effect. It is small, light and noiseless. The cooling capacity is proportional to the TEC working current when the temperature difference between the hot side and the cold side remains stable. The heating and cooling capacity can be controlled by changing the magnitude and direction of the current through the TEC, so thermoelectric cooling is well suited to cooling CCD devices. E2V's scientific image sensor CCD47-20 integrates a TEC and a CCD together, and this package simplifies the electrical design. The software and hardware of the TEC controller are designed around the CCD47-20, which is packaged with an integral solid-state Peltier cooler. For the hardware, an 80C51 MCU is used as the CPU, and an 8-bit ADC and an 8-bit DAC form the closed-loop control system. The controlled quantity is computed by sampling the temperature from a thermistor in the CCD. The TEC is driven by a MOSFET in a constant-current driving circuit. For the software, improved control precision and convergence speed are obtained by using a PID control algorithm and tuning the proportional, integral and differential coefficients. The results show that if the heat dissipation on the hot side of the TEC is good enough to keep its temperature stable, and when the sampling period is 2 seconds, the temperature control rate is 5°C/min, the temperature difference can reach -40°C, and the control precision can achieve 0.3°C. When the hot-side temperature is stable at °C, the CCD temperature can reach -°C, and the thermal noise of the CCD is less than 1 e-/pix/s. The control system suppresses the dark-current noise of the CCD and increases the SNR of the camera system.

Gan, Yu-quan; Ge, Wei; Qiao, Wei-dong; Lu, Di; Lv, Juan

2011-08-01
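The PID loop described above can be sketched as follows; the gains, sampling interval and first-order thermal plant model are invented for illustration and are not the paper's tuned values:

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One iteration of a discrete PID controller; returns (output, new_state).
    `state` carries (integral, previous_error). In the real system the output
    would be scaled into the TEC drive current by the constant-current stage."""
    integral, prev_err = state
    err = setpoint - measured
    integral += err * dt
    derivative = (err - prev_err) / dt
    out = kp * err + ki * integral + kd * derivative
    return out, (integral, err)

# toy simulation: drive a crude first-order plant from +20 C to the -40 C setpoint
temp, state = 20.0, (0.0, 0.0)
for _ in range(400):
    drive, state = pid_step(-40.0, temp, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    temp += (drive - (temp - 20.0) * 0.1) * 0.05   # invented plant model
```

On the 80C51 the same loop would run once per sampling period, with the thermistor reading as `measured` and the DAC/MOSFET stage consuming `drive`.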

33

Cooled CCD camera with tapered fibre optics for electron microscopy  

NASA Astrophysics Data System (ADS)

A CCD camera for use in electron microscopy, with 1286 × 1152, 37 μm pixels and an input aperture of 60 mm diameter, is described in this paper. An attempt is made to optimise the phosphor resolution for 120 keV electrons using Monte Carlo simulation methods. Incident electrons are converted to visible light in a polycrystalline phosphor (P43) deposited on tapered fibre optics and imaged onto a cooled slow-scan CCD which is controlled from a Sun SPARCstation, running under a Unix platform, through a VME-based drive and read-out electronics system. The camera is attached to a Philips CM12 microscope and is used mainly for recording electron-diffraction patterns from two-dimensionally ordered protein arrays. Data can be displayed rapidly on the Sun monitor and can also be transferred for further analysis to a DEC Alpha computer via Ethernet for the application of various image-processing programs.

Faruqi, A. R.; Andrews, H. N.

1997-02-01

34

CCD camera response to diffraction patterns simulating particle images.  

PubMed

We present a statistical study of CCD (or CMOS) camera response to small images. Diffraction patterns simulating particle images of a size around 2-3 pixels were experimentally generated and characterized using three-point Gaussian peak fitting, currently used in particle image velocimetry (PIV) for accurate location estimation. Based on this peak-fitting technique, the bias and RMS error between the locations of simulated and real images were accurately calculated using a homemade program. The influence of the intensity variation of the simulated particle images on the response of the CCD camera was studied. The experimental results show that the accuracy of the position determination is very good, which is promising for super-resolution PIV algorithms. Some directions for extending and improving the study are proposed in the conclusion. PMID:23842270

Stanislas, M; Abdelsalam, D G; Coudert, S

2013-07-01
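The three-point Gaussian peak fit mentioned here is a standard PIV estimator: fit ln(I) at the peak pixel and its two neighbours to a parabola along each axis. A minimal sketch on a synthetic spot (the spot size and centre are made up; for a sampled Gaussian the estimator recovers the centre exactly):

```python
import numpy as np

def gauss3_subpixel(im, i, j):
    """Sub-pixel peak location from the classic three-point Gaussian fit:
    dx = 0.5*(ln I[-1] - ln I[+1]) / (ln I[-1] - 2 ln I[0] + ln I[+1])."""
    def delta(m1, m0, p1):
        l_m1, l_0, l_p1 = np.log(m1), np.log(m0), np.log(p1)
        return 0.5 * (l_m1 - l_p1) / (l_m1 - 2 * l_0 + l_p1)
    dx = delta(im[i, j - 1], im[i, j], im[i, j + 1])
    dy = delta(im[i - 1, j], im[i, j], im[i + 1, j])
    return i + dy, j + dx

# synthetic Gaussian spot with a known sub-pixel centre
yy, xx = np.mgrid[0:7, 0:7]
x0, y0 = 3.3, 2.8
im = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 0.8 ** 2))
i, j = np.unravel_index(np.argmax(im), im.shape)
yhat, xhat = gauss3_subpixel(im, i, j)   # recovers (2.8, 3.3)
```

On real diffraction patterns with noise and camera nonlinearity, the residual between `((yhat, xhat))` and the true centre is exactly the bias/RMS error the paper measures.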

35

Initial laboratory evaluation of color video cameras  

Microsoft Academic Search

Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the

P. L. Terry

1991-01-01

36

System Synchronizes Recordings from Separated Video Cameras  

NASA Technical Reports Server (NTRS)

A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

2009-01-01

37

High frame rate CCD camera with fast optical shutter  

SciTech Connect

A high frame rate CCD camera coupled with a fast optical shutter has been designed for high repetition rate imaging applications. The design uses state-of-the-art microchannel plate image intensifier (MCPII) technology developed by Los Alamos National Laboratory to support nuclear, military, and medical research requiring high-speed imagery. Key design features include asynchronous resetting of the camera to acquire random transient images; patented real-time analog signal processing with 10-bit digitization at 40--75 MHz pixel rates; synchronized shutter exposures as short as 200 ps; and sustained continuous readout of 512 x 512 pixels per frame at 1--5 Hz rates via parallel multiport (16-port CCD) data transfer. Salient characterization/performance test data for the prototype camera are presented; temporally and spatially resolved images obtained from range-gated LADAR field testing are included; and an alternative system configuration using several cameras sequenced to deliver discrete numbers of consecutive frames at effective burst rates up to 5 GHz (accomplished by time-phasing of consecutive MCPII shutter gates without overlap) is discussed. Potential applications, including dynamic radiography and optical correlation, are also presented.

Yates, G.J.; McDonald, T.E. Jr. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley National Lab., CA (United States)

1998-09-01

38

The European Photon Imaging Camera on XMM-Newton: The pn-CCD camera  

Microsoft Academic Search

The European Photon Imaging Camera (EPIC) consortium has provided the focal plane instruments for the three X-ray mirror systems on XMM-Newton. Two cameras with a reflecting grating spectrometer in the optical path are equipped with MOS-type CCDs as focal plane detectors (Turner et al.); the telescope with the full photon flux operates the novel pn-CCD as an imaging X-ray spectrometer.

L. Strüder; U. Briel; K. Dennerl; R. Hartmann; E. Kendziorra; N. Meidinger; E. Pfeffermann; C. Reppin; B. Aschenbach; W. Bornemann; H. Bräuninger; W. Burkert; M. Elender; M. Freyberg; F. Haberl; G. Hartner; F. Heuschmann; H. Hippmann; E. Kastelic; S. Kemmer; G. Kettenring; W. Kink; N. Krause; S. Müller; A. Oppitz; W. Pietsch; M. Popp; P. Predehl; A. Read; K. H. Stephan; D. Stötter; J. Trümper; P. Holl; J. Kemmer; H. Soltau; R. Stötter; U. Weber; U. Weichert; C. von Zanthier; D. Carathanassis; G. Lutz; R. H. Richter; P. Solc; H. Böttcher; M. Kuster; R. Staubert; A. Abbey; A. Holland; M. Turner; M. Balasini; G. F. Bignami; N. La Palombara; G. Villa; W. Buttler; F. Gianini; R. Lainé; D. Lumb; P. Dhez

2001-01-01

39

Digital video camera workshop Sony VX2000  

E-print Network

… slide this control to MANUAL. You will see this indicator in your screen. The outer ring on the lens is the focus ring. Video Camera Operation: Adjusting the Shutter Speed (Sony VX2000). Slide the AUTO LOCK indicator on the screen. If not, roll the thumbwheel until you do. Video Camera Operation: Setting

40

Absolute calibration of a CCD camera with twin beams  

E-print Network

We report on the absolute calibration of a CCD camera by exploiting quantum correlation. This novel method exploits a number of spatially pairwise quantum-correlated modes produced by spontaneous parametric down-conversion. We develop a measurement model taking into account all the possible sources of losses and noise that are not related to the quantum efficiency, accounting for all the uncertainty contributions, and we reach a relative uncertainty of 0.3% in the low photon flux regime. This represents a significant step forward for the characterization of (scientific) CCDs used in the mesoscopic light regime.

I. Ruo-Berchera; A. Meda; I. P. Degiovanni; G. Brida; M. L. Rastello; M. Genovese

2014-05-07

41

Wide dynamic range video camera  

NASA Technical Reports Server (NTRS)

A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

Craig, G. D. (inventor)

1985-01-01

42

Thomson scattering stray light reduction techniques using a CCD camera  

SciTech Connect

The DIII-D Thomson scattering system has been expanded to measure divertor plasma temperatures from 1 to 500 eV and densities from 0.05 to 8 x 10^20 m^-3. To complete this system, a difficult stray light problem was overcome to allow for an accurate Rayleigh scattering density calibration. The initial stray light levels were over 500 times higher than the expected Rayleigh scattered signal. Using a CCD camera, various portions of the vessel interior were examined while the laser was fired through the vessel in air at atmospheric pressure. Image relaying, exit window tilting, entrance and exit baffle modifications, and a beam polarizer were then used to reduce the stray light to acceptable levels. The CCD camera gave prompt feedback on the effectiveness of each modification, without the need to re-establish the vacuum conditions required when using the normal avalanche photodiode (APD) detectors. Once the stray light was sufficiently reduced, the APD detectors provided the signal time history to more accurately identify the source location. We have also found that certain types of high-reflectance dielectric coatings produce 10 to 15 times more scatter than other, more conventional coatings. By using low-scatter mirror coatings and these new stray light reduction techniques, we now have more flexibility in the design of the complex Thomson scattering configurations required to probe the central core and the new radiative divertor regions of the DIII-D vessel.

Nilson, D.G.; Hill, D.N.; Evans, J.C. [and others

1996-02-01

43

Research of fiber position measurement by multi CCD cameras  

NASA Astrophysics Data System (ADS)

The parallel-controlled fiber positioner, an efficient observation system, has been used in LAMOST for four years and has been proposed for ngCFHT and the rebuilt Mayall telescope. The fiber positioner research group at USTC has designed a new-generation prototype built from close-packed modular robotic positioner mechanisms. The prototype includes about 150 fiber positioning modules plugged into a 1-meter-diameter honeycombed focal plane. Each module holds 37 fiber positioners of 12 mm diameter. Furthermore, the new system improves the accuracy from 40 um in LAMOST to 10 um in MSDESI, which poses a new challenge for measurement. A closed-loop control system is to be used in the new system. The CCD camera captures images of the fiber tip positions across the focal plane, calculates precise position information, and feeds it back to the control system. After the positioners have rotated through several loops, the accuracy of all positioners is confined to less than 10 um. We report our component development and the performance measurement program of the new measuring system using multiple CCD cameras. With stereo vision and image processing methods, we precisely measure the three-dimensional position of the fiber tip carried by each fiber positioner. Finally, we present baseline parameters for fiber positioner measurement as a reference for next-generation survey telescope design.

Zhou, Zengxiang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru; Liu, Zhigang

2014-07-01
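The closed-loop scheme in the record above hinges on locating each fiber tip in a CCD frame to subpixel precision. As a minimal illustrative sketch (not the authors' algorithm; the function, threshold, and test image are assumptions), an intensity-weighted centroid recovers a subpixel spot position:

```python
def spot_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (subpixel) of a fiber-tip spot.

    `image` is a 2D list of pixel intensities; pixels at or below
    `threshold` are ignored to suppress background noise.
    """
    sx = sy = total = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x * v
                sy += y * v
                total += v
    if total == 0:
        raise ValueError("no pixels above threshold")
    return sx / total, sy / total

# A 3x3 spot brighter toward the right: the centroid shifts sub-pixel.
img = [[0, 1, 0],
       [1, 4, 2],
       [0, 1, 0]]
cx, cy = spot_centroid(img)
```

In practice the centroid of each tip would be fed through the stereo-vision model to obtain the 3D position and then back to the positioner controller.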

44

Advanced High-Definition Video Cameras  

NASA Technical Reports Server (NTRS)

A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 x 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

Glenn, William

2007-01-01

45

A CCD CAMERA-BASED HYPERSPECTRAL IMAGING SYSTEM FOR STATIONARY AND AIRBORNE APPLICATIONS  

Technology Transfer Automated Retrieval System (TEKTRAN)

This paper describes a charge coupled device (CCD) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC comput...

46

Development of the analog ASIC for multi-channel readout X-ray CCD camera  

Microsoft Academic Search

We report on the performance of an analog application-specific integrated circuit (ASIC) developed aiming for the front-end electronics of the X-ray CCD camera system onboard the next X-ray astronomical satellite, ASTRO-H. It has four identical channels that simultaneously process the CCD signals. Distinctive capability of analog-to-digital conversion enables us to construct a CCD camera body that outputs only digital signals.

Hiroshi Nakajima; Daisuke Matsuura; Toshihiro Idehara; Naohisa Anabuki; Hiroshi Tsunemi; John P. Doty; Hirokazu Ikeda; Haruyoshi Katayama; Hisashi Kitamura; Yukio Uchihori

2011-01-01

47

Video Analysis with a Web Camera  

ERIC Educational Resources Information Center

Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

Wyrembeck, Edward P.

2009-01-01

48

Photogrammetric Applications of Immersive Video Cameras  

NASA Astrophysics Data System (ADS)

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

Kwiatek, K.; Tokarczyk, R.

2014-05-01

49

VME image acquisition and processing using standard TV CCD cameras  

NASA Astrophysics Data System (ADS)

The ESRF has released the first version of a low-cost image acquisition and processing system based on an industrial VME board and commercial CCD TV cameras. The images from standard CCIR (625 lines) or EIA (525 lines) inputs are digitised with 8-bit dynamic range and stored in a general-purpose frame buffer to be processed by the embedded firmware. They can also be transferred to a UNIX workstation through the network for display in an X11 window, or stored in a file for off-line processing with image analysis packages like KHOROS, IDL, etc. The front-end VME acquisition system can be controlled with a Graphical User Interface (GUI) based on X11/Motif running under UNIX. The first release of the system is in operation and allows one to observe and analyse beam spots around the accelerators. The system has been extended to make it possible to position a microscopic sample (less than 10 μm^2) not visible to the naked eye. This is a general-purpose image acquisition system which may have wider applications.

Epaud, F.; Verdier, P.

1994-12-01

50

The in-flight spectroscopic performance of the Swift XRT CCD camera  

E-print Network

The Swift X-ray Telescope (XRT) focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 144 eV FWHM at 6.5 keV. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Loss of temperature control motivated a laboratory program to re-optimize the CCD substrate voltage; we describe the small changes in the CCD response that would result from use of a substrate voltage of 6 V.

J. P. Osborne; A. P. Beardmore; O. Godet; A. F. Abbey; M. R. Goad; K. L. Page; A. A. Wells; L Angelini; D. N. Burrows; S. Campana; G. Chincarini; O. Citterio; G. Cusumano; P. Giommi; J. E. Hill; J. Kennea; V. La Parola; V. Mangano; T. Mineo; A. Moretti; J. A. Nousek; C. Pagani; M. Perri; P. Romano; G. Tagliaferri; F. Tamburelli

2005-10-17

51

Astronomical Station Vidojevica: In Situ Test of the Alta Apogee U42 CCD Camera  

NASA Astrophysics Data System (ADS)

Currently, the CCD camera most used by observers of the Astronomical Observatory of Belgrade is the ALTA Apogee U42. It is used for both photometric and astrometric observations. Therefore, it is very important to know different measurable parameters which describe the condition of the camera - linearity, gain, readout noise etc. In this paper, we present a thorough test of this camera.

Vince, O.

2012-12-01

52

Automatic Control of Video Surveillance Camera Sabotage  

Microsoft Academic Search

One of the main characteristics of a video surveillance system is its reliability. To this end, it is needed that the images captured by the videocameras are an accurate representation of the scene. Unfortunately, some activities can make the proper operation of the cameras fail, distorting in some way the images which are going to be processed. When these

P. Gil-Jiménez; R. López-Sastre; P. Siegmann; J. Acevedo-Rodríguez; S. Maldonado-Bascón

2007-01-01

53

Pixel-by-pixel calibration of a CCD camera based thermoreflectance thermography system with nanometer resolution  

Microsoft Academic Search

This work presents for the first time a method for calibrating pixel-by-pixel and in-situ a CCD camera-based thermoreflectance thermography system with nanometer spatial resolution. Using the thermoreflectance method to determine the temperature map of an activated device requires two steps: first, the thermal image is acquired using a CCD camera or laser-diode set-up and, second, the obtained thermal image is

Mihai G. BURZO; Pavel L. KOMAROV; Peter E. RAAD

2009-01-01

54

Auto-measuring system of aero-camera lens focus using linear CCD  

NASA Astrophysics Data System (ADS)

The automatic and accurate measurement of the focal length of an aviation camera lens is of great significance and practical value. The traditional measurement method depends on the human eye to read the scribed lines on the focal plane of a parallel light pipe (collimator) by means of a reading microscope. This method has low efficiency, and the results are easily influenced by human factors. Our method uses a linear-array solid-state image sensor instead of a reading microscope to convert the image size of a specific target into an electrical signal pulse width, and uses a computer to measure the focal length automatically. In the measurement process, the lens under test is placed in front of the objective lens of the collimator. A pair of scribed lines on the collimator's focal plane is imaged onto the focal plane of the lens under test. With the linear CCD drive circuit placed on the image plane, the linear CCD converts the one-dimensional light intensity distribution into a time series of electrical signals. After conversion, one signal path is brought directly to a video monitor by an image acquisition card for optical path adjustment and focusing. The other path is processed by an electrical circuit to obtain the pulse width corresponding to the scribed lines. The computer processes the pulse width and outputs the focal length measurement result. Practical measurements showed a relative error of about 0.10%, in good agreement with theory.

Zhang, Yu-ye; Zhao, Yu-liang; Wang, Shu-juan

2014-09-01
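The conversion described in the abstract above, from a linear-CCD pulse width to a focal length, amounts to a short chain of scalings: pulse width to pixel count, pixel count to image size, and the collimator relation f_lens = image_size * f_collimator / target_size. A hedged sketch (parameter names and the example values are illustrative, not from the paper):

```python
def focal_length_mm(pulse_width_us, pixel_clock_mhz, pixel_pitch_um,
                    collimator_focal_mm, target_size_mm):
    """Estimate a lens focal length from a linear-CCD pulse width.

    The pulse width (microseconds) spanned by the target's image is
    converted to a pixel count via the pixel clock, then to a physical
    size on the CCD, then scaled by the collimator relation.
    """
    pixels = pulse_width_us * pixel_clock_mhz          # pixels covered
    image_size_mm = pixels * pixel_pitch_um / 1000.0   # size on the CCD
    return image_size_mm * collimator_focal_mm / target_size_mm

# e.g. a 200 us pulse at a 10 MHz pixel clock with 7 um pixels:
# 2000 pixels -> 14 mm image of a 14 mm target through a 1000 mm collimator
f = focal_length_mm(200.0, 10.0, 7.0, 1000.0, 14.0)
```

The stated 0.10% relative error would then be dominated by how precisely the pulse edges (and hence the pixel count) can be located.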

55

Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras  

SciTech Connect

The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4--12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with ≥1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.

Yates, G.J. (Los Alamos National Lab., NM (United States)); Smith, G.W. (Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment); Zagarino, P.; Thomas, M.C. (EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations)

1991-01-01

56

Measuring neutron fluences and gamma/x-ray fluxes with CCD cameras  

SciTech Connect

The capability to measure bursts of neutron fluences and gamma/x-ray fluxes directly with charge coupled device (CCD) cameras while being able to distinguish between the video signals produced by these two types of radiation, even when they occur simultaneously, has been demonstrated. Volume and area measurements of transient radiation-induced pixel charge in English Electric Valve (EEV) Frame Transfer (FT) charge coupled devices (CCDs) from irradiation with pulsed neutrons (14 MeV) and Bremsstrahlung photons (4--12 MeV endpoint) are utilized to calibrate the devices as radiometric imaging sensors capable of distinguishing between the two types of ionizing radiation. Measurements indicate ~0.05 V/rad responsivity with ≥1 rad required for saturation from photon irradiation. Neutron-generated localized charge centers or "peaks" binned by area and amplitude as functions of fluence in the 10^5 to 10^7 n/cm^2 range indicate smearing over ~1 to 10% of the CCD array with charge per pixel ranging between noise and saturation levels.

Yates, G.J. [Los Alamos National Lab., NM (United States); Smith, G.W. [Ministry of Defense, Aldermaston (United Kingdom). Atomic Weapons Establishment; Zagarino, P.; Thomas, M.C. [EG and G Energy Measurements, Inc., Goleta, CA (United States). Santa Barbara Operations

1991-12-01

57

Evaluating stereoscopic CCD still video imagery for determining object height in forestry applications  

E-print Network

above ground level. Accordingly, the model was designed for rods 40 to 100 cm long to represent poles measuring 40 to 100 feet in height. Absolute orientation of each stereoscopic image was obtained by surveying each nadir, camera location and camera... and quantitative analysis: 1) full frame CCD 2) frame transfer CCD 3) interline transfer CCD The full frame CCD (Fig. 2) is the classical and earliest type of CCD imaging plane. It collects a full array of the incoming photon light source, and must use some...

Jacobs, Dennis Murray

1990-01-01

58

Initial laboratory evaluation of color video cameras, phase 2  

Microsoft Academic Search

Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video

P. L. Terry

1993-01-01

59

Multi-tasking Smart Cameras for Intelligent Video Surveillance Systems  

E-print Network

In this paper, we present a novel video surveillance system comprising passive and active cameras simultaneously. These cameras enable the video surveillance system described herein to intelligently

Qureshi, Faisal Z.

60

Development of filter exchangeable 3CCD camera for multispectral imaging acquisition  

NASA Astrophysics Data System (ADS)

There are many methods for acquiring multispectral images, but a dynamically band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mounted lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing. In addition, firmware and application software were developed. Remarkable improvements compared to a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the path of rays inside the prism when redesigning the image splitter; the dimensions of the splitter were then determined by simulation with options of BK7 glass and non-dichroic coating. These properties were considered in order to obtain full-wavelength rays on all film planes. The image splitter was verified with two line lasers of narrow waveband. The filter-exchangeable frame is designed so that bandpass filters can be swapped without changing the displacement of the image sensors on the film plane. The developed 3CCD camera was evaluated in an application detecting scab and bruises on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that need to exchange bandpass filters.

Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

2012-05-01

61

Automatic Control of Video Surveillance Camera Sabotage  

Microsoft Academic Search

One of the main characteristics of a video surveillance system is its reliability. To this end, it is needed that the images captured by the videocameras are an accurate representation of the scene. Unfortunately, some activities can make the proper operation of the cameras fail, distorting in some way the images which are going to be processed. When these activities

Pedro Gil-jiménez; Roberto Javier López-sastre; Philip Siegmann; Javier Acevedo-rodríguez; Saturnino Maldonado-bascón

2007-01-01

62

Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera  

PubMed Central

3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets. PMID:22303163

Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

2009-01-01

63

Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak  

NASA Astrophysics Data System (ADS)

A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device and is used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to offer solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during, and after the irradiations, numerous frames were taken with the camera with 5 s exposure times. Evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose rate levels (up to 2 Gy/h), the number of white pixels did not increase. We found that the origin of the white pixel generation was neutron-induced thermal hopping of electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

Náfrádi, Gábor; Kovácsik, Ákos; Pór, Gábor; Lampert, Máté; Un Nam, Yong; Zoletnik, Sándor

2015-01-01
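The white-pixel bookkeeping described above, comparing long-exposure dark frames taken before and after irradiation, can be sketched as follows (the threshold and frame values are illustrative assumptions, not the authors' data):

```python
def white_pixels(dark_frame, threshold):
    """Indices of 'white' (defect) pixels in a long-exposure dark frame."""
    return {i for i, v in enumerate(dark_frame) if v > threshold}

def new_defects(before, after, threshold=50):
    """Pixels defective after irradiation but not before."""
    return white_pixels(after, threshold) - white_pixels(before, threshold)

# Synthetic 5-pixel frames: pixel 2 was already defective beforehand.
before = [3, 5, 200, 4, 6]
after  = [4, 180, 210, 5, 90]
damaged = new_defects(before, after)   # pixels 1 and 4 turned white
```

Counting `damaged` frame by frame over an irradiation campaign gives the white-pixel growth curve the study evaluated.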

64

An RS-170 to 700 frame per second CCD camera  

SciTech Connect

A versatile new camera, the Los Alamos National Laboratory (LANL) model GY6, is described. It operates at a wide variety of frame rates, from RS-170 to 700 frames per second. The camera operates as an NTSC compatible black and white camera when operating at RS-170 rates. When used for variable high-frame rates, a simple substitution is made of the RS-170 sync/clock generator circuit card with a high speed emitter-coupled logic (ECL) circuit card.

Albright, K.L.; King, N.S.P.; Yates, G.J.; McDonald, T.E. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley Lab., CA (United States)

1993-08-01

65

Photometric Calibration of Consumer Video Cameras  

NASA Technical Reports Server (NTRS)

Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.

Suggs, Robert; Swift, Wesley, Jr.

2007-01-01

66

Automated CCD camera characterization. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics: Student research reports  

SciTech Connect

The OMEGA system uses CCD cameras for a broad range of applications. Over 100 video rate CCD cameras are used for such purposes as targeting, aligning, and monitoring areas such as the target chamber, laser bay, and viewing gallery. There are approximately 14 scientific grade CCD cameras on the system which are used to obtain precise photometric results from the laser beam as well as target diagnostics. It is very important that these scientific grade CCDs are properly characterized so that the results received from them can be evaluated appropriately. Currently characterization is a tedious process done by hand. The operator must manually operate the camera and light source simultaneously. Because more exposures means more accurate information on the camera, the characterization tests can become very lengthy affairs. Sometimes it takes an entire day to complete just a single plot. Characterization requires the testing of many aspects of the camera's operation. Such aspects include the following: variance vs. mean signal level--this should be proportional due to Poisson statistics of the incident photon flux; linearity--the ability of the CCD to produce signals proportional to the light it received; signal-to-noise ratio--the relative magnitude of the signal vs. the uncertainty in that signal; dark current--the amount of noise due to thermal generation of electrons (cooling lowers this noise contribution significantly). These tests, as well as many others, must be conducted in order to properly understand a CCD camera. The goal of this project was to construct an apparatus that could characterize a camera automatically.
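The "variance vs. mean signal level" test described above can be illustrated with a short photon-transfer sketch. This is not the report's apparatus code; the gain value and sample counts are invented for the toy example, but the underlying relation is standard: Poisson statistics make the variance proportional to the mean, and the slope yields the camera gain.

```python
import numpy as np

# Photon-transfer sketch: simulate exposures at increasing light levels,
# record mean and variance in digital numbers (DN), and recover the gain.
rng = np.random.default_rng(0)
gain_e_per_dn = 2.0                              # assumed "true" gain, e-/DN

means, variances = [], []
for exposure in np.linspace(1000, 20000, 10):    # mean signal in electrons
    electrons = rng.poisson(exposure, size=100_000)  # Poisson photon noise
    dn = electrons / gain_e_per_dn               # convert electrons to DN
    means.append(dn.mean())
    variances.append(dn.var())

# For Poisson-limited data, variance_DN ~= mean_DN / gain, so the slope of
# the variance-vs-mean line is the inverse gain.
slope, _ = np.polyfit(means, variances, 1)
estimated_gain = 1.0 / slope                     # should recover ~2.0 e-/DN
```

An automated characterization rig essentially loops this measurement over light levels, which is why hands-on operation of camera and source made the manual procedure so slow.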

Silbermann, J. [Penfield High School, NY (United States)]

1999-03-01

67

Developments in high-speed inspection using intelligent CCD cameras  

Microsoft Academic Search

Described herein is intelligent camera technology suitable for a wide range of inspection applications including webs, widgets, gauging, etc. The system is modular whereby it can use from one to fifteen intelligent cameras networked together to a personal computer, which in turn may be networked. The inspection system software is multi-threaded, autoconfigurable, and offers: friendly GUI, expert system illumination control,

Dan A. Lehotsky

1996-01-01

68

Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility  

SciTech Connect

The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

Teruya, A. T. [LLNL; Palmer, N. E. [LLNL; Schneider, M. B. [LLNL; Bell, P. M. [LLNL; Sims, G. [Spectral Instruments; Toerne, K. [Spectral Instruments; Rodenburg, K. [Spectral Instruments; Croft, M. [Spectral Instruments; Haugh, M. J. [NSTec; Charest, M. R. [NSTec; Romano, E. D. [NSTec; Jacoby, K. D. [NSTec

2013-09-01

69

The In-flight Spectroscopic Performance of the Swift XRT CCD Camera During 2006-2007  

NASA Technical Reports Server (NTRS)

The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

Godet, O.; Beardmore, A.P.; Abbey, A.F.; Osborne, J.P.; Page, K.L.; Evans, P.; Starling, R.; Wells, A.A.; Angelini, L.; Burrows, D.N.; Kennea, J.; Campana, S.; Chincarini, G.; Citterio, O.; Cusumano, G.; LaParola, V.; Mangano, V.; Mineo, T.; Giommi, P.; Perri, M.; Capalbi, M.; Tamburelli, F.

2007-01-01

70

The in-flight spectroscopic performance of the Swift XRT CCD camera during 2006-2007  

E-print Network

The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

O. Godet; A. P. Beardmore; A. F. Abbey; J. P. Osborne; K. L. Page; L. Tyler; D. N. Burrows; P. Evans; R. Starling; A. A. Wells; L. Angelini; S. Campana; G. Chincarini; O. Citterio; G. Cusumano; P. Giommi; J. E. Hill; J. Kennea; V. LaParola; V. Mangano; T. Mineo; A. Moretti; J. A. Nousek; C. Pagani; M. Perri; M. Capalbi; P. Romano; G. Tagliaferri; F. Tamburelli

2007-08-22

71

Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing  

NASA Technical Reports Server (NTRS)

The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method, workable for a Class II visible wavelength laser, is not acceptable to even consider attempting for a Class IV laser and is not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera are power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
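The "calculated amount of neutral density filters" reduces to a base-10 logarithm, since each unit of optical density attenuates by a factor of ten. A minimal sketch of that arithmetic, with purely illustrative power levels (not values from the article):

```python
import math

def required_od(beam_power_mw, safe_power_mw):
    """Minimum total optical density (OD) so that a filter stack
    attenuating by 10**(-OD) brings the beam down to a safe level."""
    return math.log10(beam_power_mw / safe_power_mw)

# Illustrative only: attenuating a 1 mW beam to 1 uW needs OD 3,
# e.g. an ND 3.0 filter or a stack of filters whose ODs sum to 3.
od = required_od(beam_power_mw=1.0, safe_power_mw=0.001)
```

Since filter ODs add when stacked, choosing the stack is just a matter of summing to at least this value; the camera-based approach removes the hazard of getting that sum wrong.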

Crooke, Julie A.

2003-01-01

72

Wilbur: A low-cost CCD camera system for MDM Observatory  

NASA Technical Reports Server (NTRS)

The recent availability of several 'off-the-shelf' components, particularly CCD control electronics from SDSU, has made it possible to put together a flexible CCD camera system at relatively low cost and effort. The authors describe Wilbur, a complete CCD camera system constructed for the Michigan-Dartmouth-MIT Observatory. The hardware consists of a Loral 2048 x 2048 CCD controlled by the SDSU electronics, an existing dewar design modified for use at MDM, a Sun Sparcstation 2 with a commercial high-speed parallel controller, and a simple custom interface between the controller and the SDSU electronics. The camera is controlled from the Sparcstation by software that provides low-level I/O in real time, collection of additional information from the telescope, and a simple command interface for use by an observer. Readout of the 2048 x 2048 array is complete in under two minutes at 5 e- read noise, and readout time can be decreased at the cost of increased noise. The system can be easily expanded to handle multiple CCD's/multiple readouts, and can control other dewars/CCD's using the same host software.

Metzger, M. R.; Luppino, G. A.; Tonry, J. L.

1992-01-01

73

Design and performance of a metrology camera with 6- and 28-million pixel CCD sensors  

NASA Astrophysics Data System (ADS)

Digital Close-Range Photogrammetry has made tremendous improvements over the last years. Many of the elements contributing to those improvements were advances in algorithms and their implementation in commercial systems. It has become customary for production personnel without photogrammetric know-how to operate such systems. Another area of major change is the use of Digital Close-Range Photogrammetry systems in machine control. These applications require totally automated systems without human supervision. While the algorithmic performance of such systems is a difficult task, other issues, such as ultra-high-resolution cameras suitable for rough environments and their response time, are critical elements for the acceptance of such systems. Cameras with much higher resolutions and better performance have been appearing on the market over the last years. Cameras with CCD-sensors containing 4k x 4k pixels are available. While these cameras offer a larger sensor, their performance is not dramatically better than that of the widely used cameras using 3k x 2k sensors; indeed, their price/performance is poorer. A new camera series offering both 3k x 2k and 7k x 4k sensors with custom optics and a high-performance CCD-sensor read-out was designed to break current accuracy barriers. Innovations in these cameras include custom optics, allowing the optics to be tuned to the application and not just general photography needs. The cameras take advantage of the full dynamic range offered by the CCD-sensor, i.e. they use a 12 bit analog-to-digital converter and 16 bit per pixel. They have an optical/mechanical design that assures extreme geometric stability of the camera. Finally, they include an 'on-board' processor to perform all image computations within the camera.

Beyer, Horst A.

1998-12-01

74

Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund  

NASA Technical Reports Server (NTRS)

The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

Hagyard, Mona J.

1992-01-01

75

Research on detecting heterogeneous fibre from cotton based on linear CCD camera  

NASA Astrophysics Data System (ADS)

Heterogeneous fibre in cotton has a great impact on cotton textile production: it degrades product quality and thereby affects the economic benefits and market competitiveness of the producer. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, advancing the quality of cotton textiles and reducing production cost, and the technology has favorable market value and development prospects. Optical detection systems have found widespread application. In this system, we use a linear CCD camera to scan the running cotton; the video signals are then fed into a computer and processed according to grayscale differences. If heterogeneous fibre is present in the cotton, the computer sends a command to drive a gas nozzle that eliminates it. In this paper, we adopt a monochrome LED array as the new detecting light source; its lamp flicker, stability of luminous intensity, lumens depreciation and useful life are all superior to those of fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, finally adopting a violet LED array as the new detecting light source. The whole hardware structure and software design are introduced in this paper.

Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

2009-07-01

76

Sports video categorizing method using camera motion parameters  

NASA Astrophysics Data System (ADS)

In this paper, we propose a content-based video categorizing method for broadcast sports videos using camera motion parameters. We define and introduce two new features in the proposed method: "camera motion extraction ratio" and "camera motion transition". Camera motion parameters in the video sequence contain very significant information for categorization of broadcast sports video, because in most sports videos camera motions are closely related to the actions in the sport, which mostly follow rules specific to each type of sport. Based on these characteristics, we design a sports video categorization algorithm for identifying six major sports types. In our algorithm, the features automatically extracted from videos are analysed statistically. The experimental results show a clear tendency and the applicability of the proposed method for sports genre identification.

Takagi, Shinichi; Hattori, Shinobu; Yokoyama, Kazumasa; Kodate, Akihisa; Tominaga, Hideyoshi

2003-06-01

77

Color Measurement of Printed Textile using CCD Cameras Harro Stokman Theo Gevers  

E-print Network

Automated visual inspection of industrial textile printing has the potential to increase ... of homogeneously colored textile patches are explained by the dichromatic reflection model. An extra clue

Gevers, Theo

78

Pixel correspondence calibration method of a 2CCD camera based on absolute phase calculation  

NASA Astrophysics Data System (ADS)

This paper presents a novel calibration method to build up pixel correspondence between the IR CCD sensor and the visible CCD sensor of a 2CCD camera by using absolute phase calculation. Vertical and horizontal sinusoidal fringe patterns are projected onto a white plate surface through the visible and infrared (IR) channels of a DLP projector. The visible and IR fringe patterns are captured by the IR sensor and visible sensor respectively. Absolute phase of each pixel at IR and visible channels is calculated by using the optimum three-fringe number selection method. The precise pixel relationship between the two channels can be determined by the obtained absolute phase data. Experimental results show the effectiveness and validity of the proposed 2CCD calibration method. Due to using continuous phase information, this method can accurately give pixel-to-pixel correspondence.
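The absolute-phase step underlying the correspondence method can be grounded in the standard phase-shifting arctangent; the paper's unwrapping uses the optimum three-fringe number selection method, which is not reproduced here. The four-step formula below is a common textbook variant, shown only for illustration:

```python
import numpy as np

def wrapped_phase(images):
    """Four-step phase shifting: images taken at phase shifts of
    0, 90, 180 and 270 degrees give the wrapped phase per pixel."""
    i0, i1, i2, i3 = images
    # (i3 - i1) = 2B sin(phi), (i0 - i2) = 2B cos(phi)
    return np.arctan2(i3 - i1, i0 - i2)

# Toy example on one pixel row with a known phase ramp phi(x) = x.
x = np.linspace(0, 2 * np.pi, 100, endpoint=False)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
imgs = [128 + 100 * np.cos(x + s) for s in shifts]   # synthetic fringes
phi = wrapped_phase(imgs)    # equals x wrapped into (-pi, pi]
```

After unwrapping, equal absolute phase values in the IR and visible images identify corresponding pixels, which is what makes the correspondence continuous rather than feature-based.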

Zhang, Zonghua; Zheng, Guoquan; Huang, Shujun

2014-11-01

79

Security camera based on a single chip solution using a sharply outlined display algorithm and variable-clock video encoder  

Microsoft Academic Search

In this paper, we have proposed a security camera system that displays high-definition images by using a sharply outlined display algorithm (SODA), which generates less hardware complexity because of a modified video encoder. While the proposed system uses a charge coupled device (CCD) with a complementary filter that may cause some problems in representing vivid color, we have been able

Joohyun Kim; Jooyoung Ha; Shinki Jeong; Hoongee Yang; Bongsoon Kang

2006-01-01

80

Video Analysis in PTZ Camera Networks From master-slave to cooperative smart cameras  

E-print Network

In modern surveillance systems, one of the major challenges in multi-camera tracking is the consistency needed. The introduction of Pan-Tilt-Zoom (PTZ) cameras brought new capabilities to surveillance networks

81

PIV camera response to high frequency signal: comparison of CCD and CMOS cameras using particle image simulation  

NASA Astrophysics Data System (ADS)

We present a quantitative comparison between FlowMaster3 CCD and Phantom V9.1 CMOS cameras’ response in the scope of application to particle image velocimetry (PIV). First, the subpixel response is characterized using a specifically designed set-up. The crosstalk between adjacent pixels for the two cameras is then estimated and compared. Then, the camera response is experimentally characterized using particle image simulation. Based on a three-point Gaussian peak fitting, the bias and RMS errors between locations of simulated and real images for the two cameras are accurately calculated using a homemade program. The results show that, although the pixel response is not perfect, the optical crosstalk between adjacent pixels stays relatively low and the accuracy of the position determination of an ideal PIV particle image is much better than expected.
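The three-point Gaussian peak fit mentioned above has a well-known closed form: fit a parabola through the logarithms of the three samples around the brightest pixel. A minimal sketch (variable names are ours, not the paper's):

```python
import numpy as np

def gauss3_peak(i_left, i_center, i_right):
    """Subpixel offset of a Gaussian peak relative to the center sample,
    from a parabolic fit to the logs of the three intensities."""
    l, c, r = np.log(i_left), np.log(i_center), np.log(i_right)
    return 0.5 * (l - r) / (l - 2.0 * c + r)

# Toy particle image: a Gaussian centered at x0 = 10.3 pixels.
x = np.arange(21)
intensity = np.exp(-((x - 10.3) ** 2) / (2 * 1.5 ** 2))
peak = int(np.argmax(intensity))                 # brightest integer pixel
x0 = peak + gauss3_peak(intensity[peak - 1],
                        intensity[peak],
                        intensity[peak + 1])     # subpixel location
```

For noise-free Gaussian samples the fit is exact, so the bias and RMS errors reported in such studies come from the sensor's real pixel response and crosstalk rather than from the estimator itself.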

Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

2014-08-01

82

The University of Hawaii Institute for Astronomy CCD camera control system  

NASA Technical Reports Server (NTRS)

The University of Hawaii Institute for Astronomy CCD Camera Control System consists of a NeXT workstation, a graphical user interface, and a fiber optics communications interface which is connected to a San Diego State University CCD controller. The UH system employs the NeXT-resident Motorola DSP 56001 as a real time hardware controller. The DSP 56001 is interfaced to the Mach-based UNIX of the NeXT workstation by DMA and multithreading. Since the SDSU controller also uses the DSP 56001, the NeXT is used as a development platform for the embedded control software. The fiber optic interface links the two DSP 56001's through their Synchronous Serial Interfaces. The user interface is based on the NeXTStep windowing system. It is easy to use and features real-time display of image data and control over all camera functions. Both Loral and Tektronix 2048 x 2048 CCD's have been driven at full readout speeds, and the system is intended to be capable of simultaneous readout of four such CCD's. The total hardware package is compact enough to be quite portable and has been used on five different telescopes on Mauna Kea. The complete CCD control system can be assembled for a very low cost. The hardware and software of the control system have proven to be quite reliable, well adapted to the needs of astronomers, and extensible to increasingly complicated control requirements.

Jim, K. T. C.; Yamada, H. T.; Luppino, G. A.; Hlivak, R. J.

1992-01-01

83

Design principles and applications of a cooled CCD camera for electron microscopy.  

PubMed

Cooled CCD cameras offer a number of advantages in recording electron microscope images with CCDs rather than film, which include: immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to film, and a great deal of our effort has been spent in designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed and in this paper we discuss the contribution of the phosphor-fibre optics system in this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator), optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two dimensional crystals of bacteriorhodopsin--from wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on an on-line basis. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high resolution images. PMID:9889815

Faruqi, A R

1998-01-01

84

Performance of a 2k CCD camera designed for electron crystallography at 400 kV.  

PubMed

We discuss the performance of a charge-coupled device (CCD) camera that has been designed for use in electron crystallographic studies of proteins. There have been many previous publications describing the characteristics and performance of CCD-based cameras in electron microscopy; here we focus on characteristics relevant to protein studies at 400 kV. The low exposure that must be used in such studies produces a very poor signal-to-noise ratio, so any loss of signal-to-noise ratio in the recording process must be avoided. Images must contain a sufficient number of molecules to allow identification of the reciprocal lattice, thus requiring a large image format. Electron diffraction patterns may contain some spots with intensity around 10(-7) times that of the central beam, so the largest possible dynamic range is helpful. Some of the characteristics we discuss are most easily measured with crystals, but the conclusions also apply for other work such as single-particle analyses. The camera has been optimized for work at 400 kV with a P43 scintillator fiber-optically coupled to a CCD with 24 microns pixels. The scintillator in this camera is thicker than generally used at lower voltages, which provides an adequate signal level but slightly degrades the resolution. Operation at 400 kV leads to a point spread function that is broader than the CCD pixel size. Images are thus binned by a factor of two to double the effective pixel size, with the resulting loss of a factor of two in the size of areas that can be recorded in a single frame. A large CCD with a 2048 x 2048 pixel array is used to compensate for this loss and provide a sufficient signal for the crystallographic image processing used in this work. Images and electron diffraction patterns recorded on the CCD are compared with data recorded on photographic film. 
While the quality of the images recorded on the CCD at the low exposures required in protein studies is not quite as good as that on film, electron diffraction data recorded on the CCD are superior to that on film. PMID:9919710

Downing, K H; Hendrickson, F M

1999-01-01

85

Automated Technology for Video Surveillance Vast numbers of surveillance cameras  

E-print Network

For example, in London, security cameras captured footage of some of the July 2005 subway bombers ... specific objects while a main camera continues to survey the larger scene. Rama Chellappa and Larry Davis

Hill, Wendell T.

86

Turning Municipal Video Surveillance Cameras Into Municipal Webcams  

Microsoft Academic Search

Increasingly, municipal administrations across the globe are operating video surveillance camera systems in public spaces, with the camera images available only to security personnel. This paper argues that it is possible and desirable to convert some municipal surveillance cameras into municipal webcams, with the images available not only to security personnel but also to everyone using the Internet. The authors

Susan O'Donnell; Mike Richard

2006-01-01

87

Demonstrations of Optical Spectra with a Video Camera  

ERIC Educational Resources Information Center

The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera

Kraftmakher, Yaakov

2012-01-01

88

Development of CCD cameras for soft x-ray imaging at the National Ignition Facility  

NASA Astrophysics Data System (ADS)

The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the "soft" channel and 3 - 5 keV for the "hard" channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

2013-09-01

89

Calibration of CCD-Cameras for Machine Vision and Robotics  

NASA Astrophysics Data System (ADS)

The basic mathematical formulation of a general solution to the extraction of three-dimensional information from images and camera calibration is presented. Standard photogrammetric algorithms for the least squares estimation of relevant parameters are outlined together with terms and principal aspects of calibration and quality assessment. A second generation prototype system for "Real-Time Photogrammetry" developed as part of the "Digital Photogrammetric Station" of the Institute of Geodesy and Photogrammetry of ETH-Zurich is described. Two calibration tests with three-dimensional testfields and independently determined reference coordinates for quality assessment are presented. In a laboratory calibration with off-the-shelf equipment an accuracy of 1/20th and 1/50th of the pixel spacing in the row and column directions respectively has been achieved. Problems of the hardware used in the test are outlined. The calibration of a vision system of a ping-pong playing high-speed robot led to an improvement of the accuracy of object coordinates by a factor of over 8. The vision system is tracking table-tennis balls at a 50 Hz rate.

Beyer, Horst A.

1990-02-01

90

A Hardware-Based Surveillance Video Camera Watermark  

Microsoft Academic Search

This paper arose out of a need for marking surveillance video in a simple manner that would allow the integrity of that video against later manipulation to be assured from the camera to the court room. We present a novel use of a video watermarking system. The system is based on an array construction method using seed sequences that allows

Ron van Schyndel

2010-01-01

91

Scintillator-CCD camera system light output response to dosimetry parameters for proton beam range measurement  

NASA Astrophysics Data System (ADS)

The purpose of this study is to investigate the luminescence light output response in a plastic scintillator irradiated by a 67.5 MeV proton beam using various dosimetry parameters. The relationship of the visible scintillator light with the beam current or dose rate, aperture size and the thickness of water in the water-column was studied. The images captured on a CCD camera system were used to determine optimal dosimetry parameters for measuring the range of a clinical proton beam. The method was developed as a simple quality assurance tool to measure the range of the proton beam and compare it to (a) measurements using two segmented ionization chambers and water column between them, and (b) with an ionization chamber (IC-18) measurements in water. We used a block of plastic scintillator that measured 5×5×5 cm3 to record visible light generated by a 67.5 MeV proton beam. A high-definition digital video camera Moticam 2300 connected to a PC via USB 2.0 communication channel was used to record images of scintillation luminescence. The brightness of the visible light was measured while changing beam current and aperture size. The results were analyzed to obtain the range and were compared with the Bragg peak measurements with an ionization chamber. The luminescence light from the scintillator increased linearly with the increase of proton beam current. The light output also increased linearly with aperture size. The relationship between the proton range in the scintillator and the thickness of the water column showed good linearity with a precision of 0.33 mm (SD) in proton range measurement. For the 67.5 MeV proton beam utilized, the optimal parameters for scintillator light output response were found to be 15 nA (16 Gy/min) and an aperture size of 15 mm with image integration time of 100 ms. The Bragg peak depth brightness distribution was compared with the depth dose distribution from ionization chamber measurements and good agreement was observed. 
The peak/plateau ratio observed for the scintillator was found to be 2.21 as compared to the ionization chamber measurement of 3.01. The response of a scintillator block-CCD camera system in a 67.5 MeV proton beam was investigated. A linear response was seen between light output and beam current as well as aperture size. The relation between the thickness of water in the water column and the measured range also showed linearity. The results from the scintillator response were used to develop a simple approach to measuring the range and the Bragg peak of a proton beam by recording the visible light from a scintillator block, with an uncertainty of less than 0.33 mm. Optimal dosimetry parameters for our proton beam were evaluated. It is observed that this method can be used to confirm the range of a proton beam during daily treatment and will be useful as a daily QA measurement for proton beam therapy.
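The range-measurement idea (locate the Bragg peak in the depth-brightness profile recorded by the camera) can be sketched as follows; the parabolic refinement step and the toy profile are our assumptions for illustration, not the paper's procedure:

```python
import numpy as np

def bragg_peak_depth(depths_mm, brightness):
    """Depth of maximum scintillation light output, refined to
    sub-sample precision with a three-point parabolic fit."""
    i = int(np.argmax(brightness))
    if 0 < i < len(brightness) - 1:
        y0, y1, y2 = brightness[i - 1], brightness[i], brightness[i + 1]
        step = depths_mm[1] - depths_mm[0]
        # Vertex of the parabola through the three samples around the max.
        return depths_mm[i] + 0.5 * step * (y0 - y2) / (y0 - 2 * y1 + y2)
    return depths_mm[i]

# Toy depth-brightness profile peaking at 31 mm, sampled every 0.5 mm.
d = np.arange(0.0, 50.0, 0.5)
profile = np.exp(-((d - 31.0) ** 2) / 8.0)
depth = bragg_peak_depth(d, profile)
```

In a QA workflow, this depth would be extracted from each day's camera image and compared against the expected range, flagging any drift in beam energy or water-column setup.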

Daftari, Inder K.; Castaneda, Carlos M.; Essert, Timothy; Phillips, Theodore L.; Mishra, Kavita K.

2012-09-01
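The range measurement described in this record reduces to extracting a depth-brightness profile from the camera image and locating the Bragg peak. The sketch below is illustrative only (not the authors' code); the function name, the row-wise beam orientation, and the distal fall-off fraction are all assumptions.

```python
import numpy as np

def proton_range_from_image(img, mm_per_px, frac=0.8):
    """Estimate proton range from a scintillator luminescence image.

    img: 2D array with the beam entering along axis 0 (rows = depth).
    The depth-brightness profile is the row-wise sum; the Bragg peak is
    its maximum, and the range is quoted here at the distal point where
    brightness falls to a given fraction of the peak (an assumed convention).
    """
    profile = img.sum(axis=1).astype(float)
    peak = int(np.argmax(profile))
    level = frac * profile[peak]
    # walk distally from the peak to the fractional fall-off point
    distal = peak
    while distal + 1 < len(profile) and profile[distal + 1] >= level:
        distal += 1
    return peak * mm_per_px, distal * mm_per_px

# synthetic Bragg-like profile: slow plateau, sharp peak at row 80
depth = np.arange(120)
profile = 0.3 + 0.7 * np.exp(-((depth - 80) / 6.0) ** 2)
img = np.tile(profile[:, None], (1, 50))
peak_mm, range_mm = proton_range_from_image(img, mm_per_px=0.25)
```

With a real image, `mm_per_px` would come from a geometric calibration of the camera against the scintillator block.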

92

Subpixel characterization of a PIV-CCD camera using a laser spot  

NASA Astrophysics Data System (ADS)

We present a simple method for charge-coupled device (CCD, or CMOS) sensor characterization using a subpixel laser spot. The method is used to measure the variations in sensitivity of 2D sensor arrays equipped with a microlens array. The experimental results show that the sensitivity varies with position on the CCD, and that the error of the pixel optical center with respect to the geometrical center is on the order of one-tenth of a pixel. The observed disparity is attributed to the coherence of the laser light, which generates interference at the scale of a pixel. This may have significant consequences for coherent-light imaging with CCD (or CMOS) sensors, such as particle image velocimetry.

Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

2014-08-01
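Characterizing individual pixels with a subpixel spot requires locating that spot to a small fraction of a pixel. A common approach (an assumption here, not taken from the paper) is an intensity-weighted centroid:

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of a laser spot, in pixel units.
    A standard way to localize a small spot to a fraction of a pixel,
    as needed when probing per-pixel sensitivity or optical-center error."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# synthetic Gaussian spot with sub-pixel center at (12.3, 20.7)
ys, xs = np.indices((32, 48))
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 20.7) ** 2) / (2 * 2.0 ** 2))
cy, cx = subpixel_centroid(spot)
```

Note that with coherent illumination, the interference effects the abstract describes would bias such a centroid, which is precisely what makes this kind of characterization necessary.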

93

Grayscale adjustment method for CCD mosaic camera in surface defect detection system  

NASA Astrophysics Data System (ADS)

Based on microscopic imaging and sub-aperture stitching technology, the surface defect detection system realizes automatic quantitative detection of submicron defects on the macroscopic surfaces of optical components, and solves quality control problems for the numerous large-aperture precision optical components in ICF (Inertial Confinement Fusion) systems. In order to improve testing efficiency and reduce the number of sub-aperture images, a large-format CCD (charge-coupled device) camera is employed to expand the field of view of the system. Large-format CCD cameras are usually mosaicked from multi-channel CCD chips, but differences among the intensity-grayscale response functions of the channels lead to an obvious gray gap among different regions of the image. This can cause shortening and fracture of defects during image binarization, and thereby lead to the misjudgment of defects. This paper analyzes the gray characteristics of such unbalanced images, establishes a gray matrix model of the image pixels, and proposes a new method to correct the gray gap of the CCD self-adaptively. Firstly, the background threshold is set by solving for the inflection point of the pixel-level curve in the gray histogram of the original image, and the background of the image is obtained; secondly, pixels are sampled from the background and used to calculate the gray gap among the different regions of the image; ultimately, the gray gap is compensated. With this method, an experiment was carried out to adjust 96 dual-channel images from testing a fused silica sample with an aperture of 180 mm × 120 mm. The results show that the gray gap between the images on different channels is reduced from 3.64 to 0.70 grayscale levels on average. The method can also be applied to other CCD mosaic cameras.

Yan, Lu; Yang, Yongying; Wang, Xiaodan; Wang, Shitong; Cao, Pin; Li, Lu; Liu, Dong

2014-09-01
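The compensation steps in this record (threshold the background, sample background pixels per channel region, compute and subtract the gray gap) can be sketched as follows. This is a simplified stand-in: the paper derives the background threshold from the histogram's inflection point, whereas here it is passed in, and the function names are illustrative.

```python
import numpy as np

def balance_dual_channel(img, split_col, bg_threshold):
    """Equalize the gray levels of a dual-channel mosaic image.

    Pixels at or below bg_threshold are treated as background. The mean
    background level of each channel's region is computed, and the right
    half is offset so both halves share the same background level.
    """
    img = np.asarray(img, dtype=float)
    left, right = img[:, :split_col], img[:, split_col:]
    gap = (right[right <= bg_threshold].mean()
           - left[left <= bg_threshold].mean())
    out = img.copy()
    out[:, split_col:] -= gap        # compensate the gray gap
    return out, gap

# synthetic image: background of 40 on the left channel, 43.6 on the
# right (a 3.6-grayscale gap), plus a few bright "defect" pixels
img = np.full((64, 128), 40.0)
img[:, 64:] = 43.6
img[10, 30] = img[20, 100] = 200.0   # defects, above the threshold
balanced, gap = balance_dual_channel(img, split_col=64, bg_threshold=100)
```

Excluding above-threshold pixels keeps the defects themselves from biasing the background estimate, which is why the thresholding step comes first.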

94

The development of a high-speed 100 fps CCD camera  

SciTech Connect

This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512×512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic faceplate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper describes the electronics, software, and characterizations that have been performed using both visible and x-ray photons.

Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, Chuande; Rodricks, B. [Argonne National Lab., IL (United States)]; Gelbart, A. [Rochester Institute of Technology, Rochester, NY (United States)]

1996-09-01

95

Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera  

NASA Technical Reports Server (NTRS)

Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the nuclear-equivalent behavior of the system. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a charge-coupled device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.

Stanojev, B. J.; Houts, M.

2004-01-01

96

Metrology for laser-structured microdevices by CCD-camera-based vision systems  

NASA Astrophysics Data System (ADS)

Recent developments in the area of micromachining and microfabrication are accelerating commercial awareness of microstructures. Product applications ranging from automotive and medical devices to industrial, chemical, and consumer products show the necessity of adequate fabrication methods for microstructures, and these fabrication methods include high-resolution measurement technologies. Images of the machined area, recorded via videography by a CCD-camera-based computer vision system, are evaluated to obtain the lateral dimensions of the microstructured devices. Height measurement is performed by automatically focusing on two different levels of the workpiece. The achieved accuracy of the measurement data is evaluated. During structuring of microdevices, an autofocus system is used to control the laser removal process so as to obtain the desired geometry. The mathematical algorithms used by the vision system to guide the focus of the CCD camera are discussed. The designed measurement system is tested by microstructuring hard metals and ceramics with short-pulse laser radiation.

Ostlender, Andreas; Puetz, Udo; Kreutz, Ernst-Wolfgang

2000-08-01

97

The CFHT MegaCam 40 CCDs camera: cryogenic design and CCD integration  

NASA Astrophysics Data System (ADS)

MegaCam is an imaging camera with a 1 square degree field of view for the new prime focus of the 3.6 meter Canada-France-Hawaii Telescope. In building the MegaCam mosaic we encountered unprecedented challenges from both the large size of each CCD device (2K × 4.5K, with 13.5 micron square pixels) and the large size of the mosaic, in which 40 devices have been assembled in a nearly 4-side-buttable manner on a cold plate. The CCD mosaic flatness of ±16 μm has been optically checked at the nominal operating temperature. The CCD mosaic is cooled to 153 K by a cryogenic unit, a closed-cycle pulse tube delivering 90 W of cooling power at 140 K. A cold thermal capacity allows a slow warm-up during cooling shutdowns, and thermal dispatching leads to a temperature uniformity better than 3 K over the whole mosaic. The camera cryostat is designed to give easy access to the CCDs. The vacuum needed to avoid CCD contamination led us to use low-outgassing materials in the cryostat. The instrument was delivered to the observatory on June 10, 2002, and first light is scheduled for October 2002.

Aune, Stephan; Boulade, Olivier; Charlot, Xavier; Abbon, P.; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Desforge, D.; Deschamps, H.; Eppellé, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J. L.; Roussé, Jean Y.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.

2003-03-01

98

Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples  

PubMed Central

Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

2014-01-01

99

Development of a portable 3CCD camera system for multispectral imaging of biological samples.  

PubMed

Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

2014-01-01

100

New event analysis method with the x-ray CCD camera XIS for ASTRO-E  

Microsoft Academic Search

We introduce a new method of event analysis for the x-ray CCD camera (XIS) on board the next Japanese x-ray astronomical satellite, Astro-E. In the ordinary method we used 'grade' classification: x-ray events are distinguished from background events by referring to the shape and extent of the charge-split pixels, because non-x-ray events spread over many pixels. However, at

Hiroshi Murakami; Takeshi G. Tsuru; Hisamitsu Awaki; Masaaki Sakano; Mamiko Nishiuchi; Kenji Hamaguchi; Katsuji Koyama; Hiroshi Tsunemi

1999-01-01

101

Fluorescent magnetic inspection system used by special CCD cameras to identify axles of railway vehicles  

Microsoft Academic Search

This paper summarizes the achievements of research on a digital image sampling and processing system for automating the fluorescent magnetic inspection of railway vehicle axles. The hardware of the system consists of 3 line-array CCD cameras, a multiplexed A/D converter, and an advanced microcomputer; its software has the functions of waveform display, real-time

Xiang Yu; Xiulan Liu; Juheng Xing; Jianbin Gao; Yuhua Yin; Yueshan Pan; Fusheng Bian; Yijie Zhang; Yongzhong Xu; Tai'an Chang

1994-01-01

102

Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera  

NASA Astrophysics Data System (ADS)

Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. 
A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the energy and spatial resolution by a factor of 1.5.

Korevaar, Marc A. N.; Goorden, Marlies C.; Beekman, Freek J.

2013-04-01

103

Modeling of the over-exposed pixel area of CCD cameras caused by laser dazzling  

NASA Astrophysics Data System (ADS)

A simple model has been developed and implemented in Matlab code to predict the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the point spread function (PSF) of the optics used, the integration time of the camera, and camera sensor specifications such as pixel size, quantum efficiency, and full well capacity. Effects of the read-out circuit of the camera are not incorporated. The model was evaluated with laser dazzle experiments on CCD cameras using a 532 nm CW laser dazzler and shows good agreement. For relatively low laser irradiance, the model predicts the over-exposed laser spot area quite accurately and shows the cube-root dependency of spot diameter on laser irradiance caused by the PSF, as demonstrated before for IR cameras. For higher laser power levels the laser-induced spot diameter increases more rapidly than predicted, which can probably be attributed to scatter effects in the camera. Some first attempts to model scatter contributions, using a simple scatter power function f(?), show good resemblance to the experiments. With this model, a tool is available to assess the performance of observation sensor systems while they are subjected to laser countermeasures.

Benoist, Koen W.; Schleijpen, Ric H. M. A.

2014-10-01
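A minimal sketch of the saturated-area prediction follows, assuming an idealized radially symmetric PSF wing. The r⁻³ wing and all names are assumptions (not from the paper), chosen because a wing falling off as r⁻³ reproduces the cube-root dependence of spot diameter on irradiance that the abstract mentions.

```python
import numpy as np

def dazzle_spot_diameter(irradiance, t_int, full_well, psf_wing):
    """Diameter (in pixels) of the saturated area for a given laser
    irradiance. A pixel at radius r from the spot center saturates when
    the collected signal irradiance * psf_wing(r) * t_int exceeds the
    full-well capacity; the saturated radius is the largest r where it does.
    """
    r = np.arange(1, 2000)
    signal = irradiance * psf_wing(r) * t_int
    saturated = r[signal >= full_well]
    return 2 * saturated.max() if saturated.size else 0

# an r^-3 PSF wing: the saturated diameter then grows as the cube root
# of laser power, as reported for the low-irradiance regime
wing = lambda r: r ** -3.0
d1 = dazzle_spot_diameter(1e6, 1e-3, 1.0, wing)   # arbitrary units
d8 = dazzle_spot_diameter(8e6, 1e-3, 1.0, wing)   # 8x the irradiance
```

With these toy numbers, multiplying the irradiance by 8 doubles the saturated diameter, the 1/3-power scaling; scatter effects at high power would add area on top of this.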

104

A range-resolved bistatic lidar using a high-sensitive CCD-camera  

NASA Technical Reports Server (NTRS)

Until now, monostatic lidar systems have mainly been utilized in the field of lidar measurements of the atmosphere. We propose here a range-resolved bistatic lidar system using a high-sensitivity cooled charge-coupled device (CCD) camera. This system can measure the three-dimensional distributions of aerosol, atmospheric density, and cloud by processing image data of the laser beam trajectory obtained with the CCD camera. The system also allows the use of both continuous-wave (CW) and pulsed lasers. The measurement scheme of this bistatic lidar is shown: a laser beam is emitted vertically, and the image of its trajectory is taken with a remote high-sensitivity CCD detector using an interference filter and a camera lens. The specifications of the bistatic lidar system used in the experiments are given. The preliminary experimental results of our range-resolved bistatic lidar system suggest potential applications in the field of lidar measurements of the atmosphere.

Yamaguchi, K.; Nomura, A.; Saito, Y.; Kano, T.

1992-01-01

105

Online inspection of thermo-chemical heat treatment processes with CCD camera system  

NASA Astrophysics Data System (ADS)

Plasma nitriding belongs to the group of thermo-chemical surface heat treatments. During this process, nitrogen is dissociated in the plasma and diffuses into the surface of the material, increasing hardness, wear resistance, endurance strength, and/or corrosion resistance. This paper presents a new inspection system based on a CCD camera for monitoring such heat treatment processes (PACVD, plasma assisted chemical vapour deposition). Treatment temperatures are commonly within the range of 350 °C to 600 °C. A near-infrared enhanced CCD camera equipped with specifically chosen spectral filters is used to measure spectral emittances during the surface modification; in particular, the spectral operating range of 950 nm to 1150 nm of the silicon CCD is utilized. The measurement system is based on the principles of ratio pyrometry (dual-band method) known from non-contact temperature measurement, in which two images of the same scene, each taken in a slightly different spectral band, are used to determine the spectral light characteristics. This results in an improved relative sensitivity to spectral changes (i.e. deviations from the gray-body hypothesis) during the surface modification.

Zauner, Gerald; Darilion, Gerald; Pree, Ronald; Heim, Daniel; Hendorfer, G.

2005-11-01
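The ratio-pyrometry (dual-band) principle used in this record can be illustrated with a short sketch. Under the Wien approximation and a gray-body assumption, the ratio of radiances in two narrow bands fixes the temperature independently of the (common) emissivity; the specific band wavelengths and function names below are illustrative, not taken from the paper.

```python
import math

C2 = 1.4388e-2  # second radiation constant c2, in m*K

def wien_radiance(lam, T):
    """Wien approximation to blackbody spectral radiance (arbitrary units):
    L ~ lam^-5 * exp(-c2 / (lam * T))."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(r, lam1, lam2):
    """Temperature from the two-band radiance ratio r = L(lam1)/L(lam2),
    assuming a gray body (equal emissivity in both bands); obtained by
    solving the Wien ratio for T."""
    return C2 * (1 / lam2 - 1 / lam1) / (math.log(r) + 5 * math.log(lam1 / lam2))

# round-trip check at 800 K with two bands inside the 950-1150 nm range
T_true = 800.0
r = wien_radiance(950e-9, T_true) / wien_radiance(1150e-9, T_true)
T_est = ratio_temperature(r, 950e-9, 1150e-9)
```

Because the emissivity cancels in the ratio, departures from the recovered temperature trend flag exactly the gray-body deviations the abstract says the system is sensitive to.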

106

Origins of the instrumental background of the x-ray CCD camera in space studied with Monte Carlo simulation  

Microsoft Academic Search

We report on the origins of the instrumental background of the x-ray CCD camera in space, obtained from Monte Carlo simulations with GEANT4. In the space environment, the CCD detects many non-x-ray events, which are produced by the interactions of high-energy particles with the materials surrounding the CCD. Most of these events are rejected through the analysis of the charge split

Hiroshi Murakami; Masaki Kitsunezuka; Masanobu Ozaki; Tadayasu Dotani; Takayasu Anada

2006-01-01

107

Data acquisition system based on the Nios II for a CCD camera  

NASA Astrophysics Data System (ADS)

The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol was implemented at NAOC. It is composed of an interface board, with an Altera FPGA, 32 MB of SDRAM, and other accessory devices integrated on it, and two packages of control software used in the Nios II embedded processor and on the remote host PC, respectively. The system is used to replace a 7200-series image acquisition card inserted in a control and data acquisition PC, to download commands to an existing CCD camera, and to collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built in the Nios II IDE kit, and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication, and UDP data uploading.

Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

2006-06-01

108

Video camera system for locating bullet holes in targets at a ballistics tunnel  

NASA Technical Reports Server (NTRS)

A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

Burner, A. W.; Rummler, D. R.; Goad, W. K.

1990-01-01

109

Are Video Cameras the Key to School Safety?  

ERIC Educational Resources Information Center

Describes one high school's use of video cameras as a preventive tool in stemming theft and violent episodes within schools. The top 10 design tips for preventing crime on campus are highlighted. (GR)

Maranzano, Chuck

1998-01-01

110

DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

111

DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

112

A new algorithm for automatic white balance based on CCD camera  

NASA Astrophysics Data System (ADS)

Auto white balance plays a key role in a digital camera system and determines image quality to a large extent. If white balance is not considered in the development of a CCD camera, images captured under different color temperatures will show color casts. A new, effective automatic white balance algorithm for digital cameras is proposed in this paper. With a new color temperature estimation method based on the extraction of both skin and white regions, the algorithm can find more suitable pixels for calculating the averaged chromatic aberration, improving the precision of the estimated color temperature. To some extent, the algorithm also solves the problem that classical automatic white balance algorithms fail to estimate the color temperature of images that contain no white regions.

Xu, Zhaohui; Li, Han; Tian, Yan; Jiao, Guohua

2009-10-01
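As a rough illustration of white-region-based correction (a much-simplified stand-in for the skin-and-white-region algorithm in this record, with all names hypothetical): the illuminant is estimated from the mean RGB of candidate near-white pixels, and per-channel gains are applied to make that region neutral.

```python
import numpy as np

def white_patch_awb(img, candidate_mask):
    """Simple white-balance sketch: estimate the illuminant color from
    the mean RGB of candidate near-white pixels, then scale the channels
    so that region becomes neutral (equal R, G, B)."""
    img = np.asarray(img, dtype=float)
    illum = img[candidate_mask].mean(axis=0)   # per-channel mean of candidates
    gains = illum.mean() / illum               # pull the illuminant to gray
    return np.clip(img * gains, 0, 255)

# synthetic warm-cast image: a "white" patch recorded as (220, 200, 160)
img = np.zeros((8, 8, 3))
img[:] = (120, 100, 80)
img[2:5, 2:5] = (220, 200, 160)
mask = np.all(img > 150, axis=-1)              # crude near-white candidate mask
out = white_patch_awb(img, mask)
```

The failure mode the abstract targets is visible here: if `mask` selects no pixels (no white region), this naive estimator breaks down, which is what motivates also using skin regions.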

113

Camcorder 101: Buying and Using Video Cameras.  

ERIC Educational Resources Information Center

Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

Catron, Louis E.

1991-01-01

114

800 x 800 charge-coupled device /CCD/ camera for the Galileo Jupiter Orbiter mission  

NASA Technical Reports Server (NTRS)

During January 1982 the NASA space transportation system will launch a Galileo spacecraft composed of an orbiting bus and an atmospheric entry probe to arrive at the planet Jupiter in July 1985. A prime element of the orbiter's scientific instrument payload will be a new generation slow-scan planetary imaging system based on a newly developed 800 x 800 charge-coupled device (CCD) image sensor. Following Jupiter orbit insertion, the single, narrow-angle, CCD camera, designated the Solid State Imaging (SSI) Subsystem, will operate for 20 months as the orbiter makes repeated encounters with Jupiter and its Galilean Satellites. During this period the SSI will acquire 40,000 images of Jupiter's atmosphere and the surfaces of the Galilean Satellites. This paper describes the SSI, its operational modes, and science objectives.

Clary, M. C.; Klaasen, K. P.; Snyder, L. M.; Wang, P. K.

1979-01-01

115

CCD camera-based analysis of thin film growth in industrial PACVD processes  

NASA Astrophysics Data System (ADS)

In this paper we present a method for the characterization of (semi-transparent) thin film growth during PACVD (plasma assisted chemical vapour deposition) processes, based on the analysis of thermal radiation by means of near-infrared imaging. Due to interference effects during thin film growth, characteristic emissivity signal variations can be observed which allow very detailed spatio-temporal analysis of growth characteristics (e.g. relative growth rates). We use a standard CCD camera with a near-infrared band-pass filter (center wavelength 1030 nm, FWHM 10 nm) as a thermal imaging device. The spectral sensitivity of a Si-CCD sensor at 1 μm is sufficient to allow the imaging of thermal radiation at temperatures above approximately 400 °C, whereas light emission from plasma discharges (which mainly occurs in the visible range of the electromagnetic spectrum) barely affects the image formation.

Zauner, G.; Schulte, T.; Forsich, C.; Heim, Daniel

2013-04-01

116

Review of intelligent video surveillance with single camera  

NASA Astrophysics Data System (ADS)

Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of- the-art techniques in video surveillance system with single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements in real-world scenario, and proposes potential solutions to reduce this gap.

Liu, Ying; Fan, Jiu-lun; Wang, DianWei

2012-01-01

117

Source video camera identification for multiply compressed videos originating from YouTube  

Microsoft Academic Search

The Photo Response Non-Uniformity is a unique sensor noise pattern that is present in each image or video acquired with a digital camera. In this work a wavelet-based technique used to extract these patterns from digital images is applied to compressed low resolution videos originating mainly from webcams. After recording these videos with a variety of codec and resolution settings,

Wiger van Houten; Zeno Geradts

2009-01-01

118

Digital imaging microscopy: the marriage of spectroscopy and the solid state CCD camera  

NASA Astrophysics Data System (ADS)

Biological samples have been imaged using microscopes equipped with slow-scan CCD cameras. Examples are presented of studies based on the detection of light emission signals in the form of fluorescence and phosphorescence. They include applications in the field of cell biology: (a) replication and topology of mammalian cell nuclei; (b) cytogenetic analysis of human metaphase chromosomes; and (c) time-resolved measurements of DNA-binding dyes in cells and on isolated chromosomes, as well as of mammalian cell surface antigens, using the phosphorescence of acridine orange and fluorescence resonance energy transfer of labeled lectins, respectively.

Jovin, Thomas M.; Arndt-Jovin, Donna J.

1991-12-01

119

Controlled Impact Demonstration (CID) tail camera video  

NASA Technical Reports Server (NTRS)

The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

1984-01-01

120

A rehabilitation training system with double-CCD camera and automatic spatial positioning technique  

NASA Astrophysics Data System (ADS)

This study aimed to develop a computer-game-based rehabilitation training system with integrated machine vision. The main function of the system is to allow users to perform hand grasp-and-place movements through machine vision integration. Images are captured by a double-CCD camera and then positioned on a large screen. After defining the right, left, upper, and lower boundaries of the captured images, an automatic spatial positioning technique is employed to obtain their correlation functions, and lookup tables are defined for the cameras. The system provides rehabilitation courses and games that let users exercise grasp-and-place movements in order to improve their upper limb movement control, trigger trunk control, and support balance training.

Lin, Chern-Sheng; Wei, Tzu-Chi; Lu, An-Tsung; Hung, San-Shan; Chen, Wei-Lung; Chang, Chia-Chang

2011-03-01

121

Striping Noise Removal of Images Acquired by Cbers 2 CCD Camera Sensor  

NASA Astrophysics Data System (ADS)

The CCD Camera is a multi-spectral sensor carried by the CBERS 2 satellite; its imaging technique is push-broom. In images acquired by the CCD Camera, vertical striping noise can be seen, due to detector mismatch, inter-detector variability, improper calibration of the detectors, and low signal-to-noise ratio. These stripes are more pronounced in images of homogeneous surfaces processed at level 2, and their existence makes interpreting the data and extracting information from the images difficult. In this work, a spatial moment matching method is proposed to correct these images. In this method, statistical moments such as the mean and standard deviation of the columns in each band are used to match the statistics of the detector array to reference values. After removal of this noise, some periodic diagonal stripes remain in the image which cannot be removed by the aforementioned method; therefore, a frequency-domain Butterworth notch filter was applied to remove them. Finally, image statistical moments such as the mean and standard deviation were used to evaluate the results. The study proves the effectiveness of the method in noise removal.

Amraei, E.; Mobasheri, M. R.

2014-10-01

122

Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras  

NASA Technical Reports Server (NTRS)

The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument currently being developed by NASA's Marshall Space Flight Center (MSFC), the National Astronomical Observatory of Japan (NAOJ), and other partners. The goal of this instrument is to observe and detect the Hanle effect in the scattered Lyman-alpha UV (121.6 nm) light emitted by the Sun's chromosphere. The polarized spectrum imaged by the CCD cameras will capture information about the local magnetic field, allowing for measurements of magnetic strength and structure. In order to make accurate measurements of this effect, the performance characteristics of the three on-board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include quantum efficiency, gain, dark current, read noise, and linearity, each of which must meet predetermined requirements in order to achieve satisfactory performance for the mission. The cameras must be able to operate with a gain of 2.0 +/- 0.5 e-/DN, a read noise level less than 25 e-, a dark current level less than 10 e-/pixel/s, and a residual non-linearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high-vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.
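Gain and read-noise figures like those quoted above are commonly measured with a photon-transfer test on paired flat-field and bias frames. A minimal sketch of such an estimate (a generic method, not the CLASP test procedure itself):

```python
import numpy as np

def photon_transfer_gain(flat1, flat2, bias1, bias2):
    """Estimate gain (e-/DN) and read noise (e- rms) from two matched
    flat-field frames and two bias frames. Differencing paired frames
    cancels fixed-pattern noise, leaving shot and read noise."""
    flat_signal = 0.5 * (flat1.mean() + flat2.mean()) \
                - 0.5 * (bias1.mean() + bias2.mean())   # mean signal, DN
    flat_var = np.var(flat1 - flat2) / 2.0   # shot + read variance, DN^2
    read_var = np.var(bias1 - bias2) / 2.0   # read variance, DN^2
    gain = flat_signal / (flat_var - read_var)   # e-/DN (Poisson statistics)
    read_noise_e = gain * np.sqrt(read_var)      # e- rms
    return gain, read_noise_e
```

The key relation is that for Poisson-limited light, the shot-noise variance in DN equals the mean signal in DN divided by the gain.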

Joiner, Reyann; Kobayashi, Ken; Winebarger, Amy; Champey, Patrick

2014-01-01

123

Ball lightning observation: an objective video-camera analysis report  

E-print Network

In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

Stefano Sello; Paolo Viviani; Enrico Paganini

2011-02-18

124

Ball lightning observation: an objective video-camera analysis report  

E-print Network

In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

Sello, Stefano; Paganini, Enrico

2011-01-01

125

Charge-coupled device (CCD) television camera for NASA's Galileo mission to Jupiter  

NASA Technical Reports Server (NTRS)

The CCD detector under construction for use in the slow-scan television camera for the NASA Galileo Jupiter orbiter, to be launched in 1985, is presented. The science objectives and the design constraints imposed by the Earth telemetry link, platform residual motion, and the Jovian radiation environment are discussed. Camera optics are inherited from Voyager; filter wavelengths are chosen to enable discrimination of Galilean-satellite surface chemical composition. The CCD design, an 800 by 800-element 'virtual-phase' solid-state silicon image-sensor array with supporting electronics, is described, with detailed discussion of the thermally generated dark current, quantum efficiency, signal-to-noise ratio, and resolution. Tests of the effect of ionizing radiation were performed and are analyzed statistically. An imaging mode using a 2-1/3-sec frame time and on-chip summation of the signal in 2 x 2 blocks of adjacent pixels is designed to limit the effects of the most extreme Jovian radiation. Smearing due to spacecraft/target relative velocity and platform instability will be corrected via an algorithm maximizing spatial resolution at a given signal-to-noise level. The camera is expected to produce 40,000 images of Jupiter and its satellites during the 20-month mission.
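The on-chip 2 x 2 summation mode mentioned above can be emulated in software to see its effect on data volume and signal level. A minimal sketch (a hypothetical helper, not flight code):

```python
import numpy as np

def bin_2x2(frame):
    """Emulate on-chip 2x2 summation: each output pixel is the sum of a
    2x2 block of input pixels. Frame dimensions must be even."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```

Summing blocks quadruples the collected signal per output pixel while halving the resolution in each axis, which is why binning helps in a high-radiation, low-signal regime.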

Klaasen, K. P.; Clary, M. C.; Janesick, J. R.

1982-01-01

126

Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras  

NASA Technical Reports Server (NTRS)

Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
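With two orthogonally aligned cameras, each view constrains two of the three coordinates, and the shared axis can be reconciled directly. A toy sketch under idealized assumptions (pre-scaled coordinates, no lens distortion; the paper's closed-form solution also handles full camera calibration):

```python
import numpy as np

def triangulate_orthogonal(pt_front, pt_side):
    """Combine measurements from two orthogonally mounted cameras into a
    3D point. Hypothetical geometry: the front camera images the x-y plane
    and the side camera the z-y plane, both already in world units.
    pt_front = (x, y), pt_side = (z, y); the shared y is averaged."""
    x, y_front = pt_front
    z, y_side = pt_side
    return np.array([x, 0.5 * (y_front + y_side), z])
```

Averaging the redundant y coordinate is the simplest way to reconcile the two views; disagreement between the two y values also gives a quick consistency check on the stereo match.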

McDowell, Mark; Gray, Elizabeth

2004-01-01

127

Automatic dominant camera motion annotation for video retrieval  

NASA Astrophysics Data System (ADS)

An efficient method is derived to classify the dominant camera motion in video shots. Various 3-D camera motions, including camera pan, tilt, zoom, Z-rotation, and translations, are detected. The method analyzes the optical flow in a decomposed manner: images are divided into sub-regions according to our camera model, and the projected x and y components of the optical flow are analyzed separately in the different sub-regions. Different camera motions are recognized by comparing the computed result with prior known patterns. The optical flow is computed using the Lucas-Kanade method, which is quite efficient due to its non-iterative computation. Our method is efficient and effective because only some mean values and standard deviations are used. The analysis and a detailed description of our method are given in this paper. Experimental results are presented to show the effectiveness of our method.
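A simplified version of this decomposition idea, projecting the flow field onto uniform, radial, and tangential components and thresholding the results, can be sketched as follows (the threshold, scoring, and lack of sub-region handling are illustrative assumptions, not the authors' exact patterns):

```python
import numpy as np

def classify_camera_motion(u, v, x, y, thresh=0.5):
    """Classify dominant camera motion from a dense flow field (u, v)
    sampled at coordinates (x, y) centred on the principal point.
    Pure pan/tilt gives a uniform flow, zoom a radial field, and
    Z-rotation a tangential field."""
    pan, tilt = u.mean(), v.mean()
    r2 = x**2 + y**2 + 1e-12
    zoom = ((u * x + v * y) / r2).mean()   # mean radial expansion rate
    rot = ((v * x - u * y) / r2).mean()    # mean angular rate
    scale = np.sqrt(r2.mean())             # convert rates to pixel units
    scores = {'pan': abs(pan), 'tilt': abs(tilt),
              'zoom': abs(zoom) * scale, 'rotation': abs(rot) * scale}
    label = max(scores, key=scores.get)
    return label if scores[label] > thresh else 'static'
```

Like the abstract's method, this uses only simple statistics of the flow field, so it runs in a single pass over the vectors.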

Xiong, Wei; Lee, John C.

1997-12-01

128

Performance of front-end mixed-signal ASIC for onboard CCD cameras  

NASA Astrophysics Data System (ADS)

We report on the development status of the readout ASIC for an onboard X-ray CCD camera. Quick, low-noise readout is essential for pile-up-free imaging spectroscopy with future highly sensitive telescopes. The dedicated ASIC for ASTRO-H/SXI has sufficient noise performance only at the slow pixel rate of 68 kHz. We have therefore been developing an upgraded ASIC with fourth-order delta-sigma modulators; raising the order of the modulator allows the CCD signals to be oversampled fewer times. The digitized pulse height is a serial bit stream that is decoded with a decimation filter, whose weighting coefficients are optimized by simulation to maximize the signal-to-noise ratio. We present performance figures such as the input equivalent noise (IEN), gain, and effective signal range. Digitized pulse-height data were successfully obtained in the first functional test at pixel rates up to 625 kHz. The IEN is almost the same as that obtained with the chip for ASTRO-H/SXI. The residual from the gain function is about 0.1%, which is better than that of the conventional ASIC by a factor of two. Assuming the CCD gain is the same as that for ASTRO-H, the effective range is 30 keV at the maximum gain setting; by changing the gain, the ASIC can handle signal charges of up to 100 ke-. These results will be fed back into the optimization of the pulse-height decoding filter.
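The decimation step that decodes the modulator's serial bit stream can be illustrated with a simple weighted filter. The triangular weights below are a stand-in choice, since the paper optimizes the actual coefficients by simulation:

```python
import numpy as np

def decimate_bitstream(bits, osr):
    """Decode a 1-bit delta-sigma stream by weighted averaging over each
    oversampling window of length osr. A triangular window is used here
    purely for illustration; the optimal weights depend on the modulator."""
    w = np.bartlett(osr + 2)[1:-1]   # triangular weights without zero ends
    w = w / w.sum()                  # normalize so a constant input is preserved
    n = len(bits) // osr
    frames = np.asarray(bits[:n * osr], float).reshape(n, osr)
    return frames @ w                # one decoded sample per window
```

Because the weights sum to one, a constant bit density maps to the same output level regardless of the window shape; the window shape instead controls how much high-frequency quantization noise leaks through.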

Nakajima, Hiroshi; Inoue, Shota; Nagino, Ryo; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu

2014-07-01

129

High resolution three-dimensional photoacoustic tomography with CCD-camera based ultrasound detection  

PubMed Central

A photoacoustic tomograph based on optical ultrasound detection is demonstrated, which is capable of high resolution real-time projection imaging and fast three-dimensional (3D) imaging. Snapshots of the pressure field outside the imaged object are taken at defined delay times after photoacoustic excitation by use of a charge coupled device (CCD) camera in combination with an optical phase contrast method. From the obtained wave patterns photoacoustic projection images are reconstructed using a back propagation Fourier domain reconstruction algorithm. Applying the inverse Radon transform to a set of projections recorded over a half rotation of the sample provides 3D photoacoustic tomography images in less than one minute with a resolution below 100 µm. The sensitivity of the device was experimentally determined to be 5.1 kPa over a projection length of 1 mm. In vivo images of the vasculature of a mouse demonstrate the potential of the developed method for biomedical applications. PMID:25136491

Nuster, Robert; Slezak, Paul; Paltauf, Guenther

2014-01-01

130

CCD cameras and Spacewire interfaces for HERSCHEL/SCORE suborbital mission  

NASA Astrophysics Data System (ADS)

HERSCHEL/SCORE is a suborbital mission that will observe the solar corona in UV and visible light. The coronagraph for this observation is an Italian instrument; in particular, the CCD camera detectors are developed at the XUVLab of the Department of Astronomy and Space Science of Florence University. These detectors communicate with the onboard computer by means of the IEEE 1355 SpaceWire standard interface (developed in our laboratories) and implement a number of smart, custom imaging procedures. The main innovations of the SCORE coronagraph are the first use in space of a variable retarder plate based on liquid crystals and an optical design capable of simultaneous observation in UV and visible light.

Gherardi, A.; Romoli, M.; Pace, E.; Pancrazzi, M.; Rossi, G.; Paganini, D.; Focardi, M.

2009-04-01

131

ULTRACAM: an ultra-fast, triple-beam CCD camera for high-speed astrophysics  

E-print Network

ULTRACAM is a portable, high-speed imaging photometer designed to study faint astronomical objects at high temporal resolutions. ULTRACAM employs two dichroic beamsplitters and three frame-transfer CCD cameras to provide three-colour optical imaging at frame rates of up to 500 Hz. The instrument has been mounted on both the 4.2-m William Herschel Telescope on La Palma and the 8.2-m Very Large Telescope in Chile, and has been used to study white dwarfs, brown dwarfs, pulsars, black-hole/neutron-star X-ray binaries, gamma-ray bursts, cataclysmic variables, eclipsing binary stars, extrasolar planets, flare stars, ultra-compact binaries, active galactic nuclei, asteroseismology and occultations by Solar System objects (Titan, Pluto and Kuiper Belt objects). In this paper we describe the scientific motivation behind ULTRACAM, present an outline of its design and report on its measured performance.

V. S. Dhillon; T. R. Marsh; M. J. Stevenson; D. C. Atkinson; P. Kerry; P. T. Peacocke; A. J. A. Vick; S. M. Beard; D. J. Ives; D. W. Lunney; S. A. McLay; C. J. Tierney; J. Kelly; S. P. Littlefair; R. Nicholson; R. Pashley; E. T. Harlaftis; K. O'Brien

2007-04-19

132

A reflectance model for non-contact mapping of venous oxygen saturation using a CCD camera  

NASA Astrophysics Data System (ADS)

A method of non-contact mapping of venous oxygen saturation (SvO2) is presented. A CCD camera is used to image skin tissue illuminated alternately by a red (660 nm) and an infrared (800 nm) LED light source. Low cuff pressures of 30-40 mmHg are applied to induce a venous blood volume change with negligible change in the arterial blood volume. A hybrid model combining the Beer-Lambert law and the light diffusion model is developed and used to convert the change in the light intensity to the change in skin tissue absorption coefficient. A simulation study incorporating the full light diffusion model is used to verify the hybrid model and to correct a calculation bias. SvO2 in the fingers, palm, and forearm for five volunteers are presented and compared with results in the published literature. Two-dimensional maps of venous oxygen saturation are given for the three anatomical regions.
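The two-wavelength Beer-Lambert step can be sketched as a closed-form inversion. The extinction coefficients below are approximate illustrative values only, and the paper's hybrid diffusion model additionally corrects the calculation bias:

```python
# Approximate molar extinction coefficients in 1/(mM*cm); illustrative
# values only, not the paper's calibrated model. 800 nm is near the
# isosbestic point, where Hb and HbO2 absorb almost equally.
EPS = {'Hb': {660: 3.23, 800: 0.82}, 'HbO2': {660: 0.32, 800: 0.82}}

def svo2_from_absorbance_change(dA660, dA800):
    """Two-wavelength Beer-Lambert estimate of venous oxygen saturation
    from cuff-induced absorbance changes at 660 nm and 800 nm.
    Solves dA(lam) = [eps_HbO2(lam)*S + eps_Hb(lam)*(1-S)] * dC * L for S,
    with the unknown dC*L cancelling in the ratio R = dA660/dA800."""
    R = dA660 / dA800
    eH1, eH2 = EPS['Hb'][660], EPS['Hb'][800]
    eO1, eO2 = EPS['HbO2'][660], EPS['HbO2'][800]
    return (eH1 - R * eH2) / (R * (eO2 - eH2) - eO1 + eH1)
```

Taking the ratio of the two absorbance changes is what removes the unknown venous blood volume change, leaving saturation as the only unknown.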

Li, Jun; Dunmire, Barbrina; Beach, Kirk W.; Leotta, Daniel F.

2013-11-01

133

0.25mm-thick CCD packaging for the Dark Energy Survey Camera array  

SciTech Connect

The Dark Energy Survey Camera focal plane array will consist of 62 2k x 4k CCDs with a pixel size of 15 microns and a silicon thickness of 250 microns for use at wavelengths between 400 and 1000 nm. Bare CCD die will be received from the Lawrence Berkeley National Laboratory (LBNL). At the Fermi National Accelerator Laboratory, the bare die will be packaged into a custom back-side-illuminated module design. Cold probe data from LBNL will be used to select the CCDs to be packaged. The module design utilizes an aluminum nitride readout board and spacer and an Invar foot. A module flatness of 3 microns over small (1 sqcm) areas and less than 10 microns over neighboring areas on a CCD are required for uniform images over the focal plane. A confocal chromatic inspection system is being developed to precisely measure flatness over a grid up to 300 x 300 mm. This system will be utilized to inspect not only room-temperature modules, but also cold individual modules and partial arrays through flat dewar windows.

Derylo, Greg; Diehl, H.Thomas; Estrada, Juan; /Fermilab

2006-06-01

134

67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

135

Using a Digital Video Camera to Study Motion  

ERIC Educational Resources Information Center

To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

Abisdris, Gil; Phaneuf, Alain

2007-01-01

136

Double Star Measurements at the Southern Sky with a 50 cm Reflector and a Fast CCD Camera in 2014  

NASA Astrophysics Data System (ADS)

A Ritchey-Chrétien reflector with 50 cm aperture was used in Namibia for recordings of double stars with a fast CCD camera and a notebook computer. From superposition of "lucky images", measurements of 91 pairings in 79 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Some images of noteworthy systems are also presented.
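The "lucky imaging" superposition mentioned above, keeping only the sharpest frames and co-adding them after registration, can be sketched as follows (peak-pixel sharpness and integer-pixel shifts are simplifying assumptions; real reductions use subpixel registration):

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.2):
    """Select the sharpest frames by a peak-pixel criterion, shift each
    frame's brightest pixel to the image centre, and average the result."""
    frames = np.asarray(frames, float)
    sharpness = frames.max(axis=(1, 2))          # crude sharpness proxy
    n_keep = max(1, int(len(frames) * keep_frac))
    best = np.argsort(sharpness)[-n_keep:]
    h, w = frames.shape[1:]
    out = np.zeros((h, w))
    for i in best:
        py, px = np.unravel_index(np.argmax(frames[i]), (h, w))
        out += np.roll(frames[i], (h // 2 - py, w // 2 - px), axis=(0, 1))
    return out / n_keep
```

Discarding the seeing-blurred frames before stacking is what lets short-exposure video recover close double-star separations that a long exposure would smear out.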

Anton, Rainer

2015-04-01

137

Scientific CCD technology at JPL  

NASA Technical Reports Server (NTRS)

Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.

Janesick, J.; Collins, S. A.; Fossum, E. R.

1991-01-01

138

An Exploratory Analysis Tool for a Long-Term Video from a Stationary Camera  

E-print Network

An Exploratory Analysis Tool for a Long-Term Video from a Stationary Camera. Ryoji Nogami, Buntarou … analysis of a long-term video from a stationary camera. The tool consists of three key methods: spatial … in the exploratory analysis of a long-term video taken with a stationary camera. This exploratory approach enables …

Tanaka, Jiro

139

Intelligent Multi-Camera Video Surveillance: A Review  

E-print Network

Pattern Recognition Letters 00 (2012) 1-25. Intelligent multi-camera video surveillance … Intelligent multi-camera video surveillance is a multidisciplinary field related … analysis and cooperative video surveillance both with active and static cameras. Detailed descriptions …

Wang, Xiaogang

140

The Terrascope Dataset: A Scripted Multi-Camera Indoor Video Surveillance Dataset with Ground-truth  

E-print Network

The Terrascope Dataset: A Scripted Multi-Camera Indoor Video Surveillance Dataset with Ground-truth … introduces a new video surveillance dataset that was captured by a network of synchronized cameras placed … on an emerging subproblem in the video-surveillance domain: multi-camera wide-area surveillance …

Kale, Amit

141

Unmanned Vehicle Guidance Using Video Camera/Vehicle Model  

NASA Technical Reports Server (NTRS)

A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.

Sutherland, T.

1999-01-01

142

In-flight Video Captured by External Tank Camera System  

NASA Technical Reports Server (NTRS)

In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40-degree field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite the orbiter side were two blade S-Band antennas, about 2 1/2 inches long, that transmitted a 10-watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

2005-01-01

143

OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis  

NASA Technical Reports Server (NTRS)

The HST/Wide Field Camera 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence in a small percentage of flat-field images of a bowtie-shaped contrast that spanned the width of each chip. At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly-related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at a constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high signal flat-field or a series of lower signal flats; a visible light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. 
We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. A preliminary estimate of the decay timescale for one detector is that a drop of 0.1-0.2% occurs over a ten day period, indicating that relatively infrequent cal lamp exposures can mitigate the behavior to extremely low levels.

Collins, Nick

2009-01-01

144

Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera  

SciTech Connect

Large mosaic multi-CCD cameras are the key instruments for modern digital sky surveys. DECam is an extremely red-sensitive 520-megapixel camera designed for the upcoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron-thick fully depleted CCDs, with a focal plane 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting in 2011. To achieve the science goal of constraining the evolution of Dark Energy, stringent requirements are laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to achieve the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high-vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted into a variation of the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the change in spacings, we can measure the flatness to high precision. In method 1, the regular dots are positioned to sub-micron precision and cover the whole focal plane. In method 2, no high precision is required for the grid; instead, a precise XY stage moves the pattern across the whole focal plane, and the variations in spacing are compared when the pattern is imaged by different CCDs. Simulation and real measurements show that the two methods work very well for our purpose and are in good agreement with direct optical measurements.
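The conversion from grid-spacing variation to CCD height offset follows from the pinhole relation s ~ 1/z: a detector region slightly closer to the optics images the grid with slightly larger spacing. A minimal sketch under that idealized geometry (not DECam's calibrated optics):

```python
def height_offset_from_spacing(s_measured, s_reference, z_nominal):
    """Convert a local change in imaged grid spacing into a height offset
    of the CCD along the optical axis. For a pinhole model s ~ 1/z, a
    first-order expansion gives dz = -z * (s_measured - s_ref) / s_ref.
    Idealized geometry, used here only to illustrate the principle."""
    return -z_nominal * (s_measured - s_reference) / s_reference
```

At a working distance of hundreds of millimetres, a micron-level height offset produces only a parts-per-million spacing change, which is why the dot positions (method 1) or the XY stage (method 2) must be so precisely controlled.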

Hao, Jiangang; Estrada, Juan; Cease, Herman; Diehl, H.Thomas; Flaugher, Brenna L.; Kubik, Donna; Kuk, Keivin; Kuropatkine, Nickolai; Lin, Huan; Montes, Jorge; Scarpine, Vic; /Fermilab

2010-06-08

145

Robust camera calibration for sport videos using court models  

NASA Astrophysics Data System (ADS)

We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.

Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

2003-12-01

146

Maximum-likelihood scintillation detection for EM-CCD based gamma cameras  

NASA Astrophysics Data System (ADS)

Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine a good detection efficiency with high spatial resolutions with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For 99mTc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45°), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not consistently differ significantly from the MSA result for all tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 µm full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves the gamma camera performance compared to the MSA.
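The maximum-likelihood idea can be illustrated with a grid search over candidate scintillation positions under a Poisson pixel model. The Gaussian light distribution below is a stand-in for the paper's depth-dependent model:

```python
import numpy as np

def ml_scintillation_position(counts, xs, ys, sigma, amplitude, grid):
    """Grid-search maximum-likelihood estimate of a scintillation position.
    Pixel counts are modelled as Poisson-distributed with a Gaussian
    expected light distribution centred on the candidate position."""
    best, best_ll = None, -np.inf
    for cx, cy in grid:
        mu = amplitude * np.exp(-((xs - cx)**2 + (ys - cy)**2)
                                / (2 * sigma**2))
        mu = np.maximum(mu, 1e-12)   # avoid log(0) far from the centre
        # Poisson log-likelihood, dropping the count-factorial constant.
        ll = np.sum(counts * np.log(mu) - mu)
        if ll > best_ll:
            best, best_ll = (cx, cy), ll
    return best
```

Unlike a simple centroid, the likelihood weighting accounts for the counting statistics in each pixel, which is what drives the resolution and energy-resolution gains reported in the abstract.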

Korevaar, Marc A. N.; Goorden, Marlies C.; Heemskerk, Jan W. T.; Beekman, Freek J.

2011-08-01

147

The 2006 Orionid outburst imaged by all-sky CCD cameras from Spain: meteoroid spatial fluxes and orbital elements  

NASA Astrophysics Data System (ADS)

By using high-resolution low-scan-rate all-sky CCD cameras, the SPanish Meteor Network (SPMN) detected an outburst of Orionid meteors associated with comet 1P/Halley on 2006 October 20-21. This detection was made possible due to the operational concept of the SPMN that involves continuous monitoring of meteor activity throughout the year. Accurate heliocentric orbits have been obtained for three meteors imaged simultaneously from two stations during the outburst. Additional astrometry of 33 single-station meteors indicates that the activity was produced from a conspicuous geocentric radiant located at alpha = 92.2 +/- 0.5 deg and delta = +15.4 +/- 0.6 deg, which is similar to the radiant observed during the 1993 Orionid outburst despite the fact that the last one peaked on a different date. The radiant position obtained by the SPMN is consistent with that derived from digital pictures taken a few hours before from Ankara (Turkey). The extent of the outburst (a background of bright meteors was observed over several days), its absence in other years, and the orbital period of the three Orionid orbits suggest that the outburst could be produced by meteoroids trapped in resonances with Jupiter, but additional data are required. The SPMN's continuous coverage of meteor activity allowed the identification of the main sources of meteors during 2006 October: mostly the Orionid stream, the two branches of the Taurid stream associated with comet 2P/Encke, and the ? Aurigids. Surprisingly, once a detailed analysis of the double-station video meteors was completed, some additional minor stream activity was discovered, that is, the ? Aurigids. In consequence, we also present two accurate orbits of this unexpected, but previously identified, minor shower.

Trigo-Rodríguez, Josep M.; Madiedo, José M.; Llorca, Jordi; Gural, Peter S.; Pujols, Pep; Tezel, Tunc

2007-09-01

148

Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera  

E-print Network

We propose and experimentally demonstrate a new method, based on spatial entanglement, for the absolute calibration of an analog detector. The idea consists in measuring the sub-shot-noise intensity correlation between the two branches of parametric down-conversion, which contain many pairwise-correlated spatial modes. We calibrate a scientific CCD camera, and a preliminary evaluation of the statistical uncertainty indicates the metrological interest of the method.

Brida, Giorgio; Genovese, Marco; Rastello, Maria Luisa; Ruo-Berchera, Ivano

2010-01-01

149

Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera  

E-print Network

We propose and experimentally demonstrate a new method, based on spatial entanglement, for the absolute calibration of an analog detector. The idea consists in measuring the sub-shot-noise intensity correlation between the two branches of parametric down-conversion, which contain many pairwise-correlated spatial modes. We calibrate a scientific CCD camera, and a preliminary evaluation of the statistical uncertainty indicates the metrological interest of the method.

Giorgio Brida; Ivo Pietro Degiovanni; Marco Genovese; Maria Luisa Rastello; Ivano Ruo-Berchera

2010-05-17

150

MOA-cam3: a wide-field mosaic CCD camera for a gravitational microlensing survey in New Zealand  

E-print Network

We have developed a wide-field mosaic CCD camera, MOA-cam3, mounted at the prime focus of the Microlensing Observations in Astrophysics (MOA) 1.8-m telescope. The camera consists of ten E2V CCD44-82 chips, each having 2k x 4k pixels, and covers a 2.2 deg^2 field of view in a single exposure. The optical system is well optimized to realize uniform image quality over this wide field. The chips are constantly cooled by a cryocooler to -80 C, at which temperature dark-current noise is negligible for a typical 1-3 minute exposure. The CCD output charge is converted to a 16-bit digital signal by the GenIII system (Astronomical Research Cameras Inc.), and readout completes within 25 seconds. Readout noise of 2-3 ADU (rms) is also negligible. We prepared a wide-band red filter for an effective microlensing survey, as well as Bessell V and I filters for standard astronomical studies. Microlensing studies have entered a new era that requires more statistics and more rapid alerts to catch exotic light curves; our new system is a powerful tool for meeting both requirements.
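As a quick sanity check on the quoted figures (chip count, pixel format, and field of view are from the abstract; the uniform-plate-scale assumption is mine), the mosaic's pixel scale can be estimated:

```python
import math

# Ten 2k x 4k chips covering 2.2 square degrees imply a plate scale of
# roughly 0.58 arcsec per pixel, assuming the scale is uniform across the field.
n_chips = 10
pixels = n_chips * 2048 * 4096            # total pixel count of the mosaic
fov_arcsec2 = 2.2 * 3600**2               # 2.2 deg^2 expressed in arcsec^2
scale = math.sqrt(fov_arcsec2 / pixels)   # arcsec per pixel (assumed uniform)
```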

T. Sako; T. Sekiguchi; M. Sasaki; K. Okajima; F. Abe; I. A. Bond; J. B. Hearnshaw; Y. Itow; K. Kamiya; P. M. Kilmartin; K. Masuda; Y. Matsubara; Y. Muraki; N. J. Rattenbury; D. J. Sullivan; T. Sumi; P. Tristram; T. Yanagisawa; P. C. M. Yock

2008-04-04

151

Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses  

ERIC Educational Resources Information Center

Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

Liu, Rong; Unger, John A.; Scullion, Vicki A.

2014-01-01

152

Video summarization based on camera motion and a subjective evaluation method  

E-print Network

A method of video summarization based on camera motion is proposed. It consists in selecting frames according to the succession of camera motions. Subjects were asked to watch a video and to create a summary manually.

Paris-Sud XI, Université de

153

A BD Camera with One-Hour HD Video Recording  

Microsoft Academic Search

A Blu-ray disc (BD) camera was developed with high definition (HD) video technologies; (1) 5 mega pixel CMOS imager, (2) H.264 video codec, and (3) BD compatible recording. The BD camera enables one-hour HD video recording.

T. Kato; H. Marumori; A. Watanabe; N. Shimoda; T. Okochi

2008-01-01

154

Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range  

NASA Astrophysics Data System (ADS)

The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high-resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV X-rays detected by this spectrometer primarily interact within the field-free region of the CCD, producing electron clouds which diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio, resulting in worst-case spatial resolution measurements of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. 
A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 μm and 3.0 μm at 530 eV and 680 eV, well below the resolution limit of 5 μm required to improve the spectral resolution by a factor of 2.
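The centroiding idea described above can be illustrated with a minimal sketch. The 16 μm pixel pitch is taken from the abstract; the event values and the simple charge-weighted estimator are illustrative, not the instrument's actual algorithm:

```python
import numpy as np

PITCH_UM = 16.0                 # pixel pitch quoted in the abstract, in microns

def centroid_event(patch, pitch=PITCH_UM):
    """Return the charge-weighted (x, y) position of one split event, in microns."""
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    cx = (xs * patch).sum() / total     # column centroid, in pixels
    cy = (ys * patch).sum() / total     # row centroid, in pixels
    return cx * pitch, cy * pitch

# An electron cloud split across a 3x3 pixel neighbourhood (illustrative counts):
event = [[10,  40,  5],
         [60, 300, 30],
         [ 5,  20,  2]]
x_um, y_um = centroid_event(event)
```

Because the split charge carries sub-pixel information, the centroid lands between pixel centres, which is exactly what pushes the resolution below the pixel pitch.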

Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

2013-12-01

155

Computer-vision-based weed identification of images acquired by 3CCD camera  

NASA Astrophysics Data System (ADS)

Selective application of herbicide to weeds at an early stage of crop growth is an important aspect of site-specific management of field crops. To develop more adaptive on-line weed detection, many researchers are studying image-processing techniques for the computation- and feature-extraction-intensive task of distinguishing weeds from crops and the soil background. This paper investigated the potential of using digital images acquired by a MegaPlus MS3100 3-CCD camera to segment the background soil from the plants in question and further to recognize weeds from the crops using the Matlab scripting language. The near-infrared band (center 800 nm; width 65 nm) was selected principally for segmenting soil, and the cottons were identified from the thistles based on their respective relative areas (pixel counts) in the whole image. The results show adequate recognition: the pixel proportions of soil, cotton leaves, and thistle leaves were 78.24% (-0.20% deviation), 16.66% (+2.71% SD), and 4.68% (-4.19% SD). However, problems remain in separating and localizing single plants because of their clustering in the images. The information in the other two channels, i.e., the green and red bands, needs to be extracted to aid crop/weed discrimination. More optical specimens should be acquired for calibration and validation to establish a weed-detection model that can be applied effectively in the field.
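The area-proportion idea can be sketched as follows: threshold the NIR band to separate soil from vegetation, then report each class's share of the image. The synthetic image and the threshold below are illustrative assumptions, not the paper's data or calibrated method:

```python
import numpy as np

# Stand-in for a single-band NIR image; soil is taken to be darker in NIR here.
rng = np.random.default_rng(1)
nir = rng.uniform(0.0, 1.0, size=(100, 100))

THRESH = 0.7                          # illustrative reflectance threshold
soil_mask = nir < THRESH              # below threshold -> soil
plant_mask = ~soil_mask               # above threshold -> vegetation

soil_pct = 100.0 * soil_mask.mean()   # percentage of soil pixels
plant_pct = 100.0 * plant_mask.mean() # percentage of vegetation pixels
```

In the paper's setting a second step would split the vegetation mask into crop and weed regions by their relative connected-component areas.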

Zhang, Yun; He, Yong; Fang, Hui

2006-09-01

156

Scientists Behind the Camera - Increasing Video Documentation in the Field  

NASA Astrophysics Data System (ADS)

Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

Thomson, S.; Wolfe, J.

2013-12-01

157

Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source  

SciTech Connect

The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams, which is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray-sensitive silicon sensor with a large-format array (2k x 2k), 24 μm square pixels, and a 15 μm thickness. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE of about 10. The X-ray beam intensity was measured using an absolute photodiode with accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat to within ±1%, maximum to minimum. The spectral efficiency was measured in 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy-dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager; these errors are quite different from those found in a visible-light CCD imager.
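A per-pixel sensitivity map measured this way is typically applied as a flat-field correction. A minimal sketch of standard flat-fielding on synthetic data (generic technique, not the NIF-specific pipeline):

```python
import numpy as np

# Simulate a uniformly illuminated scene recorded through a CCD whose pixels
# have slightly different sensitivities, then remove that variation.
rng = np.random.default_rng(2)
true_scene = np.full((64, 64), 1000.0)                   # uniform illumination
gain_map = 1.0 + 0.05 * rng.standard_normal((64, 64))    # per-pixel sensitivity
raw = true_scene * gain_map                              # what the CCD records

flat = raw / raw.mean()          # normalised flat field from the uniform exposure
corrected = raw / flat           # flat-field-corrected image: uniform again
```

Dividing by the normalised flat restores a uniform image, which is why an illumination source flat to ±1%, as above, matters for calibration quality.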

M. J. Haugh and M. B. Schneider

2008-10-31

158

Nighttime Near Infrared Observations of Augustine Volcano Jan-Apr, 2006 Recorded With a Small Astronomical CCD Camera  

NASA Astrophysics Data System (ADS)

Nighttime observations of Augustine Volcano were made during Jan-Apr, 2006 using a small, unfiltered, astronomical CCD camera operating from Homer, Alaska. Time-lapse images of the volcano were made looking across the open water of the Cook Inlet over a slant range of ~105 km. A variety of volcano activities were observed that originated in near-infrared (NIR) 0.9-1.1 micron emissions, which were detectable at the upper limit of the camera passband but were otherwise invisible to the naked eye. These activities included various types of steam releases, pyroclastic flows, rockfalls and debris flows that were correlated very closely with seismic measurements made from instruments located within 4 km on the volcanic island. Specifically, flow events to the east (towards the camera) produced high amplitudes on the eastern seismic stations and events presumably to the west were stronger on western stations. The ability to detect nighttime volcanic emissions in the NIR over large horizontal distances using standard silicon CCD technology, even in the presence of weak intervening fog, came as a surprise, and is due to a confluence of several mutually reinforcing factors: (1) Hot enough (~1000K) thermal emissions from the volcano that the short wavelength portion of the Planck radiation curve overlaps the upper portions (0.9-1.1 micron) of the sensitivity of the silicon CCD detectors, and could thus be detected, (2) The existence of several atmospheric transmission windows within the NIR passband of the camera for the emissions to propagate with relatively small attenuation through more than 10 atmospheres, and (3) in the case of fog, forward Mie scattering.
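Factor (1) can be checked with a back-of-envelope Planck calculation (constants rounded; the 2 μm comparison wavelength is an illustrative choice to show how steeply the short-wavelength tail falls while remaining nonzero in the silicon passband):

```python
import math

# Physical constants (SI, rounded)
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, temp_k):
    """Planck spectral radiance B_lambda(T) in W sr^-1 m^-3."""
    return (2 * H * C**2 / lam_m**5) / (math.exp(H * C / (lam_m * KB * temp_k)) - 1)

# At ~1000 K, radiance at 1 micron (inside the CCD's 0.9-1.1 micron response)
# is tens of times weaker than at 2 microns, but far from zero.
b_1um = planck(1.0e-6, 1000.0)
b_2um = planck(2.0e-6, 1000.0)
```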

Sentman, D.; McNutt, S.; Reyes, C.; Stenbaek-Nielsen, H.; Deroin, N.

2006-12-01

159

Acceptance/operational test report 103-SY and 101-SY tank camera purge system and 103-SY video camera system  

SciTech Connect

This Acceptance/Operational Test Report will document the satisfactory operation of the 103-SY/101-SY Purge Control System and the 103-SY Video Camera System after installation into riser 5B of tank 241-SY-103.

Castleberry, J.L.

1994-11-01

160

Photon-counting gamma camera based on columnar CsI(Tl) optically coupled to a back-illuminated CCD  

PubMed Central

Recent advances have been made in a new class of CCD-based, single-photon-counting gamma-ray detectors which offer sub-100 μm intrinsic resolutions.1–7 These detectors show great promise in small-animal SPECT and molecular imaging and exist in a variety of configurations. Typically, a columnar CsI(Tl) scintillator or a radiography screen (Gd2O2S:Tb) is imaged onto the CCD. Gamma-ray interactions are seen as clusters of signal spread over multiple pixels. When the detector is operated in a charge-integration mode, signal spread across pixels results in spatial-resolution degradation. However, if the detector is operated in photon-counting mode, the gamma-ray interaction position can be estimated using either Anger (centroid) estimation or maximum-likelihood position estimation, resulting in a substantial improvement in spatial resolution.2 Due to the low-light-level nature of the scintillation process, CCD-based gamma cameras implement an amplification stage, either in the CCD via electron multiplication (EMCCDs)8–10 or via an image intensifier prior to the optical path.1 We have applied ideas and techniques from previous systems to our high-resolution LumiSPECT detector.11, 12 LumiSPECT is a dual-modality optical/SPECT small-animal imaging system which was originally designed to operate in charge-integration mode. It employs a cryogenically cooled, high-quantum-efficiency, back-illuminated large-format CCD and operates in single-photon-counting mode without any intermediate amplification process. Operating in photon-counting mode, the detector has an intrinsic spatial resolution of 64 μm, compared to 134 μm in integrating mode. PMID:20890397
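Photon-counting operation, as described above, amounts to finding each event's cluster of above-threshold pixels and applying an Anger (centroid) estimate. A minimal pure-Python sketch with an illustrative frame and threshold (not the LumiSPECT processing chain):

```python
# One gamma-ray event spread over neighbouring pixels (illustrative counts):
frame = [
    [0,  0,   0,  0, 0, 0],
    [0,  5,  90, 12, 0, 0],
    [0, 30, 200, 25, 0, 0],
    [0,  0,   8,  0, 0, 0],
    [0,  0,   0,  0, 0, 0],
]
THRESH = 4

def find_cluster(frame, thresh):
    """Flood-fill the first above-threshold cluster of 4-connected pixels."""
    rows, cols = len(frame), len(frame[0])
    seed = next(((r, c) for r in range(rows) for c in range(cols)
                 if frame[r][c] > thresh), None)
    cluster, seen = [], set()
    stack = [seed] if seed is not None else []
    while stack:
        r, c = stack.pop()
        if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
            continue
        seen.add((r, c))
        if frame[r][c] > thresh:
            cluster.append((r, c, frame[r][c]))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return cluster

cluster = find_cluster(frame, THRESH)
total = sum(v for _, _, v in cluster)
row_c = sum(r * v for r, _, v in cluster) / total   # Anger (centroid) estimate
col_c = sum(c * v for _, c, v in cluster) / total
```

The centroid falls between pixel centres, recovering sub-pixel position from the light spread that would simply blur an integrated image.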

Miller, Brian W.; Barber, H. Bradford; Barrett, Harrison H.; Chen, Liying; Taylor, Sean J.

2010-01-01

161

The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras  

ERIC Educational Resources Information Center

Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

Bird, Jo; Colliver, Yeshe; Edwards, Susan

2014-01-01

162

Video-Camera-Based Position-Measuring System  

NASA Technical Reports Server (NTRS)

A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. 
For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets can also be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of the corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and background imagery.
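One common way to recover (x, y, z) from pixel coordinates is a pinhole-camera model with a target of known physical size. The abstract does not state the system's actual model, so the sketch below, including the focal length and target size, is a hypothetical illustration of the general technique:

```python
# Pinhole model: depth follows from similar triangles once the target's
# true size and its apparent size in pixels are known.
FOCAL_PX = 1000.0          # assumed focal length expressed in pixels
TARGET_MM = 200.0          # assumed true edge length of the square target, mm

def target_xyz(u_px, v_px, edge_px, f=FOCAL_PX, size=TARGET_MM):
    """Recover (x, y, z) in mm, relative to the camera axis, from the target
    centre (u, v) in pixels and its apparent edge length in pixels."""
    z = f * size / edge_px          # depth from apparent scale
    x = u_px * z / f                # lateral offsets scale with depth
    y = v_px * z / f
    return x, y, z

# A 200 mm target that appears 50 px wide, centred at pixel offset (120, -80):
x, y, z = target_xyz(u_px=120.0, v_px=-80.0, edge_px=50.0)
```

With these assumed numbers the target sits 4 m from the camera, offset 0.48 m right and 0.32 m below the optical axis, consistent with millimetre-level accuracy over metre-scale distances once the focal length is calibrated.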

Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

2005-01-01

163

Deep-Sea Video Cameras Without Pressure Housings  

NASA Technical Reports Server (NTRS)

Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. 
If power-supply regulators or filter capacitors were needed, these could be attached in chip form directly onto the back of, and potted with, the imager chip. Because CMOS imagers dissipate little power, the potting would not result in overheating. To minimize the cost of the camera, a fixed lens could be fabricated as part of the plastic case. For improved optical performance at greater cost, an adjustable glass achromatic lens would be mounted in a reservoir filled with transparent oil and subject to the full hydrostatic pressure, and the reservoir would be mounted on the case to position the lens in front of the image sensor. The lens would be adjusted for focus by use of a motor inside the reservoir (oil-filled motors already exist).
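For scale, the hydrostatic pressure the potted chip would face at full ocean depth is easy to estimate (round-number seawater density and depth are my assumptions):

```python
# Hydrostatic pressure P = rho * g * h
RHO = 1025.0        # kg/m^3, typical seawater density (assumed)
G = 9.81            # m/s^2
DEPTH = 11_000.0    # m, roughly the deepest point of the ocean (assumed)

pressure_mpa = RHO * G * DEPTH / 1e6     # pressure in MPa
atmospheres = pressure_mpa / 0.101325    # same pressure in atmospheres
```

The result, on the order of 110 MPa (about 1100 atmospheres), is the load a conventional pod must hold back on one side, and the load the potted chip would instead carry uniformly on all sides.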

Cunningham, Thomas

2004-01-01

164

Cryogenic design of the high speed CCD60 camera for wavefront sensing  

NASA Astrophysics Data System (ADS)

CCD60, developed by e2v technologies, is a 128x128-pixel frame-transfer back-illuminated sensor using EMCCD technology. This kind of detector has attractive characteristics such as high frame rate, low noise, and high quantum efficiency, making it suitable for Adaptive Optics Wave Front Sensor (AO WFS) applications. However, the performance of this detector depends strongly on its temperature: in order to achieve high multiplication gain and low dark-current noise, the CCD60 should be cooled below -45 °C. For this reason, we designed a cooling system for the CCD60 detector based on a thermoelectric cooler. Details of the design, the thermal analysis, and the cooling experiment are presented in this paper, and the multiplication gain after cooling was also tested. The cooling experiment shows that the thermoelectric cooler can cool the CCD to below -60 °C under air-cooled operation at an air temperature of 20 °C, and the multiplication gain test shows that the gain of the CCD60 can exceed 500 at -60 °C.
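The multiplication gain of an EM register follows g = (1 + p)^N for per-stage multiplication probability p over N stages, and p rises as the register gets colder, which is why cooling matters. Solving for the p implied by the reported gain of 500, with an assumed stage count (the abstract does not give the CCD60's register length):

```python
# EM gain model: g = (1 + p) ** N, so p = g ** (1/N) - 1.
N_STAGES = 520                       # assumed number of multiplication stages
gain = 500.0                         # gain reported in the abstract at -60 C
p = gain ** (1.0 / N_STAGES) - 1.0   # implied per-stage probability
```

Even a gain of 500 requires only about a 1% multiplication probability per stage, which illustrates how sensitive the overall gain is to small, temperature-dependent changes in p.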

He, Kai; Ma, Wenli; Wang, Mingfu; Zhou, Xiangdong

2014-11-01

165

ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System  

SciTech Connect

This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space upon loss of nitrogen purge pressure.

Werry, S.M.

1995-06-06

166

End-user viewpoint control of live video from a medical camera array  

E-print Network

We describe the design and implementation of a camera array for real-time streaming of medical video across a high-speed research network, used at McGill's Medical Simulation Centre as one of the tools in a geographically distributed medical teaching application.

Cooperstock, Jeremy R.

167

HDA dataset -DRAFT 1 A Multi-camera video data set for research on  

E-print Network

We present a fully labelled image sequence data set for benchmarking video surveillance algorithms. The data set was acquired from 13 indoor cameras distributed over three floors of one building

Instituto de Sistemas e Robotica

168

Full-disk solar Dopplergrams observed with a one-megapixel CCD camera and a sodium magneto-optical filter  

NASA Technical Reports Server (NTRS)

This paper presents the first two full-disk solar Dopplergrams obtained with the new 1024 x 1024-pixel CCD camera recently installed at the 60-Foot Tower Telescope of the Mt. Wilson Observatory. These Dopplergrams have a spatial resolution of 2.2 arcseconds and were obtained in a total of one minute. They were acquired with a magneto-optical filter designed to obtain images in the two Na D lines. The filter and the camera were operated together as part of the development of a solar oscillations imager experiment currently being designed at JPL for the joint NASA/ESA Solar and Heliospheric Observatory mission. Two difference images, obtained by subtracting two pairs of the Dopplergrams from the initial time series, are also included.
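The Dopplergram principle reduces to converting a measured shift of the Na D line into a line-of-sight velocity via v = c Δλ/λ0. A one-line sketch with an illustrative, not measured, shift:

```python
# Doppler velocity from a wavelength shift in the Na D2 line.
C = 2.998e8                 # speed of light, m/s
LAMBDA_D2 = 589.0e-9        # Na D2 rest wavelength, m

delta_lambda = 1.0e-12      # hypothetical measured shift: 1 picometre
v = C * delta_lambda / LAMBDA_D2    # line-of-sight velocity, m/s
```

A picometre-scale shift already corresponds to roughly half a kilometre per second, which is why narrow-band magneto-optical filtering is needed to resolve solar oscillation velocities.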

Rhodes, Edward J., Jr.; Cacciani, Alessandro; Tomczyk, Steven

1987-01-01

169

Electro-optical testing of fully depleted CCD image sensors for the Large Synoptic Survey Telescope camera  

NASA Astrophysics Data System (ADS)

The LSST Camera science sensor array will incorporate 189 large-format Charge-Coupled Device (CCD) image sensors. Each CCD will include over 16 million pixels and will be divided into 16 equally sized segments, each read through a separate output amplifier. The science goals of the project require CCD sensors with state-of-the-art performance in many respects. The broad survey wavelength coverage requires fully depleted, 100-micrometer-thick, high-resistivity bulk silicon as the imager substrate. Image-quality requirements place strict limits on the image degradation that may be caused by sensor effects: optical, electronic, and mechanical. In this paper we discuss the design of the prototype sensors, the hardware and software used to perform electro-optic testing of the sensors, and a selection of the test results to date. The architectural features that lead to internal electrostatic fields, their various effects on charge collection and transport (including charge diffusion and redistribution), effects on the delivered PSF, and potential impacts on delivered science data quality are addressed.

Doherty, Peter E.; Antilogus, Pierre; Astier, Pierre; Chiang, James; Gilmore, D. Kirk; Guyonnet, Augustin; Huang, Dajun; Kelly, Heather; Kotov, Ivan; Kubanek, Petr; Nomerotski, Andrei; O'Connor, Paul; Rasmussen, Andrew; Riot, Vincent J.; Stubbs, Christopher W.; Takacs, Peter; Tyson, J. Anthony; Vetter, Kurt

2014-07-01

170

Frequency identification of vibration signals using video camera image data.  

PubMed

This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most of the dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, involving an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation, increasing the effective gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
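The non-physical modes induced by insufficient frame rates are aliases, and their apparent frequencies follow from standard sampling theory (the model below is the generic folding formula with illustrative numbers, not the paper's specific model):

```python
# A tone at f Hz sampled at fs frames/s appears at the folded frequency below.
def apparent_frequency(f_hz, fs_hz):
    """Frequency at which a tone of f_hz appears when sampled at fs_hz."""
    f_mod = f_hz % fs_hz
    return min(f_mod, fs_hz - f_mod)

# A 64 Hz vibration recorded by a 60 frame/s camera shows up near 4 Hz,
# a mode with no physical counterpart in the structure.
alias = apparent_frequency(64.0, 60.0)
```

Predicting the alias frequencies this way is what allows the spurious modes to be recognised and excluded from the measured spectrum.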

Jeng, Yih-Nen; Wu, Chia-Hung

2012-01-01

171

Frequency Identification of Vibration Signals Using Video Camera Image Data  

PubMed Central

This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most of the dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, involving an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation, increasing the effective gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026

Jeng, Yih-Nen; Wu, Chia-Hung

2012-01-01

172

Structural Dynamics Analysis and Research for FEA Modeling Method of a Light High Resolution CCD Camera  

NASA Astrophysics Data System (ADS)

The camera features high resolution and a wide swath. To ensure that its high optical precision survives the rigorous dynamic loads of launch, the camera must have high structural rigidity; therefore, a careful study of the dynamic behavior of the camera structure was performed. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamic analysis of the camera was accomplished, for the first time in China, by applying the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analysis of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, to confirm the most reasonable general model; 2) modal analysis of the camera for several cases, from which the inherent frequencies and mode shapes are obtained, further proving the rationality of the structural design of the camera; 3) static analysis of the camera under self-gravity and overloads, giving the corresponding deformation and stress distributions; 4) response calculation of sine vibration of the camera, giving the corresponding response curves and the maximum acceleration responses at the corresponding frequencies. The FEA modeling technique proves accurate and efficient. Based on the analysis and sensitivity results, the dynamic design and engineering optimization of the critical structure of the camera are discussed, providing fundamental technology for the design of forthcoming space optical instruments.

Sun, Jiwen; Wei, Ling; Fu, Danying

2002-01-01

173

Real-Time Video Analysis on an Embedded Smart Camera for Traffic Surveillance  

Microsoft Academic Search

A smart camera combines video sensing, high-level video processing, and communication within a single embedded device. Such cameras are key components in novel surveillance systems. This paper reports on the prototype development of a smart camera for traffic surveillance. We present its scalable architecture, comprised of a CMOS sensor, digital signal processors (DSP), and a network processor. We further discuss

Michael Bramberger; Josef Brunner; Bernhard Rinner; Helmut Schwabach

2004-01-01

174

Using Stationary-Dynamic Camera Assemblies for Wide-area Video Surveillance and Selective Attention  

E-print Network

Using Stationary-Dynamic Camera Assemblies for Wide-area Video Surveillance and Selective Attention ... solution is to cover an extended surveillance area by multiple stationary (or master) cameras with wide ... patterns in the surveillance zone. Based on some pre-specified criteria, the stationary cameras identify

Wang, Yuan-Fang

175

Video Surveillance using a Multi-Camera Tracking and Fusion System  

E-print Network

Video Surveillance using a Multi-Camera Tracking and Fusion System. Zhong Zhang, Andrew Scanlon ... the architecture of a typical single camera surveillance system. Section 3 explains how this architecture ... Algorithms and Applications - M2SFA2 2008, Marseille, France (2008).

Paris-Sud XI, Université de

176

Reconstruction of the Pose of Uncalibrated Cameras via User-Generated Videos  

E-print Network

of an uncalibrated video-based city exploration system, where moving cameras allow the user to move between common points in purpose-captured videos, and to swap video at such a 'portal' via an aesthetically pleasing transition. A graph of the ways videos ...

Bennett, Stuart; Lasenby, Joan; Kokaram, Anil; Inguva, Sasi; Birkbeck, Neil

2014-01-01

177

Camera View-based American Football Video Analysis Yi Ding and Guoliang Fan  

E-print Network

Camera View-based American Football Video Analysis. Yi Ding and Guoliang Fan, School of Electrical ... is proposed for American football video analysis, where semantic units are defined as latent ... analysis. Experimental results on several real football videos demonstrate the effectiveness of the proposed

Fan, Guoliang

178

Video-based Animal Behavior Analysis From Multiple Cameras Xinwei Xue and Thomas C. Henderson  

E-print Network

Abstract-- It has become increasingly popular to study animal behaviors with the assistance of video motion analysis.

Henderson, Thomas C.

179

Testing the e2v CCD47-20 as the new sensor for the SOFIA target acquisition and tracking cameras  

NASA Astrophysics Data System (ADS)

The telescope of the Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras, the Wide Field Imager (WFI), Fine Field Imager (FFI) and Focal Plane Imager (FPI). All three cameras use Thomson TH7888A CCD sensors (now offered by e2v) which are quite suitable in terms of their geometry and readout speed. However, their quantum efficiency and dark current rate are not comparable to newer CCD sensors now widely used in astronomy. The Deutsches SOFIA Institut (DSI) under contract of the German Aerospace Center (DLR) has therefore initiated an upgrade project of the cameras with high-sensitivity and low dark current CCD sensors, the e2v CCD47-20 BI AIMO. The back-illuminated architecture allows for high quantum efficiency, while the inverted mode operation lowers the dark current significantly. Both features enable the cameras to use fainter stars for tracking. The expected improvements in sensitivity range between 1.2 and 2.5 stellar magnitudes for the three cameras. In this paper we present results of laboratory and on-sky tests with the new sensor, obtained with a commercial camera platform.

Wiedemann, Manuel; Wolf, Jürgen; Röser, Hans-Peter

2010-07-01

180

Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam  

NASA Astrophysics Data System (ADS)

When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10^7 cm^-2 s^-1, which makes it possible to observe sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level of the intensifier. The results obtained should be seen as a starting point for meeting the various requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operating principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

2005-04-01

181

Variable high-resolution color CCD camera system with online capability for professional photo studio application  

NASA Astrophysics Data System (ADS)

Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bits per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for an image of 4096 × 4096 pixels (48 MByte) and 40 seconds for an image of 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4" × 5" view cameras. This paper describes the eyelike camera concept with its essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

1998-04-01

182

Data acquisition system based on the Nios II for a CCD camera  

Microsoft Academic Search

The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol has been implemented at NAOC, which is composed of an interface board with an Altera FPGA, 32 MB of SDRAM and some

Binhua Li; Keliang Hu; Chunrong Wang; Yangbing Liu; Chun He

2006-01-01

183

Night Vision Camera  

NASA Technical Reports Server (NTRS)

PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

1996-01-01

184

Liquid-crystal-display projector-based modulation transfer function measurements of charge-coupled-device video camera systems.  

PubMed

We demonstrate the ability to measure the system modulation transfer function (MTF) of both color and monochrome charge-coupled-device (CCD) video camera systems with a liquid-crystal-display (LCD) projector. Test matrices programmed to the LCD projector were chosen primarily to have a flat power spectral density (PSD) when averaged along one dimension. We explored several matrices and present results for a matrix produced with a random-number generator, a matrix of sequency-ordered Walsh functions, a pseudorandom Hadamard matrix, and a pseudorandom uniformly redundant array. All results are in agreement with expected filtering. The Walsh matrix and the Hadamard matrix show excellent agreement with the matrix from the random-number generator. We show that shift-variant effects between the LCD array and the CCD array can be kept small. This projector test method offers convenient measurement of the MTF of a low-cost video system. Such characterization is useful for an increasing number of machine vision applications and metrology applications. PMID:18337921
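The measurement principle summarized in this abstract (drive the imaging system with a flat-PSD test pattern, so that the output power spectrum traces the squared MTF) can be sketched numerically. This is a minimal simulation under assumed conditions, not the authors' code: a 3-tap blur kernel stands in for the real LCD-projector/CCD optical chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat-PSD test pattern, like the random-number-generator matrix in the paper.
pattern = rng.standard_normal((256, 256))

# Hypothetical camera blur: a 3-tap kernel applied along rows, standing in
# for the real projector/camera optics (illustrative only).
kernel = np.array([0.25, 0.5, 0.25])
captured = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, pattern)

def psd_rows(img):
    """Row-wise power spectral density, averaged over all rows."""
    return (np.abs(np.fft.rfft(img, axis=1)) ** 2).mean(axis=0)

# For a flat input PSD, output PSD / input PSD = |MTF|^2.
mtf_est = np.sqrt(psd_rows(captured) / psd_rows(pattern))

# Analytic transfer function of the 3-tap kernel, for comparison.
freqs = np.fft.rfftfreq(pattern.shape[1])
mtf_true = np.abs(0.5 + 0.5 * np.cos(2 * np.pi * freqs))
```

Comparing `mtf_est` against `mtf_true` shows close agreement, with small residuals from the edge truncation of the finite convolution; averaging over many rows (or, in the paper, several test matrices) suppresses the estimation noise.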

Teipen, B T; MacFarlane, D L

2000-02-01

185

[Research Award providing funds for a tracking video camera  

NASA Technical Reports Server (NTRS)

The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors, guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

Collett, Thomas

2000-01-01

186

Use of a CCD camera for the thermographic study of a transient liquid phase bonding process in steel  

NASA Astrophysics Data System (ADS)

The bonding of steel pieces and the development of novel soldering methods, appropriate to the extended variety of applications of steels nowadays, give the sensing of temperature an outstanding role in any metallurgical process. Transient liquid phase bonding (TLPB) processes have been successfully employed to join metals, among them steels. A thin layer of metal A, with a liquidus temperature TLA, is located between two pieces of metal B, with a liquidus temperature TLB higher than TLA. The joining zone is heated up to a temperature T (TLA < T < TLB). A CCD camera with 752×582 pixels has been adapted for temperature measurements through the coil of the furnace. The output of the camera is digitized and visualized on a 14-inch monitor. The temperature is calculated using its correlation with the gray tone shown on the monitor, which is measured by means of suitable software. The technical specifications of the camera and the modifications introduced to adapt it for this work are presented. The calibration of the camera and the method employed in the measurements are described. The measured temperatures are corrected for the emissivity of the material surfaces and the reflected environmental radiation. The thermographs obtained are shown and the results are discussed. We conclude that a low-priced camera may be used to measure temperature in this range with acceptable accuracy.
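The gray-tone-to-temperature correlation described in this record amounts to a calibrated lookup. Below is a minimal sketch of that idea; the calibration table, the linear interpolation, and the crude emissivity rescaling are all illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

# Hypothetical calibration table: gray tones recorded while imaging a
# reference target held at known temperatures (values are illustrative).
gray_cal = np.array([60, 95, 130, 170, 210])        # 8-bit gray tones
temp_cal = np.array([900, 1000, 1100, 1200, 1300])  # degrees Celsius

def gray_to_temp(gray, emissivity=1.0):
    """Interpolate temperature from gray tone; a crude emissivity
    correction rescales the measured signal before the lookup."""
    corrected = np.asarray(gray, dtype=float) / emissivity
    return np.interp(corrected, gray_cal, temp_cal)

print(gray_to_temp(130))  # -> 1100.0
```

In practice the correlation need not be linear between calibration points, and the paper additionally corrects for reflected environmental radiation, which this sketch omits.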

Castro, Eduardo H.; Epelbaum, Carlos; Carnero, Angel; Arcondo, Bibiana

2001-03-01

187

Acceptance/operational test procedure 241-AN-107 Video Camera System  

SciTech Connect

This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

Pedersen, L.T.

1994-11-18

188

A 2 million pixel FIT-CCD image sensor for HDTV camera system  

Microsoft Academic Search

The image area of the FIT (frame-interline-transfer) CCD (charge-coupled-device) image sensor is 14.0 mm (H) × 7.9 mm (V), the effective number of pixels is 1920 (H) × 1036 (V), and the unit cell size of a pixel is 7.3 µm (H) × 7.6 µm (V). These specifications are for the high-definition-television (HDTV) format. The horizontal shift register consists of dual-channel, two-phase CCDs driven

K. Yonemoto; T. Iizuku; S. Nakamura; K. Harada; K. Wada; M. Negishi; H. Yamada; T. Tsunakawa; K. Shinohara; T. Ishimaru; Y. Kamide; T. Yamasaki; M. Yamagishi

1990-01-01

189

Characterization of the luminance and shape of ash particles at Sakurajima volcano, Japan, using CCD camera images  

NASA Astrophysics Data System (ADS)

We develop a new method for characterizing the properties of volcanic ash at the Sakurajima volcano, Japan, based on automatic processing of CCD camera images. Volcanic ash is studied in terms of both luminance and particle shape. A monochromatic CCD camera coupled with a stereomicroscope is used to acquire digital images through three filters that pass red, green, or blue light. On single ash particles, we measure the apparent luminance, corresponding to 256 tones for each color (red, green, and blue) for each pixel occupied by ash particles in the image, and the average and standard deviation of the luminance. The outline of each ash particle is captured from a digital image taken under transmitted light through a polarizing plate. Also, we define a new quasi-fractal dimension (Dqf) to quantify the complexity of the ash particle outlines. We examine two ash samples, each including about 1000 particles, which were erupted from the Showa crater of the Sakurajima volcano, Japan, on February 09, 2009 and January 13, 2010. The apparent luminance of each ash particle shows a lognormal distribution. The average luminance of the ash particles erupted in 2009 is higher than that of those erupted in 2010, which is in good agreement with the results obtained from component analysis under a binocular microscope (i.e., the number fraction of dark juvenile particles is lower for the 2009 sample). The standard deviations of apparent luminance have two peaks in the histogram, and the quasi-fractal dimensions show different frequency distributions between the two samples. These features are not recognized in the results of conventional qualitative classification criteria or the sphericity of the particle outlines. Our method can characterize and distinguish ash samples, even for ash particles that have gradual property changes, and is complementary to component analysis.
This method also enables the relatively fast and systematic analysis of ash samples that is required for petrologic monitoring of ongoing activity, such as at the Sakurajima volcano.

Miwa, Takahiro; Shimano, Taketo; Nishimura, Takeshi

2015-01-01

190

Station Cameras Capture New Videos of Hurricane Katia - Duration: 5:36.  

NASA Video Gallery

Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

191

Fused Six-Camera Video of STS-134 Launch - Duration: 1:19.  

NASA Video Gallery

Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

192

Fast CCD camera for x-ray photon correlation spectroscopy and time-resolved x-ray scattering and imaging  

SciTech Connect

A new, fast x-ray detector system is presented for high-throughput, high-sensitivity, time-resolved x-ray scattering and imaging experiments, most especially x-ray photon correlation spectroscopy (XPCS). After a review of the architectures of different CCD chips and a critical examination of their suitability for use in a fast x-ray detector, the new detector hardware is described. In brief, its principal component is an inexpensive commercial camera, the SMD1M60, originally designed for optical applications and modified for use as a direct-illumination x-ray detector. The remainder of the system consists of two Coreco Imaging PC-DIG frame grabber boards located inside a Dell PowerEdge 6400 server. Each frame grabber sits on its own PCI bus and handles data from 2 of the CCD's 4 taps. The SMD1M60 is based on a fast, frame-transfer, 4-tap CCD chip, read out at 12-bit resolution at frame rates of up to 62 Hz for full-frame readout and up to 500 Hz for one-sixteenth-frame readout. Experiments to characterize the camera's suitability for XPCS and small-angle x-ray scattering (SAXS) are presented. These experiments show that single photon events are readily identified and localized to within a pixel index or so. This is a sufficiently fine spatial resolution to maintain the speckle contrast at an acceptable value for XPCS measurements. The detective quantum efficiency of the SMD1M60 is 49% for directly detected 6.3 keV x rays. The effects of data acquisition strategies that permit near-real-time data compression are also determined and discussed. Overall, the SMD1M60 detector system represents a major improvement in the technology for time-resolved x-ray experiments that require an area detector with time resolutions in the few-milliseconds-to-few-seconds range, and it should have wide applications extending beyond XPCS.

Falus, P.; Borthwick, M.A.; Mochrie, S.G.J. [Department of Physics, Yale University, New Haven, Connecticut 06520 and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Departments of Physics and Applied Physics, Yale University, New Haven, Connecticut 06520 (United States)

2004-11-01

193

Using a Video Camera to Measure the Radius of the Earth  

ERIC Educational Resources Information Center

A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…

Carroll, Joshua; Hughes, Stephen

2013-01-01

194

An Evaluation and Demonstrations of COTS Components to Implement Wearable Video Cameras on Spaceport Technicians  

Microsoft Academic Search

In this report we evaluate the feasibility of having Spaceport Florida technicians wear video cameras as part of their clothing to record and allow for live viewing, via wireless links, of everything they do in an operational procedure. Our focus is on low-cost, commercial-off-the-shelf (COTS) hardware and software components. With wearable video cameras, real-time consultation with offsite experts becomes possible.

Christine Bexley

195

Measuring Night-Sky Brightness with a Wide-Field CCD Camera  

E-print Network

We describe a system for rapidly measuring the brightness of the night sky using a mosaic of CCD images obtained with a low-cost automated system. The portable system produces millions of independent photometric measurements covering the entire sky, enabling the detailed characterization of natural sky conditions and light domes produced by cities. The measurements are calibrated using images of standard stars contained within the raw data, producing results closely tracking the Johnson V astronomical standard. The National Park Service has collected hundreds of data sets at numerous parks since 2001 and is using these data for the protection and monitoring of the night-sky visual resource. This system also allows comprehensive characterization of sky conditions at astronomical observatories. We explore photometric issues raised by the broadband measurement of the complex and variable night-sky spectrum, and potential indices of night-sky quality.

D. M. Duriscoe; C. B. Luginbuhl; C. A. Moore

2007-03-27

196

Method for separating video camera motion from scene motion for constrained 3D displacement measurements  

NASA Astrophysics Data System (ADS)

Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
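The ex post facto correction described in this abstract reduces to a calibration fit followed by a subtraction. The sketch below illustrates the idea on synthetic one-dimensional pixel tracks with an assumed linear reference-to-scene mapping; the actual method operates on imagery from two rigidly coupled cameras, and all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Calibration phase: scene object known to be stationary, camera moving ---
ref_cal = rng.normal(0.0, 2.0, 200)   # reference-camera motion (pixels)
scale = 1.6                           # unknown reference->scene pixel gain
scene_cal = scale * ref_cal           # apparent scene motion (object still)

# Fit the linear map from reference-camera pixels to apparent scene pixels.
gain, offset = np.polyfit(ref_cal, scene_cal, 1)

# --- Measurement phase: object moving AND camera moving ---
t = np.linspace(0.0, 10.0, 500)
true_obj = 3.0 * np.sin(t)            # true object displacement (pixels)
ref_meas = rng.normal(0.0, 2.0, 500)  # synchronously measured camera motion
scene_meas = true_obj + scale * ref_meas

# Subtract the mapped camera motion ex post facto.
corrected = scene_meas - (gain * ref_meas + offset)

print(np.max(np.abs(corrected - true_obj)))  # near zero
```

The key design point, as in the paper, is that the reference camera views a known-stationary object, so any apparent motion it records is attributable to the camera assembly itself.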

Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

2014-09-01

197

Comparison of EM-CCD and scientific CMOS based camera systems for high resolution X-ray imaging and tomography applications  

NASA Astrophysics Data System (ADS)

We have developed an Electron Multiplying (EM) CCD based, high frame rate camera system using an optical lens system for X-ray imaging and tomography. Current state-of-the-art systems generally use scientific CMOS sensors that have a readout noise of a few electrons and operate at high frame rates. Through the use of electron multiplication, the EM-CCD camera is able to operate with a sub-electron equivalent readout noise and a frame rate of up to 50 Hz (full frame). The EM-CCD-based camera system has a major advantage over existing technology in that it has a high signal-to-noise ratio even at very low signal levels. This allows radiation-sensitive samples to be analysed with low flux X-ray beams, which greatly reduces the beam damage. This paper shows that under the conditions of this experiment the EM-CCD camera system has a comparable spatial resolution performance to the scientific CMOS based imaging system and has a superior signal-to-noise ratio.

Tutt, J. H.; Hall, D. J.; Soman, M. R.; Holland, A. D.; Warren, A. J.; Connolley, T.; Evagora, A. M.

2014-06-01

198

NPS assessment of color medical displays using a monochromatic CCD camera  

NASA Astrophysics Data System (ADS)

This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
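The synthetic-image construction described in this abstract (a weighted sum of the R, G, B and dark-screen captures, followed by NPS analysis) can be sketched as follows. The luminance-style weights, the pixel pitch, and the white-noise image model are assumptions for illustration; in the paper the weights come from colorimeter calibration of the camera.

```python
import numpy as np

rng = np.random.default_rng(2)
N, px = 128, 0.2   # ROI size (pixels) and an assumed pixel pitch (mm)

# Simulated captures of the display's uniform R, G, B patterns and the dark
# screen (white-noise fluctuations; purely illustrative, not measured data).
r_img, g_img, b_img, dark = (100 + rng.normal(0, 2, (N, N)) for _ in range(4))

# Weights for the synthetic intensity image. These luminance-style weights
# are an assumption; the paper derives them from colorimeter calibration.
w_r, w_g, w_b = 0.2126, 0.7152, 0.0722
synthetic = w_r * r_img + w_g * g_img + w_b * b_img - dark

# 2-D NPS of the zero-mean fluctuation image.
fluct = synthetic - synthetic.mean()
nps2d = (px * px / (N * N)) * np.abs(np.fft.fft2(fluct)) ** 2

# Sanity check: the NPS integrated over frequency equals the image variance.
df = 1.0 / (N * px)  # frequency bin width (cycles/mm)
print(abs(nps2d.sum() * df * df - fluct.var()))  # ~0 (Parseval)
```

A full analysis would average the 2-D NPS over many ROIs and detrend any slowly varying non-uniformity before the transform; this sketch shows only the core normalization.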

Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

2012-02-01

199

NPS assessment of color medical image displays using a monochromatic CCD camera  

NASA Astrophysics Data System (ADS)

This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.

Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

2012-10-01

200

Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras  

NASA Technical Reports Server (NTRS)

The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument which is currently being developed by NASA's Marshall Space Flight Center (MSFC) and the National Astronomical Observatory of Japan (NAOJ). The goal of this instrument is to observe and detect the Hanle effect in the scattered Lyman-Alpha UV (121.6 nm) light emitted by the Sun's chromosphere to make measurements of the magnetic field in this region. In order to make accurate measurements of this effect, the performance characteristics of the three on-board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include: quantum efficiency, gain, dark current, noise, and linearity. Each of these must meet predetermined requirements in order to achieve satisfactory performance for the mission. The cameras must be able to operate with a gain of no greater than 2 e(-)/DN, a noise level less than 25 e(-), a dark current level which is less than 10 e(-)/pixel/s, and a residual nonlinearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.

Joiner, Reyann; Kobayashi, Ken; Winebarger, Amy; Champey, Patrick

2014-01-01

201

Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for Surveillance System  

E-print Network

Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for Surveillance System ... instructions from humans. In [6], Mahonen proposed a wireless intelligent surveillance camera system. ... Abstract: This paper describes the design and implementation of a hybrid intelligent surveillance system

Evans, Brian L.

202

Automated Registration of High Resolution Images from Slide Presentation and Whiteboard Handwriting via a Video Camera  

E-print Network

Weihong Li, Hao Tang and Zhigang Zhu, Department of Computer Science ... a PowerPoint (PPT) slide presentation and a whiteboard handwriting capture system, when used together, could provide ... one with printed notes and the other with handwriting notes; we use a low-cost digital camera as a bridge to align

Zhu, Zhigang

203

Algorithms for the Automatic Identification of MARFEs and UFOs in JET Database of Visible Camera Videos  

Microsoft Academic Search

MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The

A. Murari; M. Camplani; B. Cannas; D. Mazon; F. Delaunay; P. Usai; J. F. Delmond

2010-01-01

204

Content-adaptive high-resolution hyperspectral video acquisition with a hybrid camera system.  

PubMed

We present a hybrid camera system that combines optical designs with computational processing to achieve content-adaptive high-resolution hyperspectral video acquisition. In particular, we record two video streams: one high-spatial resolution RGB video and one low-spatial resolution hyperspectral video in which the recorded points are dynamically selected using a spatial light modulator (SLM). Then through video-frame registration and a spatio-temporal spreading of the co-located spectral/RGB information, video with high spatial and spectral resolution is produced. The sampling patterns on the SLM are generated on-the-fly according to the scene content, which fully exploits the self-adaptivity of the hybrid camera system. With an experimental prototype, we demonstrate significantly improved accuracy and efficiency as compared to the state-of-the-art. PMID:24562246

Ma, Chenguang; Cao, Xun; Wu, Rihui; Dai, Qionghai

2014-02-15

205

Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case  

PubMed Central

The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178

Beier, Hope T.; Ibey, Bennett L.

2014-01-01

206

Feasibility study of transmission of OTV camera control information in the video vertical blanking interval  

NASA Technical Reports Server (NTRS)

The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

White, Preston A., III

1994-01-01

207

ON RELATIVISTIC DISK SPECTROSCOPY IN COMPACT OBJECTS WITH X-RAY CCD CAMERAS  

SciTech Connect

X-ray charge-coupled devices (CCDs) are the workhorse detectors of modern X-ray astronomy. Typically covering the 0.3-10.0 keV energy range, CCDs are able to detect photoelectric absorption edges and K shell lines from most abundant metals. New CCDs also offer resolutions of 30-50 (E/{Delta}E), which is sufficient to detect lines in hot plasmas and to resolve many lines shaped by dynamical processes in accretion flows. The spectral capabilities of X-ray CCDs have been particularly important in detecting relativistic emission lines from the inner disks around accreting neutron stars and black holes. One drawback of X-ray CCDs is that spectra can be distorted by photon 'pile-up', wherein two or more photons may be registered as a single event during one frame time. We have conducted a large number of simulations using a statistical model of photon pile-up to assess its impacts on relativistic disk line and continuum spectra from stellar-mass black holes and neutron stars. The simulations cover the range of current X-ray CCD spectrometers and operational modes typically used to observe neutron stars and black holes in X-ray binaries. Our results suggest that severe photon pile-up acts to falsely narrow emission lines, leading to falsely large disk radii and falsely low spin values. In contrast, our simulations suggest that disk continua affected by severe pile-up are measured to have falsely low flux values, leading to falsely small radii and falsely high spin values. The results of these simulations and existing data appear to suggest that relativistic disk spectroscopy is generally robust against pile-up when this effect is modest.

Miller, J. M.; Cackett, E. M. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); D'Ai, A. [Dipartimento di Scienze Fisiche ed Astronomiche, Universita di Palermo, Palermo (Italy); Bautz, M. W.; Nowak, M. A. [Kavli Institute for Astrophysics and Space Research, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Bhattacharyya, S. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Burrows, D. N.; Kennea, J. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, College Park, PA 16802 (United States); Fabian, A. C.; Reis, R. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 OHA (United Kingdom); Freyberg, M. J.; Haberl, F. [Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstrasse, 85748 Garching (Germany); Strohmayer, T. E. [Astrophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Tsujimoto, M., E-mail: jonmm@umich.ed [Japan Aerospace Exploration Agency, Institute of Space and Astronomical Sciences, 3-1-1 Yoshino-dai, Sagamihara, Kanagawa 229-8510 (Japan)

2010-12-01
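The pile-up mechanism described in the abstract above is easy to reproduce with a toy Monte Carlo. The sketch below is a minimal illustration, not the authors' statistical model: it draws Poisson-distributed photon counts per frame time and measures what fraction of registered events actually contain two or more photons. The function names and the example rate of 0.1 photons per frame are assumptions for demonstration only.

```python
import math
import random

def pileup_fraction(lam: float) -> float:
    """Analytic fraction of *detected* events that are piled up, i.e. the
    probability that a frame registering at least one photon actually
    received two or more, for Poisson arrivals with mean lam per frame."""
    p_ge1 = 1.0 - math.exp(-lam)          # P(>= 1 photon in a frame)
    p_ge2 = p_ge1 - lam * math.exp(-lam)  # P(>= 2 photons in a frame)
    return p_ge2 / p_ge1

def simulate_pileup(lam: float, n_frames: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo check: count frames where >= 2 photons land and would
    therefore be registered as a single (distorted) event."""
    rng = random.Random(seed)
    threshold = math.exp(-lam)
    detected = piled = 0
    for _ in range(n_frames):
        # Knuth's method for sampling a Poisson variate.
        k, p = 0, rng.random()
        while p > threshold:
            k += 1
            p *= rng.random()
        if k >= 1:
            detected += 1
            if k >= 2:
                piled += 1
    return piled / detected

print(round(pileup_fraction(0.1), 4))  # ≈ 0.0492: about 5% of events piled up
```

At low rates the piled-up fraction is roughly lam/2, which is why bright sources observed with long frame times are the problematic regime the simulations target.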

208

Digital video technology and production 101: lights, camera, action.  

PubMed

Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion. PMID:24335238

Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

2014-01-01

209

Arbitrary viewpoint video synthesis from multiple uncalibrated cameras.  

PubMed

We propose a method for arbitrary view synthesis from an uncalibrated multiple-camera system, targeting large spaces such as soccer stadiums. In Projective Grid Space (PGS), a three-dimensional space defined by the epipolar geometry between two basis cameras in the camera system, we reconstruct three-dimensional shape models from silhouette images. Using the three-dimensional shape models reconstructed in the PGS, we obtain a dense map of point correspondences between reference images. The obtained correspondences can synthesize the image of an arbitrary view between the reference images. We also propose a method for merging the synthesized images with a virtual background scene in the PGS. We apply the proposed methods to image sequences taken by a multiple-camera system installed in a large concert hall. The synthesized virtual-camera image sequences have sufficient quality to demonstrate the effectiveness of the proposed method. PMID:15369084

Yaguchi, Satoshi; Saito, Hideo

2004-02-01

210

Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing  

ERIC Educational Resources Information Center

Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

2008-01-01

211

Risk mitigation process for utilization of commercial off-the-shelf (COTS) parts in CCD camera for military applications  

NASA Astrophysics Data System (ADS)

This paper presents the lessons learned during the design and development of a high performance cooled CCD camera for military applications utilizing common commercial off the shelf (COTS) parts. Our experience showed that concurrent evaluation and testing of high risk COTS must be performed to assess their performance over the required temperature range and other special product requirements such as fuel vapor compatibility, EMI and shock susceptibility, etc. Technical, cost and schedule risks for COTS parts must also be carefully evaluated. The customer must be involved in the selection and evaluation of such parts so that the performance limitations of the selected parts are clearly understood. It is equally important to check with vendors on the availability and obsolescence of the COTS parts being considered since the electronic components are often replaced by newer, better and cheaper models in a couple of years. In summary, this paper addresses the major benefits and risks associated with using commercial and industrial parts in military products, and suggests a risk mitigation approach to ensure a smooth development phase, and predictable performance from the end product.

Ahmad, Anees; Batcheldor, Scott; Cannon, Steven C.; Roberts, Thomas E.

2002-09-01

212

Smart digital cameras for product quality inspection  

Microsoft Academic Search

This paper describes two examples of state-of-the-art digital video cameras which provide features that can enhance the ease and/or accuracy of product quality inspection. One camera incorporates a progressive scan CCD and a built-in frame store allowing capture of high resolution still images of fast moving objects without the need for a strobe light, mechanical shutter, or

R. A. Easton

1996-01-01

213

3D Video Applications and Intelligent Video Surveillance Camera and its VLSI Design  

Microsoft Academic Search

In this demonstration, the core processing engines of two video applications, 3D video and intelligent video surveillance, are demonstrated. The developed algorithms and their VLSI design results are shown with hardware prototypes processing input video on-the-fly. In addition to the processing engine design, the development tools for efficiently designing these chips are also demonstrated.

Shao-yi Chien; Chi-sheng Shih; Mong-kai Ku; Chia-lin Yang; Yao-wen Chang; Tei-wei Kuo; Liang-gee Chen

2007-01-01

214

A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network  

NASA Astrophysics Data System (ADS)

Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

Li, Yiming; Bhanu, Bir

215

Single video camera method for using scene metrics to measure constrained 3D displacements  

NASA Astrophysics Data System (ADS)

There are numerous ways to use video cameras to measure 3D dynamic spatial displacements. When the scene geometry is unknown and the motion is unconstrained, two calibrated cameras are required. The data from both scenes are combined to perform the measurements using well-known stereoscopic techniques. There are occasions where the measurement system can be simplified considerably while still providing a calibrated spatial measurement of a complex dynamic scene. For instance, if the sizes of objects in the scene are known a priori, these data may be used to provide scene-specific spatial metrics to compute calibration coefficients. With this information, it is not necessary to calibrate the camera before use, nor is it necessary to precisely know the geometry between the camera and the scene. Field-of-view coverage and sufficient spatial and temporal resolution are the main camera requirements. Further simplification may be made if the 3D displacements of interest are small or constrained enough to allow for an accurate 2D projection of the spatial variables of interest. With proper camera orientation and scene marking, the apparent pixel movements can be expressed as a linear combination of the underlying spatial variables of interest. In many cases, a single camera may be used to perform complex 3D dynamic scene measurements. This paper will explain and illustrate a technique for using a single uncalibrated video camera to measure the 3D displacement of the end of a constrained rigid body subject to a perturbation.

Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

2014-09-01
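The "linear combination" step in the abstract above can be sketched as a small least-squares problem. Everything below is illustrative: the 2x2 sensitivity matrix, its pixels-per-mm entries, and the displacement values are assumed numbers standing in for coefficients that would really be derived from scene metrics (known object sizes) and the camera orientation.

```python
import numpy as np

# Hypothetical sensitivity matrix mapping the two constrained spatial
# variables (e.g., tip deflections in mm along two body axes) to apparent
# pixel displacements; entries are pixels per mm (assumed values).
A = np.array([[12.0,  1.5],
              [ 0.8, 10.0]])

true_disp = np.array([0.25, -0.40])  # mm, ground truth for the demo
pixels = A @ true_disp               # the pixel motion the camera would see

# Recover the spatial displacement by inverting the linear relation;
# least squares generalizes to more tracked points than unknowns.
est, *_ = np.linalg.lstsq(A, pixels, rcond=None)
print(np.round(est, 3))  # recovers the assumed displacement [0.25, -0.4] mm
```

With more scene markings than unknowns, the same `lstsq` call averages out per-marker pixel noise, which is the practical payoff of the single-camera formulation.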

216

BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera  

NASA Technical Reports Server (NTRS)

The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

2000-01-01

217

Simulation of a Video Surveillance Network Using Remote Intelligent Security Cameras  

Microsoft Academic Search

The high continuous bit-rates carried by digital fiber-based video surveillance networks have prompted demands for intelligent sensor devices to reduce bandwidth requirements. These devices detect and report only significant events, thus optimizing the use of recording and transmission hardware. The Remote Intelligent Security Camera (R.I.S.C.) concept devolves local autonomy to geographically distant cameras, enabling them to switch between tasks in

J. R. Renno; M. J. Tunnicliffe; Graeme A. Jones; David J. Parish

2001-01-01

218

Cinematized Reality: Cinematographic 3D Video System for Daily Life Using Multiple Outer/Inner Cameras  

Microsoft Academic Search

The purpose of Cinematized Reality is to record unexpected moments in people's lives and create movies that look as if they were made from real, expertly captured film footage. The approach toward Cinematized Reality is to generate free-view video streams from multiple videos. The proposed system reconstructs 3D models of the capturing space using outer environmental cameras and an inner

Hansung Kim; Ryuuki Sakamoto; K. Kogure; I. Kitahara

2006-01-01

219

Video geographic information system using mobile mapping in mobilephone camera  

NASA Astrophysics Data System (ADS)

This paper develops core technologies such as automatic shape extraction from images (video), spatio-temporal data processing, and efficient modeling, making it inexpensive and fast to build and process large 3D geographic datasets. Upgrading and maintaining the technologies is also easy thanks to the component-based system architecture. We therefore designed and implemented a Video mobile GIS using a real-time database system, which consists of a real-time GIS engine, a middleware, and a mobile client.

Kang, Jinsuk; Lee, Jae-Joon

2013-12-01

220

Camera/Video Phones in Schools: Law and Practice  

ERIC Educational Resources Information Center

The emergence of mobile phones with built-in digital cameras is creating legal and ethical concerns for school systems throughout the world. Users of such phones can instantly email, print or post pictures to other MMS1 phones or websites. Local authorities and schools in Britain, Europe, USA, Canada, Australia and elsewhere have introduced…

Parry, Gareth

2005-01-01

221

Laser Imaging Video Camera Sees Through Fire, Fog, Smoke  

NASA Technical Reports Server (NTRS)

Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

2015-01-01

222

Two Carriers Used to Suspend an Underwater Video Camera from a Boat  

Microsoft Academic Search

Two underwater video camera carriers were designed by modifying hydraulic sounding weights and suspension equipment normally used for stream gaging. Both carriers were suspended from the bow of a boat and were used in a river up to 13 m deep with velocities to 3 m/s. One carrier consisted of an aluminum casing mounted on a single hydraulic sounding weight.

Phillip A. Groves; Aaron P. Garcia

1998-01-01

223

Blu-ray Disc/DVD Compatible Optical Slim Pickup for Video Camera Drives  

Microsoft Academic Search

A blu-ray disc (BD)/DVD compatible optical slim pickup for video camera drives has been developed. In order to achieve both BD/DVD compatibility and the compact size, a dual objective lens method, collimator lens actuation for spherical aberration compensation, and shared front monitor (FM) were newly developed.

K. Yamazaki; H. Mori; T. Kawamura; Y. Kitada; T. Kamisada

2008-01-01

224

Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera  

ERIC Educational Resources Information Center

Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…

Levesque, Luc

2014-01-01
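The backwards-spinning-wheel illusion the article analyzes follows directly from frequency folding. A minimal sketch (function name and numbers are illustrative, not from the article): fold the true rotation frequency into the Nyquist band of the camera's frame rate; a negative folded frequency is the familiar wheel that appears to rotate backwards.

```python
import math

def apparent_frequency(f_true: float, f_sample: float) -> float:
    """Frequency an observer infers from samples taken at f_sample,
    folded into the Nyquist band [-f_sample/2, +f_sample/2).
    A negative result means the motion appears reversed."""
    f = math.fmod(f_true + f_sample / 2.0, f_sample)
    if f < 0:
        f += f_sample
    return f - f_sample / 2.0

# A spoke pattern repeating 50 times per second filmed at 30 frames/s:
print(apparent_frequency(50.0, 30.0))   # → -10.0 (appears to spin backwards)
# Sampling above the Nyquist rate (> 100 samples/s) preserves the true value:
print(apparent_frequency(50.0, 120.0))  # → 50.0
```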

225

Tracing Handwriting on Paper Document under Video Camera  

E-print Network

This paper describes a system that traces handwriting on a paper document under a video camera. Detecting written inks is not simple when handwriting is made over printed

Seok, Jae-Hyun; Levasseur, Simon

226

Observation of hydrothermal flows with acoustic video camera  

NASA Astrophysics Data System (ADS)

Evaluating hydrothermal discharge and its diffusion process along the ocean ridge is necessary for understanding the balance of mass and flux in the ocean, the ecosystems around hydrothermal fields, and so on. However, it has been difficult to measure hydrothermal activity without the disturbance caused by the observation platform (submersible, ROV, or AUV). We wanted an observational method that could capture hydrothermal discharge behavior as it is. DIDSON (Dual-Frequency IDentification SONar) is an acoustic lens-based sonar. It has a sufficiently high resolution and rapid refresh rate to substitute for optical systems in turbid or dark water where they fail. DIDSON operates at two frequencies, 1.8 MHz or 1.1 MHz, forming 96 beams spaced 0.3° apart or 48 beams spaced 0.6° apart, respectively. It images out to 12 m at 1.8 MHz and 40 m at 1.1 MHz. The transmit and receive beams are formed with acoustic lenses with rectangular apertures, made of polymethylpentene plastic and FC-70 liquid. This physical beam forming allows DIDSON to consume only 30 W of power. DIDSON updates its image at between 20 and 1 frames/s depending on the operating frequency and the maximum range imaged, and communicates with its host over Ethernet. The Institute of Industrial Science, University of Tokyo (IIS), recognizing DIDSON's superior performance, has sought new ways to utilize it. Observation systems that IIS has developed based on DIDSON include a waterside surveillance system, an automatic fish-length measurement system, an automatic fish-counting system, a diagnosis system for deterioration of underwater structures, and so on. The next challenge is to develop a DIDSON-based observation method for hydrothermal discharge from seafloor vents. We expected DIDSON to reveal the whole image of a hydrothermal plume as well as detail inside the plume.
In October 2009, we conducted seafloor reconnaissance using the manned deep-sea submersible Shinkai6500 on the Central Indian Ridge at 18-20° S, where hydrothermal plume signatures had previously been detected. DIDSON was mounted on top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. Seven Shinkai6500 dives were conducted during this cruise, and acoustic video images of hydrothermal plumes were captured on three of them; these are among the only acoustic video images of hydrothermal plumes in existence. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

2010-12-01

227

Composite video and graphics display for multiple camera viewing system in robotics and teleoperation  

NASA Technical Reports Server (NTRS)

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

Diner, Daniel B. (inventor); Venema, Steven C. (inventor)

1991-01-01

228

Composite video and graphics display for camera viewing systems in robotics and teleoperation  

NASA Technical Reports Server (NTRS)

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

Diner, Daniel B. (inventor); Venema, Steven C. (inventor)

1993-01-01

229

Free-viewpoint image generation from a video captured by a handheld camera  

NASA Astrophysics Data System (ADS)

In general, free-viewpoint images are generated from images captured by a camera array aligned on a straight line or circle. A camera array can capture a synchronized dynamic scene, but it is expensive and requires great care to align exactly. In contrast, a handheld camera is easily available and can readily capture a static scene. We propose a method that generates free-viewpoint images from a video captured by a handheld camera in a static scene. To generate free-viewpoint images, view images from several viewpoints and the camera pose/position of each viewpoint are needed. In a previous work, a checkerboard pattern had to be captured in every frame to calculate these parameters; in another work, a pseudo-perspective projection was assumed to estimate them, an assumption that limits the camera movement. In this paper, however, we calculate these parameters by "Structure From Motion". Additionally, we propose a method for selecting reference images from the many captured frames, and a method that uses projective block matching and a graph-cuts algorithm with reconstructed feature points to estimate the depth map of a virtual viewpoint.

Takeuchi, Kota; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki

2011-03-01

230

An analysis of CCD camera noise and its effect on pressure sensitive paint instrumentation system signal-to-noise ratio  

Microsoft Academic Search

Quantitative pressure measurements can be acquired by pressure sensitive paint (PSP) instrumentation systems incorporating charge-coupled devices (CCD) for PSP photoluminescence image detection. However, intrinsic CCD noise corrupts the PSP image, manifesting as erroneous measurements of pressure and the corresponding coefficient of pressure (Cp). This effect is analyzed and quantified in terms of PSP image signal-to-noise ratio (SNR). The image acquisition

D. R. Mendoza

1997-01-01

231

A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.  

PubMed

This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

2011-01-01

232

A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis  

PubMed Central

This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

2011-01-01

233

Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy  

NASA Technical Reports Server (NTRS)

Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of a X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

1984-01-01

234

Development of a high-resolution surveillance camera with 520 TV lines  

Microsoft Academic Search

We have newly developed a high-resolution surveillance camera that achieves 520 TV lines of horizontal resolution using a single-chip 410k-pixel color CCD. This camera realizes about a 10% improvement in horizontal resolution compared with a conventional camera having 470 TV lines. This paper describes three technologies used to improve horizontal resolution. These technologies can also be applied to a video

Y. Mori; S. Okada; T. Mise; H. Murata; E. Azuma

2001-01-01

235

Video and acoustic camera techniques for studying fish under ice: a review and comparison  

SciTech Connect

Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed and successful applications of previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.

Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

2006-09-05

236

Video observations of the 2011 Draconids by the all-sky camera AMOS  

NASA Astrophysics Data System (ADS)

Our contribution to the 2011 Draconids campaign using the all-sky camera AMOS of the Slovak Video Meteor Network (SVMN) is presented. The ground-based observations were performed in cooperation with the Central European Meteor Network (CEMeNt), the Polish Fireball Network (PFN), and the Italian Meteor and TLE Network (IMTN). The airborne observations were performed in cooperation with the Astronomical Institute of the Czech Academy of Sciences and the Deutsches Zentrum für Luft- und Raumfahrt, Germany, within the EUFAR program. The processing of the data obtained by the AMOS camera during the airborne DLR expedition is described.

Toth, Juraj; Gajdos, Stefan; Vilagi, Jozef; Zigo, Pavol; Kalmancok, Dusan; Duris, Frantisek; Kornos, Leonard

2013-01-01

237

A novel method to reduce time investment when processing videos from camera trap studies.  

PubMed

238

A Novel Method to Reduce Time Investment When Processing Videos from Camera Trap Studies  

PubMed Central

Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movements than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs. PMID:24918777
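The frame-to-frame variation filter described above can be sketched in a few lines. The frame representation (flat lists of grey values), the mean-absolute-difference metric, and the threshold value are illustrative assumptions, not details from the paper:

```python
def frame_variation(prev, curr):
    """Mean absolute pixel difference between two frames (flat lists of grey values)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def keep_recording(frames, threshold):
    """Flag a recording for review if any frame-to-frame variation exceeds the threshold."""
    return any(frame_variation(a, b) > threshold
               for a, b in zip(frames, frames[1:]))

# Synthetic example: a static scene vs. one with a large moving object.
static = [[10] * 100 for _ in range(5)]
moving = [[10] * 100 for _ in range(5)]
for i, frame in enumerate(moving):
    for px in range(i * 10, i * 10 + 30):   # a 30-pixel "animal" shifting each frame
        frame[px] = 200

assert not keep_recording(static, threshold=5.0)   # candidate for discarding
assert keep_recording(moving, threshold=5.0)       # kept for manual review
```

Because a large target species moves many pixels per frame, tuning the threshold trades a small loss of target recordings against a large reduction in footage to watch, mirroring the 5% to 20% loss versus 53% to 76% discard rates reported in the abstract.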

Swinnen, Kristijn R. R.; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

2014-01-01

239

A digital underwater video camera system for aquatic research in regulated rivers  

USGS Publications Warehouse

We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

Martin, Benjamin M.; Irwin, Elise R.

2010-01-01

240

Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases  

E-print Network

Our research focuses on analysing human activities according to a known behaviorist scenario, in the case of noisy and high-dimensional collected data. The data come from the monitoring of patients with dementia diseases by wearable cameras. We define a structural model of video recordings based on a Hidden Markov Model. New spatio-temporal features, color features and localization features are proposed as observations. First results in recognition of activities are promising.

Karaman, Svebor; Mégret, Rémi; Dovgalecs, Vladislavs; Dartigues, Jean-François; Gaëstel, Yann

2010-01-01

241

Video and acoustic camera techniques for studying fish under ice: a review and comparison  

Microsoft Academic Search

Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams

Robert P. Mueller; Richard S. Brown; Haakon H. Hop; Larry Moulton

2006-01-01

242

Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230  

SciTech Connect

The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate.

Pardini, A.F., Westinghouse Hanford

1996-07-16

243

Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras  

NASA Technical Reports Server (NTRS)

The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

1973-01-01

244

Low-complexity camera digital signal imaging for video document projection system  

NASA Astrophysics Data System (ADS)

We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

Hsia, Shih-Chang; Tsai, Po-Shien

2011-04-01

245

Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)  

SciTech Connect

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

Strehlow, J.P.

1994-08-24

246

An efficient coding scheme for surveillance videos captured by stationary cameras  

NASA Astrophysics Data System (ADS)

In this paper, a new scheme is presented to improve the coding efficiency of sequences captured by stationary cameras (or namely, static cameras) for video surveillance applications. We introduce two novel kinds of frames (namely background frame and difference frame) for input frames to represent the foreground/background without object detection, tracking or segmentation. The background frame is built using a background modeling procedure and periodically updated while encoding. The difference frame is calculated using the input frame and the background frame. A sequence structure is proposed to generate high quality background frames and efficiently code difference frames without delay, and then surveillance videos can be easily compressed by encoding the background frames and difference frames in a traditional manner. In practice, the H.264/AVC encoder JM 16.0 is employed as a build-in coding module to encode those frames. Experimental results on eight in-door and out-door surveillance videos show that the proposed scheme achieves 0.12 dB~1.53 dB gain in PSNR over the JM 16.0 anchor specially configured for surveillance videos.
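A minimal sketch of the background-frame/difference-frame idea, assuming a simple running-average background model. The update rate and row-of-pixels representation are illustrative; in the actual scheme these frames are encoded with the H.264/AVC reference encoder (JM 16.0):

```python
def update_background(bg, frame, alpha=0.05):
    """Periodically refresh the background model with a running average."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def difference_frame(frame, bg):
    """Residual encoded in place of the raw input frame."""
    return [f - round(b) for f, b in zip(frame, bg)]

def reconstruct(diff, bg):
    """Decoder side: add the shared background back to the residual."""
    return [d + round(b) for d, b in zip(diff, bg)]

bg = [100.0] * 8                                   # modeled background row
frame = [100, 100, 180, 185, 100, 100, 100, 100]   # foreground over pixels 2-3
diff = difference_frame(frame, bg)
assert diff == [0, 0, 80, 85, 0, 0, 0, 0]          # sparse residual codes cheaply
assert reconstruct(diff, bg) == frame              # lossless round trip
bg = update_background(bg, frame)                  # background slowly absorbs change
```

With a stationary camera the residual is zero almost everywhere, which is why encoding difference frames instead of raw frames yields the PSNR gains reported.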

Zhang, Xianguo; Liang, Luhong; Huang, Qian; Liu, Yazhou; Huang, Tiejun; Gao, Wen

2010-07-01

247

Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis  

NASA Technical Reports Server (NTRS)

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyance. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
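The error metric used here — the distance from the known grid coordinates to the digitized coordinates — can be sketched as follows. Expressing it as a percentage of an assumed field-of-view size is an illustrative choice, not necessarily the paper's exact normalization, and the grid points are invented:

```python
import math

def distortion_error_percent(known, measured, field_size):
    """Per-point error: distance from the known grid position to the digitized
    position, expressed as a percentage of the field of view."""
    return [100.0 * math.dist(k, m) / field_size for k, m in zip(known, measured)]

# Invented grid points (units arbitrary); points near the lens edge distort most.
known = [(0.0, 0.0), (50.0, 0.0), (90.0, 90.0)]
measured = [(0.0, 0.0), (50.5, 0.0), (84.0, 84.0)]
errs = distortion_error_percent(known, measured, field_size=100.0)
assert errs[0] == 0.0
assert errs[1] == 0.5
assert abs(errs[2] - 8.49) < 0.01   # comparable to the ~8% worst case reported
```

The pattern the study found — small errors near the optical axis, large ones at the periphery — is what motivates its advice to avoid the outermost regions of a wide-angle lens.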

Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

1994-01-01

248

Flat Field Anomalies in an X-Ray CCD Camera Measured Using a Manson X-Ray Source  

SciTech Connect

The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. It determines how accurately NIF can point the laser beams and is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 µm square pixels, and 15 µm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 2 mA, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 12. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1.5% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. The efficiency pattern follows the properties of Si. The maximum quantum efficiency is 0.71. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation was >8% at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was less than the measurement uncertainty below 4 keV. We were also able to observe debris on the CCD chip. The debris showed maximum contrast at the lowest energy used, 930 eV, and disappeared by 4 keV. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.
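The sensitivity variation and debris contrast reported above are essentially max-to-min contrasts across a flat-field image. A minimal sketch, with an invented four-pixel "image":

```python
def flat_field_contrast(pixels):
    """Max-to-min sensitivity variation of a flat-field image, relative to its mean."""
    mu = sum(pixels) / len(pixels)
    return (max(pixels) - min(pixels)) / mu

# Invented four-pixel flat field: responses within ±1.5% of the mean,
# i.e. about 3% max-to-min contrast.
flat = [0.985, 1.0, 1.015, 1.0]
assert abs(flat_field_contrast(flat) - 0.03) < 1e-6
```

In practice the contrast is evaluated per energy band, which is how the >8% variation at 8470 eV and its disappearance below 4 keV were established.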

Michael Haugh

2008-03-01

249

My camera, my buddy? legal and sociological assessment of the potential of video surveillance in eHomeCare  

Microsoft Academic Search

Cameras will soon be installed inside patients' homes for health purposes. Although the positive effects of the use of cameras on patients in homecare have been demonstrated, it also makes care more privacy intrusive, capturing us on film in our most personal, most intimate environment. This paper examines the possibilities, opportunities and challenges video surveillance presents in eHomeCare from two

G. Verhenneman; A. Veys

2009-01-01

250

Combined video and laser camera for inspection of old mine shafts L. Cauvin (INERIS, Institut National de l'Environnement industriel et des RISques)  

E-print Network

Combined video and laser camera for inspection of old mine shafts, L. Cauvin (INERIS, Institut National de l'Environnement industriel et des RISques). For cavities whose existence is known but whose fundamental characteristics are not, and where human inspection is not possible or is difficult for safety reasons, INERIS has developed a combined video and laser camera that can reach and inspect such cavities.

Boyer, Edmond

251

Developments of engineering model of the X-ray CCD camera of the MAXI experiment onboard the International Space Station  

Microsoft Academic Search

MAXI, Monitor of All-sky X-ray Image, is an X-ray observatory on the Japanese Experimental Module (JEM) Exposed Facility (EF) on the International Space Station (ISS). MAXI is a slit scanning camera which consists of two kinds of X-ray detectors: one is a one-dimensional position-sensitive proportional counter with a total area of ~5000 cm2, the Gas Slit Camera (GSC), and the other

Emi Miyata; Chikara Natsukari; Tomoyuki Kamazuka; Daisuke Akutsu; Hirohiko Kouno; Hiroshi Tsunemi; Masaru Matsuoka; Hiroshi Tomida; Shiro Ueno; Kenji Hamaguchi; Isao Tanaka

2002-01-01

252

A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.  

PubMed

Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration. PMID:22356964

Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

2012-01-01

253

Real time speed estimation of moving vehicles from side view images from an uncalibrated video camera.  

PubMed

In order to estimate the speed of a moving vehicle with side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points from the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. Computed velocity vectors are defined in the video image coordinate system, and displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object-space information in a mathematical sense, which is obtained from the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side view camera images mentioned here. PMID:22399909
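The two-step procedure (track points across frames, then convert pixel displacements over elapsed time into object-space speed) can be sketched like this. The single metres-per-pixel scale factor is a simplifying stand-in for the full calibration and orientation transform the paper addresses:

```python
def pixel_velocity(p1, p2, dt):
    """Velocity vector (pixels/s) of a tracked point between two frames."""
    return ((p2[0] - p1[0]) / dt, (p2[1] - p1[1]) / dt)

def speed_kmh(p1, p2, dt, metres_per_pixel):
    """Scale an image-space speed to object space and convert to km/h."""
    vx, vy = pixel_velocity(p1, p2, dt)
    magnitude_px = (vx * vx + vy * vy) ** 0.5
    return magnitude_px * metres_per_pixel * 3.6

# A point tracked across two successive frames of 25 fps video (dt = 0.04 s).
assert round(speed_kmh((100, 40), (112, 40), 0.04, metres_per_pixel=0.05), 1) == 54.0
```

In the real uncalibrated-camera setting, the hard part is precisely this scale factor: it varies across the image and must be recovered from the camera's calibration and orientation parameters rather than assumed constant.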

Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sitki

2010-01-01

254

Traffic camera system development  

NASA Astrophysics Data System (ADS)

The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
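The look-up-table control described here can be sketched as a light-level-to-parameter mapping. The sensor bands and parameter values below are hypothetical, not the deployed system's settings:

```python
# Hypothetical look-up table keyed on light-sensor level (lux); all parameter
# values are illustrative, not the deployed system's settings.
LUT = [
    (50_000, {"shutter": 1 / 4000, "gain_db": 0,  "gamma": 0.45}),  # bright sun
    (5_000,  {"shutter": 1 / 2000, "gain_db": 6,  "gamma": 0.45}),  # overcast/twilight
    (0,      {"shutter": 1 / 2000, "gain_db": 18, "gamma": 0.60}),  # night
]

def camera_params(lux):
    """Pick shutter/gain/gamma settings from the roadside computer's table
    according to the light-sensor reading."""
    for threshold, params in LUT:
        if lux >= threshold:
            return params
    return LUT[-1][1]

assert camera_params(80_000)["gain_db"] == 0    # daylight: no gain needed
assert camera_params(120)["gain_db"] == 18      # night: boost gain
```

A table-driven design like this keeps the camera's behavior predictable and lets the roadside computer retune for local conditions without reprogramming the camera itself.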

Hori, Toshi

1997-04-01

255

Modelling the spectral response of the Swift-XRT CCD camera: experience learnt from in-flight calibration  

NASA Astrophysics Data System (ADS)

Context: Since its launch in November 2004, Swift has revolutionised our understanding of gamma-ray bursts. The X-ray telescope (XRT), one of the three instruments on board Swift, has played a key role in providing essential positions, timing, and spectroscopy of more than 300 GRB afterglows to date. Although Swift was designed to observe GRB afterglows with power-law spectra, Swift is spending an increasing fraction of its time observing more traditional X-ray sources, which have more complex spectra. Aims: The aim of this paper is a detailed description of the CCD response model used to compute the XRT RMFs (redistribution matrix files), the changes implemented to it based on measurements of celestial and on-board calibration sources, and current caveats in the RMFs for the spectral analysis of XRT data. Methods: The RMFs are computed via Monte-Carlo simulations based on a physical model describing the interaction of photons within the silicon bulk of the CCD detector. Results: We show that the XRT spectral response calibration was complicated by various energy offsets in photon counting (PC) and windowed timing (WT) modes related to the way the CCD is operated in orbit (variation in temperature during observations, contamination by optical light from the sunlit Earth and increase in charge transfer inefficiency). We describe how these effects can be corrected for in the ground processing software. We show that the low-energy response, the redistribution in spectra of absorbed sources, and the modelling of the line profile have been significantly improved since launch by introducing empirical corrections in our code when it was not possible to use a physical description. We note that the increase in CTI became noticeable in June 2006 (i.e. 14 months after launch), but the evidence of a more serious degradation in spectroscopic performance (line broadening and change in the low-energy response) due to large charge traps (i.e. faults in the Si crystal) became more significant after March 2007. We describe efforts to handle such changes in the spectral response. Finally, we show that the commanded increase in the substrate voltage from 0 to 6 V on 2007 August 30 reduced the dark current, enabling the collection of useful science data at higher CCD temperature (up to -50 °C). We also briefly describe the plan to recalibrate the XRT response files at this new voltage. Conclusions: We show that the XRT spectral response is described well by the public response files for line and continuum spectra in the 0.3-10 keV band in both PC and WT modes.

Godet, O.; Beardmore, A. P.; Abbey, A. F.; Osborne, J. P.; Cusumano, G.; Pagani, C.; Capalbi, M.; Perri, M.; Page, K. L.; Burrows, D. N.; Campana, S.; Hill, J. E.; Kennea, J. A.; Moretti, A.

2009-02-01

256

CCD camera systems and support electronics for a White Light Coronagraph and X-ray XUV solar telescope  

NASA Technical Reports Server (NTRS)

Two instruments, a White Light Coronagraph and an X-ray XUV telescope built into the same housing, share several electronic functions. Each instrument uses a CCD as an imaging detector, but due to different spectral requirements, each uses a different type. Hardware reduction, required by the stringent weight and volume allocations of the interplanetary mission, is made possible by the use of a microprocessor. Most instrument functions are software controlled with the end use circuits treated as peripherals to the microprocessor. The instruments are being developed for the International Solar Polar Mission.

Harrison, D. C.; Kubierschky, K.; Staples, M. H.; Carpenter, C. H.

1980-01-01

257

A video camera is mounted on the second stage of a Delta II rocket  

NASA Technical Reports Server (NTRS)

At Launch Pad 17-A, Cape Canaveral Air Station, workers finish mounting a video camera on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

1999-01-01

258

Acute gastroenteritis and video camera surveillance: a cruise ship case report.  

PubMed

A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having the accident and even bringing the towel stained with diarrhoea back to the pool towels bin. She was isolated until the next port where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. To quickly identify and isolate contagious passengers and crew and ensure their compliance are key elements in outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks. PMID:24677123

Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

2014-01-01

259

Scalable software architecture for on-line multi-camera video processing  

NASA Astrophysics Data System (ADS)

In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
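A toy sketch of the PU/Central Unit split, assuming one thread per camera and a shared result queue. The detection step is a placeholder, and the real system's acquisition and supervision logic is far richer:

```python
import queue
import threading

def processing_unit(cam_id, frames, results):
    """Each PU handles the acquisition and processing phases for one camera."""
    for frame in frames:
        results.put((cam_id, sum(frame)))   # placeholder for 2D object detection

results = queue.Queue()
cameras = {0: [[1, 2], [3, 4]], 1: [[5, 6]]}   # toy per-camera frame streams

# Central Unit: supervise one PU per camera, then collect the results.
pus = [threading.Thread(target=processing_unit, args=(cid, frames, results))
       for cid, frames in cameras.items()]
for t in pus:
    t.start()
for t in pus:
    t.join()

assert sorted(results.queue) == [(0, 3), (0, 7), (1, 11)]
```

Decoupling PUs from the Central Unit through a thread-safe queue is what lets the architecture add cameras without touching the supervision logic; a production system would use processes or machines instead of threads for CPU-bound detection.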

Camplani, Massimo; Salgado, Luis

2011-03-01

260

A semantic autonomous video surveillance system for dense camera networks in Smart Cities.  

PubMed

This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

2012-01-01

261

System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)  

SciTech Connect

The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP) which provides a feed through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up close weld and corrosion inspection roles in US T operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

Pardini, A.F.

1998-01-27

262

A stroboscopic technique for using CCD cameras in flow visualization systems for continuous viewing and stop action photography  

NASA Technical Reports Server (NTRS)

A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.

Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.

1992-01-01

263

Aug 7, 2008 Researchers in the US unveil a silicon-based CCD camera that mimics the  

E-print Network

Electronic eye camera mimics the shape of the human eye: scientists have overcome the limitations of conventional planar imaging systems, producing cameras in which not only the lenses but also the geometrical layouts of the detector arrays can match the curved shape of a human eye. The researchers would also like to explore integrating the technology into biomedical devices that can be implanted into the human body.

Rogers, John A.

264

Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis  

NASA Technical Reports Server (NTRS)

A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

2002-01-01

266

Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras  

USGS Publications Warehouse

GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
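The two-standard-deviation spike criterion used on the radiance time series is easy to reproduce; the toy radiance series below is invented for illustration:

```python
from statistics import mean, pstdev

def radiance_spikes(series):
    """Indices of samples elevated more than two standard deviations above the
    series mean, used here to flag candidate short-burst effusive events."""
    mu, sigma = mean(series), pstdev(series)
    return [i for i, r in enumerate(series) if r > mu + 2 * sigma]

# Toy radiance series: steady tube flow punctuated by two brief bursts.
series = [10, 11, 10, 12, 11, 40, 10, 11, 12, 45, 11, 10]
assert radiance_spikes(series) == [5, 9]
```

As in the paper, flagged spikes are only candidates: 19 of the 24 spikes in the 1997 series were confirmed against video-recorded effusive events, so ground truth remains part of the workflow.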

Harris, A.J.L.; Thornber, C.R.

1999-01-01
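The spike-detection criterion described in the abstract above (radiance samples elevated more than two standard deviations above the mean) can be sketched as follows; the function name and the synthetic 15-min series are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def detect_radiance_spikes(radiance, k=2.0):
    """Return indices of samples more than k standard deviations above the mean.

    A minimal sketch of the thresholding criterion; real GOES processing
    would also handle clouds, gaps, and background drift.
    """
    r = np.asarray(radiance, dtype=float)
    threshold = r.mean() + k * r.std()
    return np.flatnonzero(r > threshold)

# Synthetic radiance series: steady background plus two short bursts,
# standing in for lava fountaining and a lava-lake overflow.
rng = np.random.default_rng(0)
series = rng.normal(100.0, 2.0, 500)
series[120] += 30.0
series[340] += 25.0
print(detect_radiance_spikes(series))
```

The same thresholding would flag both bursts; in practice each flagged spike would then be checked against video-camera records and ground-observer reports, as the abstract describes.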

267

Multiformat video and laser cameras: history, design considerations, acceptance testing, and quality control. Report of AAPM Diagnostic X-Ray Imaging Committee Task Group No. 1.  

PubMed

Acceptance testing and quality control of video and laser cameras are relatively simple, especially with the use of the SMPTE test pattern. Photographic quality control is essential if one wishes to maintain the quality of video and laser cameras. In addition, photographic quality control must be carried out with the film used clinically in the video and laser cameras, and with a sensitometer producing a light spectrum similar to that of the video or laser camera. Before the end of the warranty period, a second acceptance test should be carried out. At this time the camera should produce the same results as noted during the initial acceptance test. With appropriate acceptance testing and quality control, the video and laser cameras should produce quality images throughout the life of the equipment. PMID:8497235

Gray, J E; Anderson, W F; Shaw, C C; Shepard, S J; Zeremba, L A; Lin, P J

1993-01-01

268

Visual fatigue modeling for stereoscopic video shot based on camera motion  

NASA Astrophysics Data System (ADS)

As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes to specific objects for static cameras and backgrounds. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue predicting model is presented. Visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be indicated according to the proposed algorithm. Compared with conventional algorithms which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

2014-11-01
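The regression step described above, fitting a fatigue score from per-shot factors against subjective ratings, can be sketched with ordinary least squares; the feature names, the data, and the `predict_fatigue` helper are hypothetical stand-ins, since the paper's actual factor coefficients and weights are not given here.

```python
import numpy as np

# Hypothetical per-shot factors (spatial structure, motion scale,
# comfort-zone violation) and subjective fatigue scores.
features = np.array([
    [0.2, 0.1, 0.0],
    [0.5, 0.4, 0.2],
    [0.8, 0.9, 0.6],
    [0.3, 0.2, 0.1],
    [0.9, 0.7, 0.8],
])
subjective_scores = np.array([1.0, 2.1, 4.0, 1.4, 4.5])

# Add an intercept column and solve the least-squares problem.
X = np.hstack([np.ones((features.shape[0], 1)), features])
coeffs, *_ = np.linalg.lstsq(X, subjective_scores, rcond=None)

def predict_fatigue(shot_features):
    """Predicted fatigue score for a new shot's factor values."""
    return coeffs[0] + shot_features @ coeffs[1:]

print(predict_fatigue(np.array([0.6, 0.5, 0.3])))
```

The fitted coefficients play the role of the factor weights mentioned in the abstract; a shot with larger motion and comfort-zone violation yields a higher predicted fatigue score.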

269

Fusion of video cameras with laser range scanners- the coastal monitoring system of the future?  

NASA Astrophysics Data System (ADS)

Coastal video monitoring systems have proven to be the most efficient way to follow the multiple scales of coastal hydro- and morpho-dynamic processes, and have resulted in important scientific contributions during the past 3 decades. The present contribution reports on recent developments in optical monitoring techniques, using sensor arrays which combine digital video cameras with laser range scanners; an approach which can improve the performance of several field and laboratory applications for nearshore measurements. Extensive testing during large-scale experiments, simulating highly erosive storm events and the consecutive post-storm recovery, has shown that the hybrid approach can reduce geo-rectification errors by an order of magnitude and, in several cases, can facilitate the extraction of quantitative information from coastal imagery. The system provided wave-by-wave, water and beach surface elevation measurements in the swash zone, and has great potential for several other applications, such as detailed monitoring of wave breaking and other complex, three-dimensional wave propagation processes, as well as of complex morphologies without many of the artefacts of monoscopic video systems. Finally, apart from the stationary laboratory version, it has been successfully implemented on a mobile platform, suitable for field application and capable of monitoring coastal areas several kilometres in extent.

Vousdoukas, Michalis

2014-05-01

270

Dual charge-coupled device (CCD) astronomical spectrometer and direct imaging camera. II - Data handling and control systems  

NASA Astrophysics Data System (ADS)

The data collection system for the MASCOT (MIT Astronomical Spectrometer/Camera for Optical Telescopes) is described. The system relies on an RCA 1802 microprocessor-based controller, which serves to collect and format data, to present data to a scan converter, and to operate a device communication bus. A NOVA minicomputer is used to record and recall frame images and to perform refined image processing. The RCA 1802 also provides instrument mode control for the MASCOT. Commands are issued using STOIC, a FORTH-like language. Sufficient flexibility has been provided so that a variety of CCDs can be accommodated.

Dewey, D.; Ricker, G. R.

271

STEM strain analysis at sub-nanometre scale using millisecond frames from a direct electron read-out CCD camera  

NASA Astrophysics Data System (ADS)

We report on strain analysis by nano-beam electron diffraction with a spatial resolution of 0.5 nm and a strain precision in the 4-7×10^-4 range. Series of up to 160,000 CBED patterns have been acquired in STEM mode with a semi-convergence angle of the incident probe of 2.6 mrad, which enhances the spatial resolution by a factor of 5 compared to nearly parallel illumination. Firstly, we summarise three different algorithms to detect CBED disc positions accurately: selective edge detection and circle fitting, radial gradient maximisation, and cross-correlation with masks. They yield equivalent strain profiles in the growth direction for a stack of 5 InxGa1-xNyAs1-y/GaAs layers with tensile and compressive strain. Secondly, we use a direct electron read-out pnCCD detector with ultrafast readout hardware and a quantum efficiency close to 1, both to show that the same strain profiles are obtained at 200 times higher readout rates of 1 kHz and to enhance strain precision to 3.5×10^-4 by recording the weak 008 disc.

Müller, K.; Ryll, H.; Ordavo, I.; Schowalter, M.; Zweck, J.; Soltau, H.; Ihle, S.; Strüder, L.; Volz, K.; Potapov, P.; Rosenauer, A.

2013-11-01
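One of the three disc-detection approaches named in the abstract, cross-correlation with masks, can be sketched as follows. The synthetic disc images and the `locate_disc` helper are illustrative assumptions, not the authors' implementation; the idea is simply that the peak of the cross-correlation between a diffraction frame and a disc-shaped template gives the disc displacement.

```python
import numpy as np

def disc_mask(shape, center, radius):
    """Binary image of a filled disc, standing in for a CBED disc."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((yy - center[0])**2 + (xx - center[1])**2 <= radius**2).astype(float)

def locate_disc(image, template):
    """Peak of the circular cross-correlation, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic frame: one disc shifted by (3, -2) pixels from the template
# position, plus weak noise. The shift is recovered modulo the image size.
template = disc_mask((64, 64), (32, 32), 6)
frame = disc_mask((64, 64), (35, 30), 6)
frame += 0.05 * np.random.default_rng(1).normal(size=(64, 64))
dy, dx = locate_disc(frame, template)
print(dy, dx)  # (3, 62): a shift of (+3, -2) wrapped into the 64x64 grid
```

Sub-pixel refinement of the correlation peak (e.g. by fitting a paraboloid around the maximum) would be needed to reach the strain precision quoted in the abstract.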

272

Identifying predators and fates of grassland passerine nests using miniature video cameras  

USGS Publications Warehouse

Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests with cameras deployed early in the nesting cycle. Cameras recorded continuously for more than 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in types of sign left by different predator species, and variability of sign within species, suggests that evidence at nests is unreliable for identifying predators of grassland passerines.

Pietz, P.J.; Granfors, D.A.

2000-01-01

273

Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera  

SciTech Connect

Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs.

Moss, K.J.

1990-09-01

274

Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera  

NASA Astrophysics Data System (ADS)

A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout design synthesizes refocusing slices directly from micro images by omitting the process for the commonly used sub-aperture extraction. Therefore, intellectual property cores, containing switch controlled Finite Impulse Response (FIR) filters, are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. Enabling the hardware design to work economically, the FIR filters are composed of stored product as well as upsampling and interpolation techniques in order to achieve an ideal relation between image resolution, delay time, power consumption and the demand of logic gates. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.

Hahne, Christopher; Aggoun, Amar

2014-03-01

275

CCD TV focal plane guider development and comparison to SIRTF applications  

NASA Technical Reports Server (NTRS)

It is expected that the SIRTF payload will use a CCD TV focal plane fine guidance sensor to provide acquisition of sources and tracking stability of the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory, designed to characterize present CCD autoguiding technology and relate it to SIRTF applications, are presented. Two different design types of CCD cameras were constructed using virtual phase and buried channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon, and Mt. Hamilton telescopes. A video image processing system was also constructed in order to characterize the performance of the autoguider and CCD cameras.

Rank, David M.

1989-01-01

276

A versatile digital video engine for safeguards and security applications  

SciTech Connect

The capture and storage of video images have been major engineering challenges for safeguards and security applications since the video camera provided a method to observe remote operations. The problems of designing reliable video cameras were solved in the early 1980s with the introduction of the CCD (charge-coupled device) camera. The first CCD cameras cost in the thousands of dollars but have now been replaced by cameras costing in the hundreds. The remaining problem of storing and viewing video images in both attended and unattended video surveillance systems and remote monitoring systems is being solved by sophisticated digital compression systems. One such system is the PC-104 three-card set, which is literally a "video engine" that can provide power for video storage systems. The use of digital images in surveillance systems makes it possible to develop remote monitoring systems, portable video surveillance units, image review stations, and authenticated camera modules. This paper discusses the video card set and how it can be used in many applications.

Hale, W.R.; Johnson, C.S. [Sandia National Labs., Albuquerque, NM (United States)]; DeKeyser, P. [Fast Forward Video, Irvine, CA (United States)]

1996-08-01

277

Mobile eye tracking as a basis for real-time control of a gaze driven head-mounted video camera  

Microsoft Academic Search

Eye trackers based on video-oculographic (VOG) methods are a convenient means for oculomotor research. This work focused on the development of a VOG device that allows mobile eye tracking. It was especially designed to support a head-mounted gaze driven camera system presented in a companion paper [Wagner et al. 2006] (see Figure 1). The target applications of such a device

Guido Boening; Klaus Bartl; Thomas Dera; Stanislavs Bardins; Erich Schneider; Thomas Brandt

2006-01-01

278

A cooled CCD camera-based protocol provides an effective solution for in vitro monitoring of luciferase.  

PubMed

Luciferase assay has become an increasingly important technique to monitor a wide range of biological processes. However, the mainstay protocols require a luminometer to acquire and process the data, therefore limiting their application to specialized research labs. To overcome this limitation, we have developed an alternative protocol that utilizes a commonly available cooled charge-coupled device (CCCD), instead of a luminometer, for data acquisition and processing. By measuring activities of different luciferases, we characterized their substrate specificity, assay linearity, signal-to-noise levels, and fold-changes via CCCD. Next, we defined the assay parameters that are critical for appropriate use of the CCCD for different luciferases. To demonstrate its usefulness in cultured mammalian cells, we conducted a case study to examine NF-κB gene activation in response to inflammatory signals in human embryonic kidney cells (HEK293 cells). We found that data collected by the CCCD camera were equivalent to those acquired by luminometer, thus validating the assay protocol. In comparison, the CCCD-based protocol is readily amenable to live-cell and high-throughput applications, offering fast simultaneous data acquisition and visual and quantitative data presentation. In conclusion, the CCCD-based protocol provides a useful alternative for monitoring luciferase reporters. The wide availability of CCCD will enable more researchers to use luciferases to monitor and quantify biological processes. PMID:25677617

Afshari, Amirali; Uhde-Stone, Claudia; Lu, Biao

2015-03-13

279

Single-Camera Panoramic-Imaging Systems  

NASA Technical Reports Server (NTRS)

Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled-device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

Lindner, Jeffrey L.; Gilbert, John

2007-01-01

280

A video camera is mounted on the second stage of a Delta II rocket  

NASA Technical Reports Server (NTRS)

At Launch Pad 17-A, Cape Canaveral Air Station, a worker (left) runs a wire through a mounting hole on the second stage of a Boeing Delta II rocket in order to affix an external video camera held by the worker at right. The Delta II will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

1999-01-01

281

A video camera is mounted on the second stage of a Delta II rocket  

NASA Technical Reports Server (NTRS)

At Launch Pad 17-A, Cape Canaveral Air Station, a worker holds the video camera to be mounted on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. His co-worker (right) makes equipment adjustments. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

1999-01-01

282

A video camera is mounted on the second stage of a Delta II rocket  

NASA Technical Reports Server (NTRS)

At Launch Pad 17-A, Cape Canaveral Air Station, workers check the mounting on a video camera on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

1999-01-01

283

Hand contour detection in wearable camera video using an adaptive histogram region of interest  

PubMed Central

Background: Monitoring hand function at home is needed to better evaluate the effectiveness of rehabilitation interventions. Our objective is to develop wearable computer vision systems for hand function monitoring. The specific aim of this study is to develop an algorithm that can identify hand contours in video from a wearable camera that records the user’s point of view, without the need for markers. Methods: The two-step image processing approach for each frame consists of: (1) Detecting a hand in the image, and choosing one seed point that lies within the hand. This step is based on a priori models of skin colour. (2) Identifying the contour of the region containing the seed point. This is accomplished by adaptively determining, for each frame, the region within a colour histogram that corresponds to hand colours, and backprojecting the image using the reduced histogram. Results: In four test videos relevant to activities of daily living, the hand detector classification accuracy was 88.3%. The contour detection results were compared to manually traced contours in 97 test frames, and the median F-score was 0.86. Conclusion: This algorithm will form the basis for a wearable computer-vision system that can monitor and log the interactions of the hand with its environment. PMID:24354542

2013-01-01
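The second step above, backprojecting the image through a histogram reduced to hand colours, can be illustrated with a simplified single-channel sketch. The synthetic frame and the helper names are assumptions, and a real system (as in the paper) would use a 2-D colour histogram and adaptive bin selection per frame.

```python
import numpy as np

def backproject(image, hist, bins=32):
    """Replace each pixel by the histogram value of its colour bin.

    Pixels whose colour falls in a retained bin score high; all others
    score near zero, which segments the seed-coloured region.
    """
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    return hist[idx]

# Synthetic frame: background "hue" ~0.2, a rectangular hand patch of ~0.6.
frame = np.full((40, 40), 0.2)
frame[10:25, 15:30] = 0.6

# Step 1 (assumed done): a seed point known to lie on the hand.
seed = frame[15, 20]

# Step 2: histogram reduced to the bin containing the seed colour.
hist = np.zeros(32)
hist[int(seed * 32)] = 1.0

mask = backproject(frame, hist) > 0.5
print(mask.sum())  # 225 pixels: the 15x15 hand patch
```

Extracting the contour of `mask` (e.g. via a border-following algorithm) would then yield the hand outline that the paper compares against manual traces.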

284

The design and realization of a three-dimensional video system by means of a CCD array  

NASA Astrophysics Data System (ADS)

The design features, principles, and initial tests of a prototype three-dimensional robot vision system based on a laser source and a CCD detector array are described. The use of a laser as a coherent illumination source permits the determination of relief using one emitter, since the location of the source is a known quantity with low distortion. The CCD signal detector array furnishes an acceptable signal/noise ratio and, when wired to an appropriate signal processing system, furnishes real-time data on the return signals, i.e., the characteristic points of an object being scanned. Signal processing involves integration of 29 kB of data per 100 samples, with sampling occurring at a rate of 5 MHz (the CCDs) and yielding an image every 12 msec. Algorithms for filtering errors from the data stream are discussed.

Boizard, J. L.

1985-12-01

285

A unified and efficient framework for court-net sports video analysis using 3D camera modeling  

NASA Astrophysics Data System (ADS)

The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection performance is about 90%.

Han, Jungong; de With, Peter H. N.

2007-01-01

286

Three-dimensional shape recovery using a video camera with a gyro sensor and its error analysis  

NASA Astrophysics Data System (ADS)

Image changes produced by a camera motion are an important source of information on the structure of the environment. 3D-shape-recovery from an image sequence has been intensively studied by many researchers and many methods have been proposed so far. Theoretically, these methods are perfect, but they are very sensitive to noise, so that, in many practical situations, we could not obtain satisfactory results. The difficulty comes from the fact that, in some cases, the discrimination of small rotation around an axis perpendicular to the optical axis and small translation along the tangent to the rotational motion is difficult. In the present paper, in order to improve the accuracy of recovery, we propose a method for recovering the object shape based on a sensor fusion technique. The method uses a video camera and a gyro sensor. The gyro sensor, mounted on the video camera, outputs 3-axis angular velocity. It is used to compensate the optical flow information from the video camera. We selected this sensor because it does not require any setting in the environment, so that we can carry it anywhere we want. We have built an experimental system and obtained fairly good results. We also report a statistical analysis of our method.

Mukai, Toshiharu; Ohnishi, Noboru

2001-04-01

287

Camera Animation  

NSDL National Science Digital Library

A general discussion of the use of cameras in computer animation. This section includes principles of traditional film techniques and suggestions for the use of a camera during an architectural walkthrough. This section includes html pages, images and one video.

288

Peering Into Virtual Space--Camera Shot Selection in the Video Conference Class.  

ERIC Educational Resources Information Center

Focuses on three essential information stations integral to the electronic classroom: the instructor camera (information station #1), the student camera (information station #2), and the copy-stand camera (information station #3). For each, the basic issues, such as camera location, instructional function, learning mode, information quality, and…

Dolhon, James P.

1998-01-01

289

High speed cooled CCD experiments  

SciTech Connect

Experiments were conducted using cooled and intensified CCD cameras. Two different cameras were identically tested using different optical test stimulus variables. Camera gain and dynamic range were measured by varying microchannel plate (MCP) voltages and controlling light flux using neutral density (ND) filters to yield analog digitized units (ADU), which are digitized values of the CCD pixel's analog charge. A xenon strobe (5 µs FWHM, blue light, 430 nm) and a frequency-doubled Nd:YAG laser (10 ns FWHM, green light, 532 nm) were both used as pulsed illumination sources for the cameras. Images were captured on a PC desktop computer system using commercial software. Camera gain and integration time values were adjusted using camera software. Mean values of camera volts versus input flux were also obtained by performing line scans through regions of interest. Experiments and results will be discussed.

Pena, C.R.; Albright, K.L.; Yates, G.J.

1998-12-31

290

Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD  

NASA Astrophysics Data System (ADS)

We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras, and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

2006-02-01

291

Nyquist sampling theorem: understanding the illusion of a spinning wheel captured with a video camera  

NASA Astrophysics Data System (ADS)

Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the sampling time is chosen judiciously, then it is possible to accurately determine the frequency of a signal varying periodically with time. This paper is of educational value as it presents the principles of sampling during data acquisition. The concept of the Nyquist sampling theorem is usually introduced very briefly in the literature, with very few practical examples to grasp its importance during data acquisition. Through a series of carefully chosen examples, we attempt to present data sampling from the elementary conceptual idea and try to lead the reader naturally to the Nyquist sampling theorem so we may more clearly understand why a signal can be interpreted incorrectly during a data acquisition procedure in the case of undersampling.

Lévesque, Luc

2014-11-01
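The spinning-wheel illusion discussed above follows from folding the wheel's true rotation frequency into the Nyquist band of the camera's frame rate. A minimal sketch, with illustrative numbers:

```python
def apparent_frequency(true_freq, sample_rate):
    """Aliased frequency observed when a periodic signal is undersampled.

    Folds the true frequency into the Nyquist band [-fs/2, fs/2); a
    negative result corresponds to the wheel appearing to spin backwards.
    """
    fs = sample_rate
    return ((true_freq + fs / 2) % fs) - fs / 2

# A wheel spinning at 27 rev/s filmed by a 24 fps video camera appears to
# rotate slowly forwards at 3 rev/s; at 21 rev/s it appears to spin backwards.
print(apparent_frequency(27, 24))  # 3.0
print(apparent_frequency(21, 24))  # -3.0
```

Sampling faster than twice the highest frequency present (here, above 54 fps for the 27 rev/s wheel) removes the ambiguity, which is exactly the condition the Nyquist sampling theorem states.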

292

Stochastic process modeling for multiple human tracking using stereo video camera  

NASA Astrophysics Data System (ADS)

Recently, microscopic understanding of individual pedestrian behavior in public space is becoming significant. Observation data from diverse sensors have increased. Meanwhile some simulation models of human behavior have made progress. This paper proposes a method of multiple human tracking under complex situations by integrating the various observation data and the simulation. The key concept is that multiple human tracking can be regarded as stochastic process modeling. A data assimilation technique is employed as the stochastic process modeling. The data assimilation technique consists of observations, forecasting and filtering. For the modeling, a state vector is defined as an ellipsoid and its coordinates, which are human positions and shapes. An observation vector is also defined as observations from a stereo video camera, namely color and range information. Then a system model which represents the dynamics of the state vectors is formulated by using a discrete choice model. The discrete choice model decides the next step of each pedestrian stochastically and deals with interaction between pedestrians. An observation model is also formulated for the filtering step. The likelihood of color is modeled based on color histogram matching, and that of range is calculated by comparing the ellipsoidal model with observed 3D data. The proposed method is applied to the data acquired at the ticket gate of a station and the high performance of the method is confirmed. We compare the results with other models and show the advantage of integrating the behavior model into the tracking method.

Fuse, Takashi; Nakanishi, Wataru

2013-04-01
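The forecasting-and-filtering cycle of the data-assimilation technique described above can be sketched as a minimal 1-D bootstrap particle filter. The constant-velocity motion model, the noise levels, and the observation sequence are illustrative assumptions; the actual system uses ellipsoidal state vectors, a discrete-choice motion model, and colour/range likelihoods.

```python
import numpy as np

rng = np.random.default_rng(42)

def assimilate(particles, observation, motion_noise=0.1, obs_noise=0.5):
    """One forecast/filter cycle of a bootstrap particle filter."""
    # Forecast: propagate each particle with a simple stochastic motion
    # model (constant drift of +1.0 per step, standing in for the paper's
    # discrete-choice pedestrian dynamics).
    particles = particles + 1.0 + rng.normal(0.0, motion_noise, particles.shape)
    # Filter: weight particles by a Gaussian observation likelihood
    # (standing in for the colour/range likelihoods), then resample.
    weights = np.exp(-0.5 * ((particles - observation) / obs_noise) ** 2)
    weights /= weights.sum()
    return rng.choice(particles, size=particles.size, p=weights)

# Track a pedestrian whose observed 1-D position advances one unit per frame.
particles = rng.normal(0.0, 1.0, 500)
for obs in [1.0, 2.0, 3.0, 4.0]:
    particles = assimilate(particles, obs)
print(particles.mean())
```

The resampled particle cloud concentrates around the latest observation while retaining the motion model's prediction, which is the essence of the observations/forecasting/filtering loop in the abstract.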

293

Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment  

NASA Astrophysics Data System (ADS)

Long-term soil erosion studies imply substantial effort, particularly when there is a need to maintain continuous measurements. There are high costs associated with maintaining field equipment and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, reducing their usefulness in many cases. In this work, a system of three video-cameras, a recorder and a transmission modem (3G technology) has been set up at a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of an olive orchard catchment of 6.4 ha. Rainfall is measured with one automatic raingauge that records intensity at one-minute intervals. The discharge is measured by a flume of critical flow depth, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined level, the automatic sampler turns on and fills a bottle at different intervals according to a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video-camera system is to improve the quality of the dataset by i) visual analysis of the measurement conditions of flow into the flume; ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and the throat of the flume. In order to check the values from the ultrasonic sensor, a third camera records the flow level close to a measuring tape. This system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric intensity level. Thus, the video-cameras record an event only when there is enough flow. This simplifies post-processing and reduces the cost of downloading recordings. The preliminary comparison analysis will be presented, as well as the main improvements to the sampling program.

Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

2014-05-01

294

A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment  

SciTech Connect

Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper (E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)), as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system, so that an instant visual record of the shot was available to the machine operator while also facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX, on axis, approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with "particle dumps" at this axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed.

Crawford, E.A. (STI Optronics, 2755 Northup Way, Bellevue, Washington 98004 (United States))

1992-10-01

295

Method for eliminating artifacts in CCD imagers  

DOEpatents

An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation-generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing-radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites (36) and (38).

Turko, Bojan T. (Moraga, CA); Yates, George J. (Santa Fe, NM)

1992-01-01

296

Method for eliminating artifacts in CCD imagers  

DOEpatents

An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal read out and includes a first dump cycle period for transferring radiation-generated charge into the horizontal register while the decaying image on the phosphor being imaged is integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing-radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites. 3 figs.

Turko, B.T.; Yates, G.J.

1992-06-09

297

Lori Losey - The Woman Behind the Video Camera - Duration: 3:36.  

NASA Video Gallery

The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

298

Advanced Video Data-Acquisition System For Flight Research  

NASA Technical Reports Server (NTRS)

Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like the NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

1996-01-01

299

Application: Surveillance Data-Stream Compression ? Need: Continuous monitoring of scene with video camera  

E-print Network

only these "interesting" video frames ... Embedded System Strategy: Model-Based Design ... Captured Video Frames ... System Design & Simulation ... Steps to Target the TI C6416 DSK ... Input video frames ... Captured frames ... Design Verification ... Automating embedded software

Kepner, Jeremy

300

Jellyfish Support High Energy Intake of Leatherback Sea Turtles (Dermochelys coriacea): Video Evidence from Animal-Borne Cameras  

E-print Network

The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long-distance migrations (1000s of km) from nesting beaches to high-latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08–3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83–100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ/d but were as high as 167,797 kJ/d, corresponding to turtles consuming

Susan G. Heaslip; Sara J. Iverson; W. Don Bowen; Michael C. James

301

Determining Camera Gain in Room Temperature Cameras  

SciTech Connect

James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only the raw images are available. However, the equation provided ignores the contribution of dark current. For CCD or CMOS cameras cooled well below room temperature this is not a problem; for room-temperature cameras, however, the technique needs adjustment. This article describes the adjustment made to the equation, and a test of the method.
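A common form of the photon-transfer gain estimate with a dark-frame correction is sketched below. It follows the general idea described above (subtract the dark-current contribution from both the mean-signal and noise-variance terms), but the article's exact adjusted equation may differ, and all names are illustrative.

```python
import numpy as np

def estimate_gain(flat1, flat2, dark1, dark2):
    """Photon-transfer gain estimate (e-/ADU) from two flat-field frames and
    two dark frames taken at the same exposure.

    Differencing each frame pair cancels fixed-pattern noise; subtracting the
    dark-frame statistics removes the dark-current contribution that the
    uncorrected equation ignores."""
    flat1, flat2 = np.asarray(flat1, float), np.asarray(flat2, float)
    dark1, dark2 = np.asarray(dark1, float), np.asarray(dark2, float)
    # Mean signal above dark, summed over the two flats (ADU)
    signal = (flat1.mean() + flat2.mean()) - (dark1.mean() + dark2.mean())
    # Shot-noise variance of the flat difference, minus the dark contribution
    noise_var = (flat1 - flat2).var() - (dark1 - dark2).var()
    return signal / noise_var
```

With simulated Poisson statistics (say 10,000 photoelectrons plus 100 dark electrons per pixel at a true gain of 2 e-/ADU), the estimate recovers the gain to within a few percent.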

Joshua Cogliati

2010-12-01

302

“The Camera Rolls”: Using Third-Party Video in Field Research  

Microsoft Academic Search

This article draws on one citizen’s efforts to document daily life in his neighborhood. The authors describe the potential benefits of third-party video—videos that people who are not social scientists have recorded and preserved—to social science research. Excerpts from a collection of police-citizen interactions illustrate key points likely to confront researchers who use third-party video. The authors address two important

Nikki Jones; Geoffrey Raymond

2012-01-01

303

Hand-gesture extraction and recognition from the video sequence acquired by a dynamic camera using condensation algorithm  

NASA Astrophysics Data System (ADS)

To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from the video sequence acquired by a dynamic camera could be useful for human-to-robot interface systems. Most conventional hand-gesture technologies deal only with still-camera images. This paper proposes a very simple and stable method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC System), which is obtained from the located human face and both hands. Then, we apply the Condensation Algorithm to the extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed method by conducting experiments on 35 kinds of sign-language-based hand gestures.

Luo, Dan; Ohya, Jun

2009-01-01

304

In-situ measurements of alloy oxidation/corrosion/erosion using a video camera and proximity sensor with microcomputer control  

NASA Technical Reports Server (NTRS)

Two noncontacting and nondestructive, remotely controlled methods of measuring the progress of oxidation/corrosion/erosion of metal alloys, exposed to flame test conditions, are described. The external diameter of a sample under test in a flame was measured by a video-camera width-measurement system. An eddy-current proximity probe system, for measurements outside of the flame, was also developed and tested. The two techniques were applied to the measurement of the oxidation of 304 stainless steel at 910 C using a Mach 0.3 flame. The eddy-current probe system yielded a recession rate of 0.41 mils diameter loss per hour, and the video system gave 0.27 mils per hour.

Deadmore, D. L.

1984-01-01

305

CCD imaging systems for DEIMOS  

NASA Astrophysics Data System (ADS)

The DEep Imaging Multi-Object Spectrograph (DEIMOS) images with an 8K x 8K science mosaic composed of eight 2K x 4K MIT/Lincoln Lab (MIT/LL) CCDs. It also incorporates two 1200 x 600 Orbit Semiconductor CCDs for active, closed-loop flexure compensation. The science mosaic CCD controller system reads out all eight science CCDs in 40 seconds while maintaining the low noise floor of the MIT/Lincoln Lab CCDs. The flexure compensation (FC) CCD controller reads out the FC CCDs several times per minute during science mosaic exposures. The science mosaic CCD controller and the FC CCD controller are located on the electronics ring of DEIMOS. Both the MIT/Lincoln Lab CCDs and the Orbit flexure compensation CCDs and their associated cabling and printed circuit boards are housed together in the same detector vessel, which is approximately 10 feet away from the electronics ring. Each CCD controller has a modular hardware design and is based on the San Diego State University (SDSU) Generation 2 (SDSU-2) CCD controller. Provisions have been made to the SDSU-2 video board to accommodate external CCD preamplifiers that are located at the detector vessel. Additional circuitry has been incorporated in the CCD controllers to allow the readback of all clocks and bias voltages for up to eight CCDs, to allow up to 10 temperature monitor and control points of the mosaic, and to allow full-time monitoring of power supplies and proper power supply sequencing. Software control features of the CCD controllers are: software selection between multiple mosaic readout modes, readout speeds, selectable gains, ramped parallel clocks to eliminate spurious charge on the CCDs, constant temperature monitoring and control of each CCD within the mosaic, proper sequencing of the bias voltages of the CCD output MOSFETs, and anti-blooming operation of the science mosaic. We cover both the hardware and software highlights of both of these CCD controller systems as well as their respective performance.

Wright, Christopher A.; Kibrick, Robert I.; Alcott, Barry; Gilmore, David K.; Pfister, Terry; Cowley, David J.

2003-03-01

306

Videos  

Cancer.gov

Video: Louis Staudt, M.D., Ph.D., Director of the National Cancer Institute's Center for Cancer Genomics, discusses the future of genomics research.

307

Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments  

ERIC Educational Resources Information Center

In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

Schultz, Patrick L.; Quinn, Andrew S.

2014-01-01

308

Video-Based Multi-Camera Automated Surveillance of High Value Assets in Nuclear Facilities C.-H. Chen, Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi  

E-print Network

Video-Based Multi-Camera Automated Surveillance of High Value Assets in Nuclear Facilities ... for a multi-camera surveillance system that automatically detects, tracks, and records security violations ... its gaze to the object of interest. In a surveillance system with multiple dual-camera sets, camera

Abidi, Mongi A.

309

A Solid-State, Simultaneous Wide Angle - Detailed View Video Surveillance Camera  

Microsoft Academic Search

We have developed a simultaneously wide-angle and detailed-view surveillance camera. For the purpose of surveillance, detailed views of suspicious objects are needed. Conventional motorized zoom cameras, however, are fragile and provide only a small region of interest at a time. We propose a system which is capable of obtaining wide-range views as well as detailed views of multiple regions of interest

Ryutaro Oi; Marcus A. Magnor; Kiyoharu Aizawa

2003-01-01

310

Lights, Camera: Learning! Findings from studies of video in formal and informal science education  

NASA Astrophysics Data System (ADS)

As part of the panel, media researcher Jennifer Borland will highlight findings from a variety of studies of videos across the spectrum of formal to informal learning, including schools, museums, and viewers' homes. In her presentation, Borland will assert that the viewing context matters a great deal, but there are some general take-aways that can be extrapolated to the use of educational video in a variety of settings. Borland has served as an evaluator on several video-related projects funded by NASA and the National Science Foundation, including: data visualization videos and space shows developed by the American Museum of Natural History, DragonflyTV, Earth: The Operators' Manual, The Music Instinct, and Time Team America.

Borland, J.

2013-12-01

311

Laboratory Test of CCD #1 in BOAO  

NASA Astrophysics Data System (ADS)

An introduction to the first CCD camera system in Bohyunsan Optical Astronomy Observatory (CCD#1) is presented. The CCD camera adopts the modular dewar design of the IfA (Institute for Astronomy, University of Hawaii) and the SDSU (San Diego State University) general-purpose CCD controller. The user interface is based on the IfA design of an easy-to-use GUI program running on a NeXT workstation. The characteristics of CCD#1, including gain, charge transfer efficiency, rms read-out noise, linearity, and dynamic range, are tested and discussed. CCD#1 shows a read-out noise of 6.4 electrons and a gain of 3.49 electrons per ADU, and optimization resulted in a readout time of about 27 seconds, guaranteeing a charge transfer efficiency of 0.99999 in both directions. The linearity test shows that the non-linear coefficient is 6e-7 in the range of 0 to 30,000 ADU.

Park, Byeong-Gon; Chun, Moo Young; Kim, Seung-Lee

1995-12-01

312

Lights, camera, action…critique? Submit videos to AGU communications workshop  

NASA Astrophysics Data System (ADS)

What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

Viñas, Maria-José

2011-08-01

313

Visual surveys can reveal rather different 'pictures' of fish densities: Comparison of trawl and video camera surveys in the Rockall Bank, NE Atlantic Ocean  

NASA Astrophysics Data System (ADS)

Visual surveys allow non-invasive sampling of organisms in the marine environment, which is of particular importance in deep-sea habitats that are vulnerable to damage caused by destructive sampling devices such as bottom trawls. To enable visual surveying at depths greater than 200 m, we used a deep towed video camera system to survey large areas around the Rockall Bank in the North East Atlantic. The area of seabed sampled was similar to that sampled by a bottom trawl, enabling samples from the towed video camera system to be compared with trawl sampling to quantitatively assess the numerical density of deep-water fish populations. The two survey methods provided different results for certain fish taxa and comparable results for others. Fish that exhibited a detectable avoidance behaviour to the towed video camera system, such as the Chimaeridae, resulted in mean density estimates that were significantly lower (121 fish/km2) than those determined by trawl sampling (839 fish/km2). On the other hand, skates and rays showed no reaction to the lights in the towed body of the camera system, and mean density estimates of these were an order of magnitude higher (64 fish/km2) than from the trawl (5 fish/km2). This is probably because these fish, with their flat body shape lying close to the seabed, can pass under the footrope of the trawl but are easily detected by the benign towed video camera system. For other species, such as Molva sp., estimates of mean density were comparable between the two survey methods (towed camera, 62 fish/km2; trawl, 73 fish/km2). The towed video camera system presented here can be used as an alternative benign method for providing indices of abundance for species such as ling in areas closed to trawling, or for those fish that are poorly monitored by trawl surveying in any area, such as the skates and rays.
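The fish/km2 indices compared above come from a swept-area style calculation (count divided by the area surveyed by the tow). A minimal sketch, with hypothetical function and argument names, since the paper's exact formula is not given here:

```python
def fish_density_per_km2(count, tow_length_m, swath_width_m):
    """Numerical density (fish/km2) over the area swept by a single tow.

    count         -- number of fish counted in the video or trawl catch
    tow_length_m  -- distance covered by the tow, in metres
    swath_width_m -- effective width sampled (camera field of view or
                     trawl wing spread), in metres
    """
    area_km2 = (tow_length_m * swath_width_m) / 1e6  # m^2 -> km^2
    return count / area_km2
```

For example, 10 fish counted over a 2000 m tow with a 50 m effective swath gives 100 fish/km2; differences in the effective swath width are one reason camera and trawl indices can diverge.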

McIntyre, F. D.; Neat, F.; Collie, N.; Stewart, M.; Fernandes, P. G.

2015-01-01

314

Write System of Blu-ray Disc Drive for Video Camera  

Microsoft Academic Search

This paper presents the write system for a Blu-ray disc (BD) camera drive to write with PCAV (partial constant angular velocity) method. The system controls the write waveform of the power and the pulse timing according to the write speed and the temperature. The power margin and the symbol error rate sufficiently show high performance not depending on the write

T. Tsukuda; T. Ishitobi; K. Watanabe; H. Sugiyama

2008-01-01

315

Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases  

E-print Network

Svebor Karaman, Jenny Benois-Pineau, Rémi Mégret, Vladislavs Dovgalecs, Jean ... come from the monitoring of patients with dementia diseases by wearable cameras. We define a structural ... of interest are specified by medical researchers in the context of studies of dementia and in particular

Boyer, Edmond

316

Circuit design of an EMCCD camera  

NASA Astrophysics Data System (ADS)

EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks of the CCD are provided by the driving board; the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover, the ADC and the data transfer circuit are also in the control board, and controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock drivers, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

2012-07-01

317

The Advanced Camera for the Hubble Space Telescope  

Microsoft Academic Search

The Advanced Camera for the Hubble Space Telescope has three cameras. The first, the Wide Field Camera, will be a high- throughput, wide field, 4096 X 4096 pixel CCD optical and I-band camera that is half-critically sampled at 500 nm. The second, the High Resolution Camera (HRC), is a 1024 X 1024 pixel CCD camera that is critically sampled at

G. D. Illingworth; Paul D. Feldman; David A. Golimowski; Zlatan Tsvetanov; Christopher J. Burrows; James H. Crocker; Pierre Y. Bely; George F. Hartig; Randy A. Kimble; Michael P. Lesser; Richard L. White; Tom Broadhurst; William B. Sparks; Robert A. Woodruff; Pamela Sullivan; Carolyn A. Krebs; Douglas B. Leviton; William Burmester; Sherri Fike; Rich Johnson; Robert B. Slusher; Paul Volmer

1997-01-01

318

Video-based realtime IMU-camera calibration for robot navigation  

NASA Astrophysics Data System (ADS)

This paper introduces a new method for fast calibration of inertial measurement units (IMU) with rigidly coupled cameras. That is, the relative rotation and translation between the IMU and the camera is estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in real time, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the used trajectory online, increasing the quality and minimizing the time effort for calibration. Except for a marker pattern, used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time. Depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup. By this, deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependency on the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on the applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.

Petersen, Arne; Koch, Reinhard

2012-06-01

319

"Lights, Camera, Reflection": Using Peer Video to Promote Reflective Dialogue among Student Teachers  

ERIC Educational Resources Information Center

This paper examines the use of peer-videoing in the classroom as a means of promoting reflection among student teachers. Ten pre-service teachers participating in a teacher education programme in a university in the Republic of Ireland and ten pre-service teachers participating in a teacher education programme in a university in the North of…

Harford, Judith; MacRuairc, Gerry; McCartan, Dermot

2010-01-01

320

Lights! Camera! Action! Producing Library Instruction Video Tutorials Using Camtasia Studio  

ERIC Educational Resources Information Center

From Web guides to online tutorials, academic librarians are increasingly experimenting with many different technologies in order to meet the needs of today's growing distance education populations. In this article, the author discusses one librarian's experience using Camtasia Studio to create subject specific video tutorials. Benefits, as well…

Charnigo, Laurie

2009-01-01

321

A simple, inexpensive video camera setup for the study of avian nest activity  

Microsoft Academic Search

Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus),

John B. Sabine; J. Michael Meyers; Sara H. Schweitzer

322

SOME PRELIMINARY RESULTS ON STUDYING THE SHORELINE EVOLUTION OF NHA TRANG BAY USING VIDEO-CAMERA  

E-print Network

SOME PRELIMINARY RESULTS ON STUDYING THE SHORELINE EVOLUTION OF NHA TRANG BAY USING VIDEO-CAMERA ... area. The paper presents preliminary results on studying the shoreline evolution of Nha Trang bay, Khanh ... "Study on hydrodynamic regime and sediment transport in estuarine and coastal zones of Nha Trang bay, Khanh

323

A Video Surveillance System for Monitoring the Endangered Mediterranean Monk Seal ( Monachus monachus )  

Microsoft Academic Search

The components and specifications of a surveillance system developed in a pilot study to monitor Mediterranean monk seals (Monachus monachus) are presented. The system consisted of two B/W CCD cameras, infrared illuminators, a CCTV video web server, and photovoltaic solar panels, and it was operated under harsh outdoor conditions for three and a half months. It enabled the recording

Panagiotis Dendrinos; Eleni Tounta; Alexandros A. Karamanlidis; Anastasios Legakis; Spyros Kotomatas

2007-01-01

324

High-resolution video mosaicing for documents and photos by estimating camera motion  

NASA Astrophysics Data System (ADS)

Recently, digitizing documents and photographs from paper has become important for digital archiving and for personal data transmission over the internet. Although many people wish to digitize paper documents easily, heavy and large image scanners are currently required to obtain high-quality digitization. To enable easy, high-quality digitization of documents and photographs, we propose a novel digitization method that uses video captured by a hand-held camera. In our method, first, 6-DOF (degree-of-freedom) position and posture parameters of the mobile camera are estimated in each frame by automatically tracking image features. Next, re-appearing feature points in the image sequence are detected and stitched to minimize accumulated estimation errors. Finally, all the images are merged into a high-resolution mosaic image using the optimized parameters. Experiments have successfully demonstrated the feasibility of the proposed method. Our prototype system can acquire initial estimates of extrinsic camera parameters in real time while capturing images.
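The core mosaicing step, chaining per-frame motion estimates into the first frame's reference plane, can be sketched with planar homographies. This is a simplification of the paper's full 6-DOF pose estimation and error minimisation, and the function names are illustrative.

```python
import numpy as np

def chain_homographies(pairwise):
    """Given homographies H_i mapping frame i+1 into frame i, return the
    cumulative homographies mapping every frame into frame 0's mosaic plane."""
    H = np.eye(3)
    cumulative = [H.copy()]
    for Hi in pairwise:
        H = H @ np.asarray(Hi, float)
        cumulative.append(H / H[2, 2])   # normalise so H[2, 2] == 1
    return cumulative

def warp_point(H, xy):
    """Apply a homography to a 2-D point in homogeneous coordinates."""
    x, y = xy
    v = np.asarray(H, float) @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])
```

For instance, chaining a (10, 0) pixel translation with a (5, 5) translation maps the origin of frame 2 to (15, 5) on the mosaic plane; loop-closure corrections (the re-appearing feature points above) adjust these chained estimates before the final merge.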

Sato, Tomokazu; Ikeda, Sei; Kanbara, Masayuki; Iketani, Akihiko; Nakajima, Noboru; Yokoya, Naokazu; Yamada, Keiji

2004-05-01

325

A Motionless Camera  

NASA Technical Reports Server (NTRS)

Omniview, a motionless, noiseless, exceptionally versatile camera, was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high-resolution charge-coupled device (CCD), image correction circuitry, and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

1994-01-01

326

Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera  

NASA Astrophysics Data System (ADS)

Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part, and hand blob changing factor. The Condensation algorithm and a PCA-based algorithm were performed to recognize the extracted hand trajectories. In our previous research, the Condensation-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in the hand blob changing factor. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of sign-language-based Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-based method.
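The PCA-plus-matching idea can be sketched as follows: project trajectory feature vectors onto a few principal components, then label a query by its nearest stored trajectory in the reduced space. This is a generic sketch of the approach, not the authors' exact pipeline; the Condensation stage is omitted and all names are illustrative.

```python
import numpy as np

def pca_fit(X, k):
    """Return (mean, components) for a k-dimensional PCA of the rows of X."""
    mu = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, comps):
    """Project rows of X onto the principal components."""
    return (np.asarray(X, float) - mu) @ comps.T

def classify_nearest(train_z, train_labels, z):
    """Label of the stored trajectory nearest to z in PCA space."""
    d = np.linalg.norm(train_z - z, axis=1)
    return train_labels[int(np.argmin(d))]
```

A query trajectory is projected with the same mean and components as the database, so distances in the reduced space are comparable across gestures.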

Dan, Luo; Ohya, Jun

2010-02-01

327

Technologies to develop a video camera with a frame rate higher than 100 Mfps  

NASA Astrophysics Data System (ADS)

A feasibility study is presented for an image sensor capable of image capturing at 100 mega-frames per second (Mfps). The basic structure of the sensor is the backside-illuminated ISIS, the in-situ storage image sensor, with slanted linear CCD memories, which has already achieved 1 Mfps with very high sensitivity. There are many potential technical barriers to further increasing the frame rate up to 100 Mfps, such as the traveling time of electrons within a pixel, resistive-capacitive (RC) delay in driving-voltage transfer, heat generation, and heavy electromagnetic noise. For each of these barriers, a countermeasure is newly proposed and the technical and practical possibility is examined, mainly by simulations. The new technical proposals include a special wafer with n and p double epitaxial layers with smoothly changing doping profiles, a design method with curves, the thunderbolt bus lines, and noiseless digital image capturing by the ISIS with solely sinusoidal driving voltages. It is confirmed that the integration of these technologies is very promising for realizing a practical image sensor with the ultra-high frame rate.

Vo Le, Cuong; Nguyen, H. D.; Dao, V. T. S.; Takehara, K.; Etoh, T. G.; Akino, T.; Nishi, K.; Kitamura, K.; Arai, T.; Maruyama, H.

2008-11-01

328

Social interactions of juvenile brown boobies at sea as observed with animal-borne video cameras.  

PubMed

While social interactions play a crucial role in the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success, improve foraging and flying skills during their developmental stage, or both. PMID:21573196

Yoda, Ken; Murakoshi, Miku; Tsutsui, Kota; Kohno, Hiroyoshi

2011-01-01

329

A simple, inexpensive video camera setup for the study of avian nest activity  

USGS Publications Warehouse

Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

Sabine, J.B.; Meyers, J.M.; Schweitzer, S.H.

2005-01-01

330

Jellyfish Support High Energy Intake of Leatherback Sea Turtles (Dermochelys coriacea): Video Evidence from Animal-Borne Cameras  

PubMed Central

The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08–3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83–100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ d⁻¹ but were as high as 167,797 kJ d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass d⁻¹ (up to 840 kg d⁻¹) or approximately 261 (up to 664) jellyfish d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass d⁻¹, equating to an average energy intake of 3–7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to southward migration. PMID:22438906

Heaslip, Susan G.; Iverson, Sara J.; Bowen, W. Don; James, Michael C.

2012-01-01

331

Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.  

PubMed

The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ • d(-1) but were as high as 167,797 kJ • d(-1) corresponding to turtles consuming an average of 330 kg wet mass • d(-1) (up to 840 kg • d(-1)) or approximately 261 (up to 664) jellyfish • d(-1). Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass • d(-1) equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. 
This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to southward migration. PMID:22438906

Heaslip, Susan G; Iverson, Sara J; Bowen, W Don; James, Michael C

2012-01-01

332

Analysis of Small-Scale Convective Dynamics in a Crown Fire Using Infrared Video Camera Imagery.  

NASA Astrophysics Data System (ADS)

A good physical understanding of the initiation, propagation, and spread of crown fires remains an elusive goal for fire researchers. Although some data exist that describe the fire spread rate and some qualitative aspects of wildfire behavior, none have revealed the very small timescales and spatial scales in the convective processes that may play a key role in determining both the details and the rate of fire spread. Here such a dataset is derived using data from a prescribed burn during the International Crown Fire Modelling Experiment. A gradient-based image flow analysis scheme is presented and applied to a sequence of high-frequency (0.03 s), high-resolution (0.05-0.16 m) radiant temperature images obtained by an Inframetrics ThermaCAM instrument during an intense crown fire to derive wind fields and sensible heat flux. It was found that the motions during the crown fire had energy-containing scales on the order of meters with timescales of fractions of a second. Estimates of maximum vertical heat fluxes ranged between 0.6 and 3 MW m⁻² over the 4.5-min burn, with early time periods showing surprisingly large fluxes of 3 MW m⁻². Statistically determined velocity extremes, using five standard deviations from the mean, suggest that updrafts between 10 and 30 m s⁻¹, downdrafts between 10 and 20 m s⁻¹, and horizontal motions between 5 and 15 m s⁻¹ frequently occurred throughout the fire. The image flow analyses indicated a number of physical mechanisms that contribute to the fire spread rate, such as the enhanced tilting of horizontal vortices leading to counterrotating convective towers with estimated vertical vorticities of 4 to 10 s⁻¹, rotating such that air between the towers blew in the direction of fire spread at canopy height and below. The IR imagery and flow analysis also repeatedly showed regions of thermal saturation (infrared temperature > 750°C) rising through the convection.
These regions represent turbulent bursts or hairpin vortices resulting again from vortex tilting, but in the sense that the tilted vortices come together to form the hairpin shape. As the vortices rise and come closer together, their combined motion results in the vortex tilting forward at a relatively sharp angle, giving a hairpin shape. The development of these hairpin vortices over a range of scales may represent an important mechanism through which convection contributes to the fire spread. A major problem with the IR data analysis is understanding fully what it is that the camera is sampling, in order to interpret the data physically. The results indicate that because of the large amount of after-burning incandescent soot associated with the crown fire, the camera was viewing only a shallow depth into the flame front, and variabilities in the distribution of hot soot particles provide the structures necessary to derive image flow fields. The coherency of the derived horizontal velocities supports this view, because if the IR camera were seeing deep into or through the flame front, then the effect of the ubiquitous vertical rotations almost certainly would result in random and incoherent estimates for the horizontal flow fields. Animations of the analyzed imagery showed a remarkable level of consistency in both horizontal and vertical velocity flow structures from frame to frame in support of this interpretation. The fact that the 2D image represents a distorted surface also must be taken into account when interpreting the data. Suggestions for further field experimentation, software development, and testing are discussed in the conclusions. These suggestions may further understanding of this topic and increase the utility of this type of analysis to wildfire research.
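A gradient-based image flow estimate of the kind used above follows from the brightness-constancy constraint Ix·u + Iy·v + It = 0, solved by least squares over a window. The sketch below is a single-window, Lucas-Kanade-style illustration under that assumption, not a reproduction of the paper's scheme:

```python
import numpy as np

def image_flow(I0, I1):
    """Estimate one (u, v) displacement between two frames by the gradient
    method: solve [Ix Iy] [u v]^T = -It in least squares over the window."""
    Ix = np.gradient(I0, axis=1)   # spatial gradient, x direction
    Iy = np.gradient(I0, axis=0)   # spatial gradient, y direction
    It = I1 - I0                   # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v
```

A dense flow field, as needed for the fire imagery, would apply this solve in small overlapping windows rather than over the whole frame.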

Clark, Terry L.; Radke, Larry; Coen, Janice; Middleton, Don

1999-10-01

333

Micro-rheology Using Multi Speckle DWS with Video Camera. Application to Film Formation, Drying and Rheological Stability  

NASA Astrophysics Data System (ADS)

We present in this work two applications of microrheology: the monitoring of film formation and rheological stability. Microrheology is based on the Diffusing Wave Spectroscopy (DWS) method [1] that relates the particle dynamics to the speckle field dynamics, and further to the visco-elastic moduli G′ and G″ with respect to frequency [2]. Our technology uses the Multi Speckle DWS (MS-DWS) set-up in backscattering with a video camera. For the film formation and drying application, we present a new algorithm called "Adaptive Speckle Imaging Interferometry" (ASII) that extracts a simple kinetics from the speckle field dynamics [3,4]. Different film-forming and drying systems have been investigated (water-based, solvent and solvent-free paints, inks, adhesives, varnishes, …) on various types of substrates and at different thicknesses (a few to hundreds of microns). For rheological stability we show that the robust measurement of speckle correlation using the inter-image distance [3] can bring useful information for industry on viscoelasticity variations over a wide range of frequencies without additional parameters.
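The inter-image distance used here as a speckle correlation measure can be sketched as a normalised mean absolute difference between consecutive frames: fast particle dynamics (a wet film) decorrelate the speckle and give large distances, while an arrested film gives distances near zero. This is a minimal sketch; the exact distance definition of reference [3] may differ.

```python
import numpy as np

def inter_image_distance(f0, f1):
    """Mean absolute pixel difference between two speckle frames,
    normalised by the mean intensity (a simple correlation proxy)."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    return np.abs(f1 - f0).mean() / (0.5 * (f0.mean() + f1.mean()))

def drying_kinetics(frames):
    """Distance of each frame to its predecessor: the decay of this
    curve toward zero tracks the slowing of the film's dynamics."""
    return [inter_image_distance(a, b) for a, b in zip(frames, frames[1:])]
```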

Brunel, Laurent; Dihang, Hélène

2008-07-01

334

Improvement in the light sensitivity of the ultrahigh-speed high-sensitivity CCD with a microlens array  

NASA Astrophysics Data System (ADS)

We are advancing the development of ultrahigh-speed, high-sensitivity CCDs for broadcast use that are capable of capturing smooth slow-motion videos in vivid colors even where lighting is limited, such as at professional baseball games played at night. We have already developed a 300,000 pixel, ultrahigh-speed CCD, and a single CCD color camera that has been used for sports broadcasts and science programs using this CCD. However, there are cases where even higher sensitivity is required, such as when using a telephoto lens during a baseball broadcast or a high-magnification microscope during science programs. This paper provides a summary of our experimental development aimed at further increasing the sensitivity of CCDs using the light-collecting effects of a microlens array.

Hayashida, T.; Yonai, J.; Kitamura, K.; Arai, T.; Kurita, T.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Kitagawa, S.; Hatade, K.; Yamaguchi, T.; Takeuchi, H.; Iida, K.

2008-02-01

335

Advanced camera image data acquisition system for Pi-of-the-Sky  

NASA Astrophysics Data System (ADS)

The paper describes a new generation of high performance, remote control, CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested, and commissioned. The CCD chip was positioned differently than previously, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4 Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable, and modular in comparison with the first-generation solution, due to the application of open software solutions and an FPGA circuit, an Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following advanced electronic circuits: a Cypress CY7C68013a microcontroller (8051 core), an Analog Devices AD9826 image processor, a Realtek RTL8169s Gigabit Ethernet interface, AT45DB642 memory by Atmel, and an ARM926EJ-S type AT91SAM9260 CPU by ARM and Atmel. Software solutions for the camera, its remote control, and image data acquisition are based entirely on open-source platforms, using the ISI and V4L2 image interfaces, the AMBA AHB data bus, and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide angle observations of the sky in the research program Pi-of-the-Sky.

Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

2008-11-01

336

Testing fully depleted CCD  

NASA Astrophysics Data System (ADS)

The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted, and back illuminated with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s; this is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics, and quantum efficiency. The X-ray images were also used for the analysis of charge diffusion at different substrate voltages (VSUB). Regarding the cosmetics, and in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles at the external edges; the origin of this pattern may be related to the assembly process. A second one appears in the dark images and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be the result of a defective coating process.
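The photon transfer curve test mentioned in this record exploits the fact that shot-noise variance grows linearly with signal, so the slope of variance versus mean signal gives the inverse of the conversion gain K (e-/ADU). The sketch below simulates flat-field data for an ideal shot-noise-limited sensor; the gain value and illumination levels are hypothetical, chosen only for illustration.

```python
import numpy as np

def ptc_gain(means, variances):
    """Fit the shot-noise part of a photon transfer curve:
    var(ADU) = mean(ADU) / K, so K = 1 / slope."""
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope

def simulate_ptc(gain=2.0, levels=(500, 1000, 2000, 4000), n=100_000, seed=1):
    """Simulate flat fields at several light levels for a sensor with a
    true conversion gain of `gain` e-/ADU (hypothetical values)."""
    rng = np.random.default_rng(seed)
    means, variances = [], []
    for electrons in levels:
        adu = rng.poisson(electrons, n) / gain  # Poisson in e-, read in ADU
        means.append(adu.mean())
        variances.append(adu.var())
    return np.array(means), np.array(variances)
```

Real PTC measurements also subtract read noise and fixed-pattern noise (e.g. by differencing flat-field pairs), which this sketch omits.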

Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan

2014-08-01

337

Range-gated LADAR coherent imaging using parametric up-conversion of IR and NIR light for imaging with a visible-range fast-shuttered intensified digital CCD camera  

NASA Astrophysics Data System (ADS)

Research is presented on infrared (IR) and near infrared (NIR) sensitive sensor technologies for use in a high speed shuttered/intensified digital video camera for range-gated imaging at eye-safe wavelengths in the region of 1.5 microns. The study is based upon nonlinear crystals used for second harmonic generation (SHG) in optical parametric oscillators (OPOs) for conversion of NIR and IR laser light to visible range light for detection with generic S-20 photocathodes. The intensifiers are stripline geometry 18-mm diameter microchannel plate intensifiers (MCPIIs), designed by Los Alamos National Laboratory and manufactured by Philips Photonics. The MCPIIs are designed for fast optical shuttering; exposures and resolution for the wavelength conversion process are reported. Experimental set-ups for the wavelength shifting and the optical configurations for producing and transporting laser reflectance images are discussed.

Yates, George J.; McDonald, Thomas E., Jr.; Bliss, David E.; Cameron, Stewart M.; Zutavern, Fred J.; Zagarino, Paul A.

2001-04-01

338

Evaluating the Effects of Camera Perspective in Video Modeling for Children with Autism: Point of View versus Scene Modeling  

ERIC Educational Resources Information Center

Video modeling has been used effectively to teach a variety of skills to children with autism. This body of literature is characterized by a variety of procedural variations including the characteristics of the video model (e.g., self vs. other, adult vs. peer). Traditionally, most video models have been filmed using third person perspective…

Cotter, Courtney

2010-01-01

339

The Dark Energy Survey CCD imager design  

SciTech Connect

The Dark Energy Survey is planning to use a 3 sq. deg. camera that houses a {approx} 0.5m diameter focal plane of 62 2kx4k CCDs. The camera vessel including the optical window cell, focal plate, focal plate mounts, cooling system and thermal controls is described. As part of the development of the mechanical and cooling design, a full scale prototype camera vessel has been constructed and is now being used for multi-CCD readout tests. Results from this prototype camera are described.

Cease, H.; DePoy, D.; Diehl, H.T.; Estrada, J.; Flaugher, B.; Guarino, V.; Kuk, K.; Kuhlmann, S.; Schultz, K.; Schmitt, R.L.; Stefanik, A.; /Fermilab /Ohio State U. /Argonne

2008-06-01

340

Vacuum Camera Cooler  

NASA Technical Reports Server (NTRS)

Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

Laugen, Geoffrey A.

2011-01-01

341

Camera Operator and Videographer  

ERIC Educational Resources Information Center

Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

Moore, Pam

2007-01-01

342

SMART CAMERA NETWORKS IN VIRTUAL REALITY Faisal Qureshi  

E-print Network

demonstrate a smart camera network comprising static and active simulated video surveillance cameras. Keywords: Virtual Vision, Computer Vision, Persistent Surveillance, Smart Cameras, Camera Networks, Multi-Camera. Multi-camera systems, or camera networks, are a critical component of any video surveillance

Qureshi, Faisal Z.

343

CCD Photometer Installed on the Telescope - 600 OF the Shamakhy Astrophysical Observatory II. The Technique of Observation and Data Processing of CCD Photometry  

NASA Astrophysics Data System (ADS)

Basic technical characteristics of CCD matrix U-47 made by the Apogee Alta Instruments Inc. are provided. Short description and features of various noises introduced by optical system and CCD camera are presented. The technique of getting calibration frames: bias, dark, flat field and main stages of processing of results CCD photometry are described.

Abdullayev, B. I.; Gulmaliyev, N. I.; Majidova, S. O.; Mikayilov, Kh. M.; Rustamov, B. N.

2009-12-01

344

Are traditional methods of determining nest predators and nest fates reliable? An experiment with Wood Thrushes (Hylocichla mustelina) using miniature video cameras  

USGS Publications Warehouse

We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998-2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.

Williams, G.E.; Wood, P.B.

2002-01-01

345

CCD Double Star Measures: Jack Jones Observatory Report #2  

NASA Astrophysics Data System (ADS)

This paper submits 44 CCD measurements of 41 multiple star systems for inclusion in the WDS. Observations were made during the calendar year 2008. Measurements were made using a CCD camera and an 11" Schmidt-Cassegrain telescope. Brief discussions of pertinent observations are included.

Jones, James L.

2009-10-01

346

CCD Double Star Measures: Jack Jones Memorial Observatory Report #1  

NASA Astrophysics Data System (ADS)

This paper reports on 63 CCD measurements of 58 multiple star systems observed between 2003 and 2007. It also reports on delta mag(V) measurements of selected doubles. Measurements were made using a CCD camera and 8" or 11" SCT. A brief description of methods used is provided.

Jones, James

2008-01-01

347

Statistical Calibration of the CCD Imaging Process  

Microsoft Academic Search

Charge-Coupled Device (CCD) cameras are widely used imaging sensors in computer vision systems. Many photometric algorithms, such as shape from shading, color constancy, and photometric stereo, implicitly assume that the image intensity is proportional to scene radiance. The actual image measurements deviate significantly from this assumption since the transformation from scene radiance to image intensity is non-linear and is

Yanghai Tsin; Visvanathan Ramesh; Takeo Kanade

2001-01-01

348

Camera for landing applications  

Microsoft Academic Search

This paper describes the Enhanced Video System (EVS) camera, built by OPGAL as a subcontractor of Kollsman Inc. The EVS contains a head-up display built by Honeywell, a special design camera for landing applications, and the external window installed on the plane together with the electronic control box built by Kollsman. The special design camera for landing applications is the

Ernest Grimberg

2001-01-01

349

An electronic pan/tilt/zoom camera system  

NASA Technical Reports Server (NTRS)

A small camera system is described for remote viewing applications that employs fisheye optics and electronic processing for providing pan, tilt, zoom, and rotational movements. The fisheye lens is designed to give a complete hemispherical FOV with significant peripheral distortion that is corrected with high-speed electronic circuitry. Flexible control of the viewing requirements is provided by a programmable transformation processor so that pan/tilt/rotation/zoom functions can be accomplished without mechanical movements. Images are presented that were taken with a prototype system using a CCD camera, and 5 frames/sec can be acquired from a 180-deg FOV. The image-transformation device can provide multiple images with different magnifications and pan/tilt/rotation sequences at frame rates compatible with conventional video devices. The system is of interest for object tracking, surveillance, and viewing in constrained environments that would otherwise require the use of several cameras.
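The electronic pan/tilt/zoom principle described above amounts to building a per-pixel lookup from a virtual perspective view back into the fisheye image: cast a ray for each output pixel, rotate it by the pan/tilt angles, and project it through the fisheye model. The sketch below assumes an ideal equidistant fisheye (r = f·θ) covering a 180-deg FOV; all parameters are hypothetical and the original system's transformation processor is not reproduced.

```python
import numpy as np

def fisheye_ptz_map(out_w, out_h, fov_deg, pan_deg, tilt_deg,
                    fish_cx, fish_cy, fish_radius):
    """Lookup table for an electronic pan/tilt view from an equidistant
    fisheye image: returns (map_x, map_y), the fisheye pixel to sample
    for each output pixel."""
    f_out = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    # Ray for each output pixel in the virtual camera frame
    rays = np.stack([xs - out_w / 2, ys - out_h / 2,
                     np.full_like(xs, f_out, dtype=float)], axis=-1)
    # Rotate by tilt (about x) then pan (about y)
    t, p = np.radians(tilt_deg), np.radians(pan_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t), np.cos(t)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    rays = rays @ (Ry @ Rx).T
    # Equidistant projection: angle from optical axis -> fisheye radius
    norm = np.linalg.norm(rays, axis=-1)
    theta = np.arccos(np.clip(rays[..., 2] / norm, -1, 1))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = fish_radius * theta / (np.pi / 2)   # 180-deg FOV fisheye
    return fish_cx + r * np.cos(phi), fish_cy + r * np.sin(phi)
```

Sampling the fisheye image at (map_x, map_y) with interpolation then yields the dewarped view, with no moving parts.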

Zimmermann, Steve; Martin, H. L.

1992-01-01

350

Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic  

NASA Astrophysics Data System (ADS)

Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases. We must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

2008-11-01

351

Video Visuals.  

ERIC Educational Resources Information Center

A good training video has a focused objective, well-written script, clear sound track, and visuals that enhance the communication of the message. Good visuals depend on lighting, camera angles, continuity, and motivation for the scene. (SK)

Schleger, Peter R.

1991-01-01

352

Overview of a hybrid underwater camera system  

NASA Astrophysics Data System (ADS)

The paper provides an overview of a Hybrid Underwater Camera (HUC) system combining sonar with a range-gated laser camera system. The sonar is the BlueView P900-45, operating at 900kHz with a field of view of 45 degrees and ranging capability of 60m. The range-gated laser camera system is based on the third generation LUCIE (Laser Underwater Camera Image Enhancer) sensor originally developed by the Defence Research and Development Canada. LUCIE uses an eye-safe laser generating 1ns pulses at a wavelength of 532nm and at the rate of 25kHz. An intensified CCD camera operates with a gating mechanism synchronized with the laser pulse. The gate opens to let the camera capture photons from a given range of interest and can be set from a minimum delay of 5ns with increments of 200ps. The output of the sensor is a 30Hz video signal. Automatic ranging is achieved using a sonar altimeter. The BlueView sonar and LUCIE sensors are integrated with an underwater computer that controls the sensors parameters and displays the real-time data for the sonar and the laser camera. As an initial step for data integration, graphics overlays representing the laser camera field-of-view along with the gate position and width are overlaid on the sonar display. The HUC system can be manually handled by a diver and can also be controlled from a surface vessel through an umbilical cord. Recent test data obtained from the HUC system operated in a controlled underwater environment will be presented along with measured performance characteristics.
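The range-gating geometry described above follows from the round-trip travel time of the laser pulse: a gate delay t selects targets from range (c/n)·t/2 outward, and the gate width sets the depth of the imaged slab. A one-function sketch, where the refractive index of water is an assumed nominal value:

```python
def gate_range(delay_ns, gate_width_ns, n_water=1.34):
    """Range window imaged by a gated camera: light travels out and back,
    so range = (c/n) * t / 2 for a round-trip delay t."""
    c = 0.299792458          # speed of light in vacuum, m/ns
    v = c / n_water          # speed of light in water
    near = v * delay_ns / 2
    far = v * (delay_ns + gate_width_ns) / 2
    return near, far
```

With the 200 ps delay increments quoted in the abstract, each step moves the near edge of the gate by roughly 2 cm in water.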

Church, Philip; Hou, Weilin; Fournier, Georges; Dalgleish, Fraser; Butler, Derek; Pari, Sergio; Jamieson, Michael; Pike, David

2014-05-01

353

Mapping herbage biomass and nitrogen status in an Italian ryegrass (Lolium multiflorum L.) field using a digital video camera with balloon system  

NASA Astrophysics Data System (ADS)

Improving current precision nutrient management requires practical tools to aid the collection of site specific data. Recent technological developments in commercial digital video cameras and the miniaturization of systems on board low-altitude platforms offer cost effective, real time applications for efficient nutrient management. We tested the potential use of commercial digital video camera imagery acquired by a balloon system for mapping herbage biomass (BM), nitrogen (N) concentration, and herbage mass of N (Nmass) in an Italian ryegrass (Lolium multiflorum L.) meadow. The field measurements were made at the Setouchi Field Science Center, Hiroshima University, Japan on June 5 and 6, 2009. The field consists of two 1.0 ha Italian ryegrass meadows, which are located in an east-facing slope area (230 to 240 m above sea level). Plant samples were obtained at 20 sites in the field. A captive balloon was used for obtaining digital video data from a height of approximately 50 m (approximately 15 cm spatial resolution). We tested several statistical methods, including simple and multivariate regressions, using forage parameters (BM, N, and Nmass) and three visible color bands or color indices based on ratio vegetation index and normalized difference vegetation index. Of the various investigations, a multiple linear regression (MLR) model showed the best cross validated coefficients of determination (R2) and minimum root-mean-squared error (RMSECV) values between observed and predicted herbage BM (R2 = 0.56, RMSECV = 51.54), Nmass (R2 = 0.65, RMSECV = 0.93), and N concentration (R2 = 0.33, RMSECV = 0.24). Applying these MLR models on mosaic images, the spatial distributions of the herbage BM and N status within the Italian ryegrass field were successfully displayed at a high resolution. Such fine-scale maps showed higher values of BM and N status at the bottom area of the slope, with lower values at the top of the slope.
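The cross-validated multiple linear regression scoring described above can be sketched with ordinary least squares and a leave-one-out loop. This is an illustrative sketch: the paper's exact band predictors, sample sizes, and validation protocol are not reproduced, and the synthetic data below are made up.

```python
import numpy as np

def fit_mlr(X, y):
    """Least-squares multiple linear regression with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def loo_rmse(X, y):
    """Leave-one-out cross-validated RMSE, one refit per held-out sample."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        coef = fit_mlr(X[mask], y[mask])
        errs.append(predict(coef, X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Here X would hold the visible band values (or color indices) at the sampling sites and y the measured herbage parameter (BM, N, or Nmass).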

Kawamura, Kensuke; Sakuno, Yuji; Tanaka, Yoshikazu; Lee, Hyo-Jin; Lim, Jihyun; Kurokawa, Yuzo; Watanabe, Nariyasu

2011-01-01

354

LSST Camera Electronics  

NASA Astrophysics Data System (ADS)

The 3.2 Gpixel LSST camera will be read out by means of 189 highly segmented 4K x 4K CCDs. A total of 3024 video channels will be processed by a modular, in-cryostat electronics package based on two custom multichannel analog ASICs now in development. Performance goals of 5 electrons noise, 0.01% electronic crosstalk, and 80 mW power dissipation per channel are targeted. The focal plane is organized as a set of 12K x 12K sub-mosaics ("rafts") with front end electronics housed in an enclosure falling within the footprint of the CCDs making up the raft. CCD surfaces within a raft are required to be coplanar to within 6.5 microns. The assembly of CCDs, baseplate, electronics boards, and cooling components constitutes a self-contained and testable 144 Mpix imager ("raft tower"), and 21 identical raft towers make up the LSST science focal plane. Electronic, mechanical, and thermal prototypes are now undergoing testing and results will be presented at the meeting.

Van Berg, Richard; O'Connor, P.; Oliver, J.; Geary, J.; Radeka, V.

2007-12-01

355

Real-time integral imaging system with handheld light field camera  

NASA Astrophysics Data System (ADS)

Our objective is to construct real-time pickup and display in an integral imaging system with a handheld light field camera. A micro lens array and a high-frame-rate charge-coupled device (CCD) are used to implement the handheld light field camera, and a simple lens array and a liquid crystal (LC) display panel are used to reconstruct three-dimensional (3D) images in real time. The handheld light field camera is implemented by adding the micro lens array onto the CCD sensor. The main lens, mounted on the CCD sensor, is used to capture the scene. To generate the elemental images in real time, a pixel mapping algorithm is applied. With this algorithm, not only is the pseudoscopic problem solved, but the user can also change the depth plane of the displayed 3D images in real time. For real-time, high-quality 3D video generation, a high-resolution, high-frame-rate CCD and LC display panel are used in the proposed system. Experiment and simulation results are presented to verify the proposed system. As a result, 3D images are captured and reconstructed in real time through the integral imaging system.
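One common pixel mapping used to correct the pseudoscopic (depth-reversed) problem is to rotate each elemental image by 180 degrees. The sketch below illustrates that idea only; it is not the authors' real-time algorithm, which also shifts the displayed depth plane:

```python
import numpy as np

def remap_elemental_images(img, lens_px):
    """Rotate every lens_px-by-lens_px elemental image by 180 degrees.
    This is one standard pixel-mapping fix for the pseudoscopic
    (depth-reversed) reconstruction in integral imaging."""
    h, w = img.shape[:2]
    out = img.copy()
    for r in range(0, h - lens_px + 1, lens_px):
        for c in range(0, w - lens_px + 1, lens_px):
            block = img[r:r + lens_px, c:c + lens_px]
            out[r:r + lens_px, c:c + lens_px] = block[::-1, ::-1]
    return out

# Tiny demo: a 4x4 "sensor" behind a 2x2-pixel lens array
demo = np.arange(16).reshape(4, 4)
flipped = remap_elemental_images(demo, lens_px=2)  # each 2x2 block rotated 180 deg
```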

Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Byoungho

2014-11-01

356

Smart Camera Networks in Virtual Reality  

E-print Network

A simulated network of smart, active video surveillance cameras provides extensive coverage of a large virtual public space and performs persistent visual surveillance of individual pedestrians. Index terms: observation; sensor networks; smart cameras; virtual reality; visual surveillance.

Qureshi, Faisal Z.

357

Research of aerial camera focal pane micro-displacement measurement system based on Michelson interferometer  

NASA Astrophysics Data System (ADS)

The correct positioning of the aerial camera focal plane is critical to imaging quality. To adjust focal plane displacements introduced during maintenance, a new micro-displacement measuring system for the aerial camera focal plane, based on a Michelson interferometer, has been designed in this paper. It relies on the phase modulation principle and uses the interference effect to measure the micro-displacement of the focal plane. The system takes a He-Ne laser as the light source and uses the Michelson interference mechanism to produce interference fringes; as the focal plane moves, the fringes shift periodically, and recording the number of fringe periods yields the focal plane displacement. A linear CCD and its driving system serve as the fringe pickup, and a frequency-conversion and differentiating system determines the direction of focal plane motion. After collecting, filtering, amplifying, threshold comparison, and counting, the CCD video signals of the interference fringes are sent to a computer, processed automatically, and the focal plane micro-displacement result is output. The focal plane micro-displacement can thus be measured automatically by this system. Using a linear CCD as the fringe pickup greatly improves the counting accuracy, almost eliminates manual counting error, and improves the measurement accuracy of the system. Experiments demonstrate a focal plane displacement measurement accuracy of 0.2 nm, while laboratory tests and flights show that the focal plane positioning is accurate and can satisfy the imaging requirements of the aerial camera.
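The fringe-counting principle is simple: in a Michelson interferometer, each full fringe period corresponds to half a wavelength of mirror travel. A minimal sketch, assuming the He-Ne line at 632.8 nm:

```python
# He-Ne laser wavelength in nanometres
LAMBDA_NM = 632.8

def displacement_from_fringes(fringe_count, wavelength_nm=LAMBDA_NM):
    """Each full fringe cycle corresponds to a focal-plane movement of
    half a wavelength in a Michelson interferometer."""
    return fringe_count * wavelength_nm / 2.0  # displacement in nm

# 100 counted fringe periods correspond to ~31.64 um of focal-plane travel
d = displacement_from_fringes(100)
```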

Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

2014-09-01

358

A geometric comparison of video camera-captured raster data to vector-parented raster data generated by the X-Y digitizing table  

NASA Technical Reports Server (NTRS)

The relative accuracy of a georeferenced raster data set captured by the Megavision 1024XM system using the Videk Megaplus CCD cameras is compared to a georeferenced raster data set generated from vector lines manually digitized through the ELAS software package on a Summagraphics X-Y digitizer table. The study also investigates the amount of time necessary to fully complete the rasterization of the two data sets, evaluating individual areas such as the time necessary to generate raw data, the time necessary to edit raw data, the time necessary to georeference raw data, and the accuracy of georeferencing against a norm. Preliminary results exhibit a high level of agreement between areas of the vector-parented data and areas of the captured file data where sufficient control points were chosen. Maps of 1:20,000 scale were digitized into raster files of 5 meter resolution per pixel, and overall RMS error was estimated at less than eight meters. Such approaches offer time- and labor-saving advantages as well as increasing the efficiency of project scheduling and enabling the digitization of new types of data.

Swalm, C.; Pelletier, R.; Rickman, D.; Gilmore, K.

1989-01-01

359

Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms  

PubMed Central

Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387

Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg

2013-01-01

360

Tracking white road line by particle filter from the video sequence acquired by the camera attached to a walking human body  

NASA Astrophysics Data System (ADS)

This paper proposes a method for tracking and recognizing the white line marked on the surface of the road from a video sequence acquired by a camera attached to a walking human, toward the realization of an automatic navigation system for the visually handicapped. The proposed method consists of two main modules: (1) a Particle Filter based module for tracking the white line, and (2) a CLAFIC Method based module for classifying whether the tracked object is the white line. In (1), each particle is a rectangle, described by its centroid's coordinates and its orientation. The likelihood of a particle is computed based on the number of white pixels in the rectangle. In (2), in order to obtain the ranges (to be used for the recognition) of the white line's length and width, Principal Component Analysis (PCA) is applied to the covariance matrix obtained from valid sample particles. At each frame, PCA is applied to the covariance matrix constructed from particles with high likelihood, and if the obtained length and width are within the abovementioned ranges, the object is recognized as the white line. Experimental results using real video sequences show the validity of the proposed method.
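A minimal sketch of the tracking module, assuming axis-aligned rectangular particles (the paper's particles also carry an orientation) and a white-pixel-fraction likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary frame: a bright vertical "white line" on a dark road
frame = np.zeros((120, 160), dtype=np.uint8)
frame[:, 70:78] = 1

N = 200
# Each particle: (row, col) centroid of a fixed-size upright rectangle
particles = np.column_stack([
    rng.uniform(10, 110, N), rng.uniform(10, 150, N)
])

def likelihood(p, half_h=8, half_w=6):
    """Fraction of white pixels inside the particle's rectangle."""
    r, c = int(p[0]), int(p[1])
    patch = frame[max(r - half_h, 0):r + half_h, max(c - half_w, 0):c + half_w]
    return patch.mean() if patch.size else 0.0

for _ in range(10):                     # predict / weight / resample loop
    particles += rng.normal(0, 3, particles.shape)        # random-walk motion
    particles[:, 0] = particles[:, 0].clip(0, 119)
    particles[:, 1] = particles[:, 1].clip(0, 159)
    w = np.array([likelihood(p) for p in particles])
    w = w / w.sum() if w.sum() > 0 else np.full(N, 1.0 / N)
    particles = particles[rng.choice(N, N, p=w)]          # resample

estimate = particles.mean(axis=0)       # converges near the line (col ~74)
```

In the real system the classification module (CLAFIC plus PCA on the high-likelihood particles) would then decide whether the tracked region really is the white line.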

Takahashi, Shohei; Ohya, Jun

2012-03-01

361

A CCD offset guider for the KAO  

NASA Technical Reports Server (NTRS)

We describe a focal plane guider for the Kuiper Airborne Observatory which consists of a CCD camera interfaced to an AMIGA personal computer. The camera is made by Photometrics Ltd. and utilizes a Thomson 576 x 384 pixel CCD chip operated in Frame Transfer mode. Custom optics produce a scale of 2.4 arc-sec/pixel, yielding an approx. 12 ft. diameter field of view. Chopped images of stars with HST Guide Star Catalog magnitudes fainter than 14 have been used for guiding at readout rates greater than or equal to 0.5 Hz. The software includes automatic map generation, subframing and zooming, and correction for field rotation when two stars are in the field of view.

Colgan, Sean W. J.; Erickson, Edwin F.; Haynes, Fredric B.; Rank, David M.

1995-01-01

362

Object-Video Streams for Preserving Privacy in Video Surveillance Faisal Z. Qureshi  

E-print Network

and community safety. Video footage captured through surveillance cameras is routinely used to identify suspects preserving video surveillance systems. Camera video is processed to construct object-video streams. ObjectObject-Video Streams for Preserving Privacy in Video Surveillance Faisal Z. Qureshi Faculty

Qureshi, Faisal Z.

363

Video Surveillance Unit  

SciTech Connect

The Video Surveillance Unit (VSU) has been designed to provide a flexible, easy-to-operate video surveillance and recording capability for permanent rack-mounted installations. The system consists of a single rack-mountable chassis and a camera enclosure. The chassis contains two 8 mm video recorders, a color monitor, a system controller board, a video authentication verifier module (VAVM), and a universal power supply. A separate camera housing contains a solid-state camera and a video authentication processor module (VAPM). Through changes in the system firmware, the recorders can be commanded to record at the same time, on alternating time cycles, or sequentially. Each recorder is capable of storing up to 26,000 scenes consisting of 6 to 8 video frames. The firmware can be changed to provide fewer recordings with more frames per scene. The modular video authentication system provides verification of the integrity of the video transmission line between the camera and the recording chassis. 5 figs.

Martinez, R.L.; Johnson, C.S.

1990-01-01

364

Wise Observatory System of Fast CCD Photometry  

NASA Astrophysics Data System (ADS)

We have developed a data acquisition and online reduction system for fast (a few seconds integration time) photometry with the Wise Observatory CCD camera. The method is based on successively collecting frames, each covering only a small fraction of the entire CCD array. If necessary, the observer can place the object star and the comparison star on one and the same row or column of the CCD chip by rotating the image plane, an option available with the Wise telescope. In so doing, the rectangular frame that has to be read out may have a small area of only some 30 columns or rows, even when the two stars are far away from each other. The readout time of the small frame is thus reduced to merely one or two seconds, and photometry with an integration time of 5 s and up becomes possible. The system is a network of three computers: one controls the telescope, the second controls the camera, and the third is used, during the exposure of each frame, for data reduction of the previous one in the observing sequence. The online photometry is performed using standard procedures of the IRAF CCD photometry package. It yields an instrumental magnitude of the object star relative to one or more reference stars present in the frame. The light curve of the object star is displayed with a delay of a single frame relative to the one currently under acquisition.
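The instrumental magnitude relative to a reference star in the same frame follows directly from the flux ratio; a one-line sketch:

```python
import math

def differential_magnitude(flux_obj, flux_ref):
    """Instrumental magnitude of the object relative to a reference star
    in the same frame; atmospheric extinction cancels to first order."""
    return -2.5 * math.log10(flux_obj / flux_ref)

# An object twice as bright as the reference is ~0.753 mag brighter
dm = differential_magnitude(20000.0, 10000.0)
print(round(dm, 3))  # -0.753
```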

Leibowitz, E. M.; Ibbetson, P.; Ofek, E. O.

365

CCD Readout Electronics for Subaru Telescope Instruments  

NASA Astrophysics Data System (ADS)

A general-purpose CCD readout electronics system has been developed for use with the instruments installed on the Subaru telescope. The readout performance of the electronics itself was precisely evaluated. The readout noise measures ~12 μV rms at a readout speed of around 150,000 pixels per second in the 1.0 V full-scale signal range, which is equivalent to 2.4 e⁻ of readout noise at a 200,000 e⁻ full-well capacity with a CCD output sensitivity of 5 μV per electron. The readout noise falls to ~1 μV rms at slower readout speeds, equivalent to 0.2 e⁻. Linearity is maintained over almost the entire signal range, within a 0.1% linearity error. The gain drift of the correlated double sampling (CDS) circuit with temperature change and the remnant signal are also measured. The low-power-consumption readout circuit is designed for mosaic CCD cameras equipped with multiple outputs. Two optical instruments of the Subaru telescope have been using this CCD readout electronics, which achieves CCD-limited readout performance.
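The quoted electron-equivalent noise figures follow from dividing the voltage noise by the CCD output sensitivity; a sketch using the abstract's numbers:

```python
def noise_in_electrons(noise_uv_rms, sensitivity_uv_per_e):
    """Convert readout noise measured in microvolts rms at the CCD output
    into equivalent electrons, given the output sensitivity in uV/e-."""
    return noise_uv_rms / sensitivity_uv_per_e

# Figures from the abstract: 12 uV rms at 5 uV/e- gives 2.4 e- read noise,
# and 1 uV rms gives 0.2 e- at slow readout speeds.
print(noise_in_electrons(12.0, 5.0))  # 2.4
```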

Nakaya, Hidehiko

2012-05-01

366

Closed circuit color video system on the TFTR machine  

SciTech Connect

This paper describes the Closed Circuit Color Video System used on the TFTR machine and several of its systems. The system was installed for DT operations to enhance surveillance and to diagnose problems such as frost, leaks, blowoffs, arcing, and many other day-to-day operational problems. The system consists of 23 digital color cameras with pan, tilt, and zoom capability. Three portable camera carts with optical transceivers are also provided for special cases when it is necessary to have close machine views with electrical safety breaks. Primary controlling and monitoring is provided at the Shift Supervisor Station, which plays an essential role in each day's operations. Secondary control and monitor stations are located in the Laboratory Auditorium, for large-screen projection, and the Visitor Gallery; in addition, these two stations can operate a camera in the TFTR Control Room. The system has two types of switchers: a passive type for switching between control stations and a sequential type for switching between cameras. Modifications were also incorporated in the control drive circuits to double the acceptable neutron fluence. Degradation in camera and control performance due to neutron fluence from DD/DT operations, activation levels of the Test Cell cameras, and the cool-down profile are discussed in this paper. System maintenance, repair, camera replacement frequency, and the possibility of improving camera longevity by selection of parts, shielding, and CCD cooling are also discussed.

Kolinchak, G.; Wertenbaker, J. [Princeton Plasma Physics Lab., NJ (United States)

1995-12-31

367

Lights, Camera…Citizen Science: Assessing the Effectiveness of Smartphone-Based Video Training in Invasive Plant Identification  

PubMed Central

The rapid growth and increasing popularity of smartphone technology is putting sophisticated data-collection tools in the hands of more and more citizens. This has exciting implications for the expanding field of citizen science. With smartphone-based applications (apps), it is now increasingly practical to remotely acquire high quality citizen-submitted data at a fraction of the cost of a traditional study. Yet, one impediment to citizen science projects is the question of how to train participants. The traditional “in-person” training model, while effective, can be cost prohibitive as the spatial scale of a project increases. To explore possible solutions, we analyze three training models: 1) in-person, 2) app-based video, and 3) app-based text/images in the context of invasive plant identification in Massachusetts. Encouragingly, we find that participants who received video training were as successful at invasive plant identification as those trained in-person, while those receiving just text/images were less successful. This finding has implications for a variety of citizen science projects that need alternative methods to effectively train participants when in-person training is impractical. PMID:25372597

Starr, Jared; Schweik, Charles M.; Bush, Nathan; Fletcher, Lena; Finn, Jack; Fish, Jennifer; Bargeron, Charles T.

2014-01-01

368

High-speed multicolour photometry with CMOS cameras  

NASA Astrophysics Data System (ADS)

We present the results of testing the commercial digital camera Nikon D90, with a CMOS sensor, for high-speed photometry with the small telescope Celestron 11'' at the Peak Terskol Observatory. The CMOS sensor allows photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system of CMOS sensors is close to the Johnson BVR system. The results of testing show that one can carry out photometric measurements with CMOS cameras for stars down to V ≈ 14 mag with a precision of 0.01 mag. Stars brighter than V ≈ 10 can be shot at 24 frames per second in the video mode.

Pokhvala, S. M.; Zhilyaev, B. E.; Reshetnyk, V. M.

2012-11-01

369

Advisory Surveillance Cameras Page 1 of 2  

E-print Network

How will recordings be produced and secured, and who will have access to the tape? At what will the camera be aimed? Care must be taken to ensure the cameras' presence doesn't create a false sense of security. Advisory: Use of Cameras/Video Surveillance, May 2008.

Liebling, Michael

370

Surveillance Camera Scheduling: A Virtual Vision Approach  

E-print Network

We present a surveillance system comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras; virtual cameras generate synthetic video feeds that emulate those generated by real surveillance cameras.

Terzopoulos, Demetri

371

Surveillance camera scheduling: a virtual vision approach  

Microsoft Academic Search

We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures and labels high-resolution videos of pedestrians as they move through a designated area. A wide-FOV stationary camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of a single pedestrian at a time. We propose a multi-camera

Faisal Z. Qureshi; Demetri Terzopoulos

2005-01-01

372

Surveillance camera scheduling: a virtual vision approach  

Microsoft Academic Search

We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures and labels high-resolution videos of pedestrians as they move through a designated area. A wide-FOV stationary camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of a single pedestrian at a time. We propose a multi-camera

Faisal Z. Qureshi; Demetri Terzopoulos

2006-01-01

373

Video Object Tracking and Analysis for Computer Assisted Surgery  

E-print Network

The pedicle screw insertion technique has revolutionized the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, there is a risk of prolonged exposure to X-ray radiation, and systems with lower radiation risk are generally quite expensive. The position and orientation of the drill is clinically very important in pedicle screw fixation. In this paper, the position and orientation of the marker on the drill are determined using pattern recognition based methods, with geometric features obtained from an input video sequence taken from a CCD camera. A search is then performed on the video frames, after preprocessing, to obtain the exact position and orientation of the drill. Animated graphics showing the instantaneous position and orientation of the drill are then overlaid on the processed video for real-time drill control and navigation.

Pallath, Nobert Thomas

2012-01-01

374

High-resolution CCD imagers using area-array CCD's for sensing spectral components of an optical line image  

NASA Technical Reports Server (NTRS)

CCD imagers with a novel replicated-line-imager architecture are abutted to form an extended line sensor. The sensor is preceded by optics having a slit aperture and having an optical beam splitter or astigmatic lens for projecting multiple line images through an optical color-discriminating stripe filter to the CCD imagers. A very high resolution camera suitable for use in a satellite, for example, is thus provided. The replicated-line architecture of the imager comprises an area-array CCD, successive rows of which are illuminated by replications of the same line segment, as transmitted by respective color filter stripes. The charge packets formed by accumulation of photoresponsive charge in the area-array CCD are read out row by row. Each successive row of charge packets is then converted from parallel to serial format in a CCD line register and its amplitude sensed to generate a line of output signal.

Elabd, Hammam (Inventor); Kosonocky, Walter F. (Inventor)

1987-01-01

375

Video semaphore decoding for free-space optical communication  

NASA Astrophysics Data System (ADS)

Using real-time image processing we have demonstrated a low bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser diode output. The receiver is a standard CCD camera with a 1-inch aperture lens, and both hardware and software implementations of the video semaphore decoding algorithm. With this system, sensor data can be reliably transmitted 21 km from San Francisco to Berkeley.
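The abstract does not spell out the decoding algorithm; as an illustration only, a simple on-off-keying decoder that thresholds the brightest pixel in each frame might look like this:

```python
import numpy as np

def decode_semaphore(frames, threshold=128):
    """Illustrative on-off-keying decoder (not the authors' exact algorithm):
    the brightest pixel in each frame is compared against a threshold and
    each frame yields one bit."""
    return [1 if frame.max() > threshold else 0 for frame in frames]

# Synthetic 8-frame sequence encoding the bits 10110010
rng = np.random.default_rng(1)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
frames = []
for b in bits:
    f = rng.integers(0, 40, size=(32, 32)).astype(np.uint8)  # dark background
    if b:
        f[16, 16] = 255                                      # laser spot on
    frames.append(f)

print(decode_semaphore(frames) == bits)  # True
```

A real long-range link would also need frame synchronization and spot localization against a moving, cluttered background.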

Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

2001-04-01

376

Event detection intelligent camera development  

Microsoft Academic Search

A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W7-X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for both continuous and triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized

A. Szappanos; G. Kocsis; A. Molnár; J. Sárkozi; S. Zoletnik

2008-01-01

377

Smart Cameras as Embedded Systems  

Microsoft Academic Search

Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has

Wayne Wolf; Burak Ozer; Lv Tiehan

2002-01-01

378

CAMERA MOTION STYLE TRANSFER Christian Kurz1  

E-print Network

Camera Motion Style Transfer. Christian Kurz, Tobias Ritschel, Elmar Eisemann, Thorsten Thormählen. At the core of the approach is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral

379

A Prototype of Autonomous Intelligent Surveillance Cameras  

Microsoft Academic Search

This paper presents an architecture and an FPGA-based prototype of an autonomous intelligent video surveillance camera. The camera takes advantage of the high resolution of CMOS image sensors and enables instant automatic pan, tilt, and zoom adjustment based upon motion activity. It performs automated scene analysis and provides an immediate response to suspicious events by optimizing the camera's capture parameters. The video

Wanqing Li; Igor Kharitonenko; Serge Lichman; Chaminda Weerasinghe

2006-01-01

380

CCD Build a Table  

NSDL National Science Digital Library

The National Center for Education Statistics has launched a new and innovative tool that allows users to create customized tables using data from the Common Core of Data (CCD). As the Department of Education's primary database on elementary and secondary US public schools, the CCD provides national statistical data in three main categories: general descriptive information on schools and school districts; data on students and staff; and fiscal data, which covers revenues and current expenditures. With the Build a Table application, users can now design their own tables of CCD public school data for states, counties, and districts, using data from multiple years. There is a comprehensive tutorial available for first-time users needing step-by-step instructions on the "build a table" process.

381

Megapixel imaging camera for expanded H⁻ beam measurements  

SciTech Connect

A charge-coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large-diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time, and would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm², with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5 × 10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator; the DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog-to-digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via Ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity, with a dynamic range of 2000 and dark noise fluctuations of about ±1/2 ADC count. Full-well capacity is about 5 × 10⁵ electron charges.
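Double-correlated sampling amounts to subtracting two samples of each pixel's output, which cancels the kTC reset noise and offset common to both; a minimal sketch (the polarity convention here is an assumption):

```python
def cds(reset_level, signal_level):
    """Correlated double sampling: subtract the sample taken after charge
    transfer from the sample taken just after reset. The common reset
    (kTC) noise and offset cancel in the difference."""
    return reset_level - signal_level  # assumed polarity: signal pulls output down

# Reset pedestal at 1000 ADU, video level at 750 ADU -> 250 ADU of signal
print(cds(1000, 750))  # 250
```

In the hardware described above the subtraction is done in analog with delay elements, before the 12-bit digitization.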

Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H. [Los Alamos National Lab., NM (United States); McCurnin, T.W.; Sanchez, P.G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States). Los Alamos Operations

1994-02-01

382

Experiments with a novel CCD stellar polarimeter  

NASA Astrophysics Data System (ADS)

Experiments and observations have been undertaken with "bread-board" equipment to explore the potential of a "ring" stellar polarimeter with a CCD camera, rather than the photographic plates used in Treanor's original instrument. By spreading the polarimetric signal over a large number of pixels on the detector, design prediction suggests that the polarimetric accuracy could be Δp ≈ ±0.00001 (±0.001%) per frame or even better. Although the photon accumulations suggest that this was achieved, instabilities in the crude modulator system employed gave frame-to-frame measurements with a greater than expected scatter. Software was developed to reduce the data in a simple way. With a design using more professional components, and perhaps more sophisticated reduction procedures, the full potential of the method should be achievable, with the prospect of high-precision polarimetry of the brighter stars. As an experimental bonus, the CCD chip employed was found to be free from any measurable polarization sensitivity.

Clarke, D.; Neumayer, D.

2002-01-01

383

Integrated Scene And Graphics For Multiple-Camera Viewing  

NASA Technical Reports Server (NTRS)

A multiple-video-camera viewing system for monitoring a telerobot is undergoing development. The system presents the user with a variety of information on a single video display: a picture of the robot from any of five movable cameras, a graphical depiction of the location and orientation of the camera producing the current picture in the middle of the video display, graphical depictions of the locations and orientations of the other cameras, and graphical images of the cameras' fields of view and their three-dimensional relationship to the workspace of the telerobot. The display helps the user control the telerobot quickly and efficiently.

Diner, Daniel B.; Venema, Steven C.

1992-01-01

384

Inspection focus technology of space tridimensional mapping camera based on astigmatic method  

Microsoft Academic Search

The CCD plane of the space tridimensional mapping camera can deviate from the focal plane (including deviation of the CCD plane due to changes in the camera focal length) under space environmental conditions and the vibration and impact of satellite launch, and image resolution will be degraded by the resulting defocus. For the tridimensional mapping camera, principal point position and focal length variation of the

Zhi Wang; Liping Zhang

2010-01-01

385

The DSLR Camera  

NASA Astrophysics Data System (ADS)

Cameras have developed significantly in the past decade; in particular, digital Single-Lens Reflex cameras (DSLRs) have appeared. As a consequence we can buy cameras with higher and higher pixel counts, and mass production has resulted in a great reduction of prices. The CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allows images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary work. Nowadays we can find a digital camera in almost every household, and most of these are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.
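Measuring a double star's separation and position angle from a calibrated image reduces to pixel geometry; a sketch assuming a hypothetical plate scale and a detector oriented with +y pointing north and +x pointing east:

```python
import math

PLATE_SCALE = 0.45  # arcsec per pixel (assumed value; depends on telescope/camera)

def double_star(primary, secondary, scale=PLATE_SCALE):
    """Separation (arcsec) and position angle (degrees, measured from north
    through east), assuming +y points north and +x points east on the chip."""
    dx = secondary[0] - primary[0]
    dy = secondary[1] - primary[1]
    rho = math.hypot(dx, dy) * scale          # angular separation
    theta = math.degrees(math.atan2(dx, dy)) % 360.0  # position angle
    return rho, theta

# Companion 30 px east and 40 px north of the primary:
rho, theta = double_star((100.0, 100.0), (130.0, 140.0))
print(round(rho, 2), round(theta, 1))  # 22.5 36.9
```

In practice the plate scale and the camera's rotation relative to celestial north must first be calibrated, for example on pairs with well-known separations.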

Berkó, Ernő; Argyle, R. W.

386

A MULTI-CAMERA SURVEILLANCE SYSTEM THAT ESTIMATES QUALITY-OF-VIEW MEASUREMENT  

E-print Network

A Multi-Camera Surveillance System that Estimates Quality-of-View Measurement. Changsong Shen, Chris … This paper presents a multi-camera video surveillance system with automatic camera selection based on a new confidence measure. Index terms: Quality-of-View, multi-camera, camera selection.

British Columbia, University of

387

The Digital Interactive Video  

E-print Network

The Digital Interactive Video Exploration and Reflection (Diver) system lets users create virtual pathways through existing video content using a virtual camera and an annotation window for commentary, repurposing, and discussion. With the inexorable growth of low-cost consumer video electronics

Paris-Sud XI, Université de

388

Preliminary study of a dual-CCD-based ratiometric optical mapping system  

Microsoft Academic Search

A customized dual-CCD-based optical mapping system has been designed and constructed for the purpose of applying ratiometry to the study of cardiac arrhythmia mechanisms. The system offers a 490 frames-per-second data acquisition speed, 128 × 128 detector elements, and 12-bit digitization. Each CCD camera is dedicated to imaging a specific region of the di-4-ANEPPS emission spectrum, through
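Ratiometry here means dividing the two CCDs' simultaneous images pixel by pixel; a minimal sketch (illustrative, not the authors' calibration pipeline):

```python
import numpy as np

def ratiometric_map(short_band, long_band, eps=1e-9):
    """Pixelwise ratio of the two emission-band images recorded by the two
    CCDs; intensity artifacts common to both channels (motion, illumination)
    cancel in the ratio."""
    return short_band / np.maximum(long_band, eps)  # eps avoids divide-by-zero

# Synthetic 128x128 frames: doubling the illumination in both bands
# leaves the ratio unchanged.
rng = np.random.default_rng(3)
base = rng.uniform(100, 200, size=(128, 128))
short1, long1 = base * 0.6, base * 1.2
r1 = ratiometric_map(short1, long1)
r2 = ratiometric_map(2 * short1, 2 * long1)
```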

David Y. Tang; Yuhua Li; Jianan Y. Qu; Sunny S. Po; Eugene S. Patterson; Wei R. Chen; Warren M. Jackman; Hong Liu

2003-01-01

389

Evaluation of gadolinium oxysulphide (P43) phosphor used in CCD detectors for electron microscopy  

Microsoft Academic Search

One of the most important components of a CCD camera for electron microscopy is the thin layer of phosphor or scintillator which converts the energy of the incident electron into light which is subsequently recorded by the CCD. We have previously evaluated a number of phosphors: gadolinium oxy-sulphide doped with terbium (P43) and zinc cadmium sulphide doped with silver (P20)

A. R Faruqi; G. C Tyrell

1999-01-01

390

A CCD processing system design implemented by an embedded DSP in the OPM  

NASA Astrophysics Data System (ADS)

A CCD signal acquisition system featuring a flexible and compact CCD (charge-coupled device) driving mode based on a DSP device is designed for a newly developed optical performance monitor (OPM) module. The design has prominent advantages, such as low power consumption, fewer and cheaper attached components, and flexibility. The DSP-based system aims to implement high-accuracy conversion of the successive video output signals of a CCD device for the DSP processor with the fewest additional circuit components.

Peng, Dingmin; Yu, Jiekui; Cheng, Xiaohu; Hu, Qianggao

2004-05-01

391

Tests of commercial colour CMOS cameras for astronomical applications  

NASA Astrophysics Data System (ADS)

We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
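The gain (e^-/ADU) quoted in characterizations like this one is conventionally estimated with the photon-transfer method, which the abstract does not spell out: shot noise gives gain = signal/variance, and differencing two flat frames cancels fixed-pattern noise. A minimal sketch under those standard assumptions (function and parameter names are illustrative, not from the paper):

```python
def gain_from_flat_pair(flat1, flat2, bias_level=0.0):
    """Photon-transfer estimate of gain (e-/ADU) from two flat frames.
    flat1/flat2 are flat lists of pixel values in ADU; bias_level is a
    hypothetical offset to subtract. Variance of the difference frame is
    twice the per-frame shot-noise variance."""
    n = len(flat1)
    mean_signal = (sum(flat1) / n + sum(flat2) / n) / 2.0 - bias_level
    diff = [a - b for a, b in zip(flat1, flat2)]
    mu = sum(diff) / n
    var_diff = sum((d - mu) ** 2 for d in diff) / (n - 1)
    return mean_signal / (var_diff / 2.0)
```

With simulated flats of mean 5000 ADU and shot-noise variance 2500 ADU^2, the estimate comes out near 2 e-/ADU, as expected from gain = mean/variance.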

Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

2013-12-01

392

Freeway Auto-surveillance From Traffic Video  

Microsoft Academic Search

Video-based surveillance systems have been widely used on freeways for traffic monitoring, as the cameras can provide the most intuitive information. In order to manage all the traffic videos automatically, a real-time auto-surveillance system is presented in this paper. The freeway traffic videos taken by a Pan-Tilt-Zoom (PTZ) camera serve as the input, and the system then produces an analysis

Li Bo; Chen Qimei; Guo Fan

2006-01-01

393

Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI  

NASA Astrophysics Data System (ADS)

In recent years, considerable effort has been made to improve the visual recording capabilities of Alvin and ROV Jason. This has culminated in the routine use of digital cameras, both internal and external, on these vehicles, which has greatly expanded the scientific recording capabilities of the NDSF. The UNOLS National Deep Submergence Facility (NDSF) archives maintained at Woods Hole Oceanographic Institution (WHOI) are the repository for the diverse suite of photographic still images (both 35mm and, recently, digital), video imagery, vehicle data and navigation, and near-bottom side-looking sonar data obtained by the facility vehicles. These data comprise a unique set of information from a wide range of seafloor environments over the more than 25 years of NDSF operations in support of science. Included in the holdings are Alvin data plus data from the tethered vehicles: ROV Jason, Argo II, and the DSL-120 side scan sonar. This information conservatively represents an outlay in facilities and science costs well in excess of $100 million. Several archive-related improvement issues have become evident over the past few years. The most critical are: 1. migration and better access to the 35mm Alvin and Jason still images through digitization and proper cataloging with relevant meta-data, 2. assessing Alvin data logger data, migrating data on older media no longer in common use, and properly labeling and evaluating vehicle attitude and navigation data, 3. migrating older Alvin and Jason video data, especially data recorded on Hi-8 tape that is very susceptible to degradation on each replay, to newer digital format media such as DVD, 4. improving the capabilities of the NDSF archives to better serve the increasingly complex needs of the oceanographic community, including researchers involved in focused programs like Ridge2000 and MARGINS, where viable distributed databases in various disciplinary topics will form an important component of the data management structure. 
We report on an archiving effort to transfer video footage currently on Hi-8 and VHS tape to digital media (DVD). At the same time as this is being done, frame grab imagery at reasonable resolution (640x480) at 30 sec. intervals will be compiled and the images will be integrated, as much as possible with vehicle attitude/navigation data and provided to the user community in a web-browser format, such as has already been done for the recent Jason and Alvin frame grabbed imagery. The frame-grabbed images will be tagged with time, thereby permitting integration of vehicle attitude and navigation data once that is available. In order to prototype this system, we plan to utilize data from the East Pacific Rise and Juan de Fuca Ridge which are field areas selected by the community as Ridge2000 Integrated Study Sites. There are over 500 Alvin dives in both these areas and having frame-grabbed, synoptic views of the terrains covered during those dives will be invaluable for scientific and outreach use as part of Ridge2000. We plan to coordinate this activity with the Ridge2000 Data Management Office at LDEO.

Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

2003-12-01

394

CCD high-speed videography system with new concepts and techniques  

NASA Astrophysics Data System (ADS)

A novel CCD high-speed videography system with brand-new concepts and techniques was recently developed at Zhejiang University. The system sends a series of short flash pulses to the moving object. All of the parameters, such as flash number, flash durations, flash intervals, flash intensities and flash colors, can be controlled by the computer as needed. A series of moving-object images frozen by the flash pulses, carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be stored, recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals or written to CD. The highest videography frequency is 30,000 images per second. The shortest image-freezing time is several microseconds. The system has been applied to a wide range of fields: energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.
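The controllable flash parameters the abstract lists (number, duration, interval) amount to a pulse schedule. A trivial sketch of generating one; all timing values are illustrative, not taken from the paper:

```python
def flash_schedule(n, interval_us, duration_us, start_us=0.0):
    """(start, stop) times in microseconds for a train of n flash pulses.
    At the system's quoted 30,000 images/s, interval_us would be ~33.3."""
    return [(start_us + i * interval_us,
             start_us + i * interval_us + duration_us)
            for i in range(n)]
```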

Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

1997-05-01

395

Adaptive Monitoring for Video Surveillance , Wei-Qi Yan2  

E-print Network

to change the video camera parameters. Our framework first detects the moving objects in the surveillance system. The fixed camera is thus adaptively tuned so as to obtain a good quality surveillance video the automatic adjustment of camera parameters stems from the fact that a camera for video surveillance

396

CCD controller requirements for ground-based optical astronomy  

NASA Astrophysics Data System (ADS)

Astronomical CCD controllers are being called upon to operate a wide variety of CCDs in a range of ground-based astronomical applications. These include operation of several CCDs in the same focal plane (mosaics), simultaneous readout from two or four corners of the same CCD (multiple readout), readout of only a small region or number of regions of a single CCD (sub-image or region of interest readout), continuous readout of devices for drift scan observations, differential imaging for low contrast polarimetric or spectroscopic observations and very fast readout of small devices for wavefront sensing in adaptive optics systems. These applications all require that the controller electronics not contribute significantly to the readout noise of the CCD, that the dynamic range of the CCD be fully sampled (except for wavefront sensors), that the CCD be read out as quickly as possible from one or more readout ports, and that considerable flexibility in readout modes (binning, skipping and signal sampling) and device format exist. A further requirement imposed by some institutions is that a single controller design be used for all their CCD instruments to minimize maintenance and development efforts. A controller design recently upgraded to meet these requirements is reviewed. It uses a sequencer built with a programmable DSP to provide user flexibility combined with fast 16-bit A/D converters on a programmable video processor chain to provide either fast or slow readouts.
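Two of the readout modes named above, binning and region-of-interest readout, are easy to model in software. A minimal sketch of what the modes compute, not the controller's actual firmware:

```python
def bin_image(img, bx, by):
    """Software model of on-chip binning: sum bx-by-pixel blocks.
    img is a list of rows; dimensions are assumed divisible by the bin
    factors (a real controller handles remainders in the serial register)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx]
                 for dy in range(by) for dx in range(bx))
             for x in range(0, w, bx)]
            for y in range(0, h, by)]

def roi(img, x0, y0, w, h):
    """Region-of-interest readout: keep only a w-by-h window at (x0, y0)."""
    return [row[x0:x0 + w] for row in img[y0:y0 + h]]
```

Binning trades resolution for signal (one binned pixel carries the summed charge of the block), which is why it is listed alongside skipping as a speed/noise option.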

Leach, Robert W.

1996-03-01

397

Exposing Digital Forgeries in Video by Detecting Double MPEG Compression  

E-print Network

video. In addition, an ever-growing number of video surveillance cameras is giving rise to an enormous, has an estimated 4,000,000 video surveillance cameras, many of which are installed in public, consider a stationary video surveillance camera positioned to survey pedestrians walking along

Farid, Hany

398

Video-based beam position monitoring at CHESS  

NASA Astrophysics Data System (ADS)

CHESS has pioneered the development of X-ray Video Beam Position Monitors (VBPMs). Unlike traditional photoelectron beam position monitors that rely on photoelectrons generated by the fringe edges of the X-ray beam, with VBPMs we collect information from the whole cross-section of the X-ray beam. VBPMs can also give real-time shape/size information. We have developed three types of VBPMs: (1) VBPMs based on helium luminescence from the intense white X-ray beam. In this case the CCD camera views the luminescence from the side. (2) VBPMs based on luminescence of a thin (~50 micron) CVD diamond sheet as the white beam passes through it. The CCD camera is placed outside the beam line vacuum and views the diamond fluorescence through a viewport. (3) Scatter-based VBPMs. In this case the white X-ray beam passes through a thin graphite filter or Be window. The scattered X-rays create an image of the beam's footprint on an X-ray sensitive fluorescent screen using a slit placed outside the beam line vacuum. For all VBPMs we use relatively inexpensive 1.3 megapixel CCD cameras connected via USB to a Windows host for image acquisition and analysis. The VBPM host computers are networked and provide live images of the beam and streams of data about the beam position, profile and intensity to CHESS's signal logging system and to the CHESS operator. The operational use of VBPMs showed a great advantage over the traditional BPMs by providing direct visual input for the CHESS operator. The VBPM precision in most cases is on the order of ~0.1 micron. On the down side, the data acquisition cadence (50-1000 ms) is inferior to the photoelectron-based BPMs. In the future, with the use of more expensive fast cameras, we will be able to create VBPMs working at the few-hundred-Hz scale.
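The core per-frame measurement of a VBPM, locating the beam from a camera image, reduces to an intensity-weighted centroid. A minimal sketch of that step (background subtraction and profile fitting, which a real monitor would also do, are omitted):

```python
def beam_centroid(img):
    """Intensity-weighted centroid (x, y) of a beam image.
    img is a list of rows of pixel intensities."""
    total = xsum = ysum = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            xsum += v * x
            ysum += v * y
    return xsum / total, ysum / total
```

Tracking this centroid frame by frame is what yields the position stream; the second moments of the same image give the real-time shape/size information the abstract mentions.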

Revesz, Peter; Pauling, Alan; Krawczyk, Thomas; Kelly, Kevin J.

2012-10-01

399

Camera for landing applications  

NASA Astrophysics Data System (ADS)

This paper describes the Enhanced Video System (EVS) camera, built by OPGAL as a subcontractor of Kollsman Inc. The EVS contains a head-up display built by Honeywell, a specially designed camera for landing applications, and the external window installed on the plane, together with the electronic control box built by Kollsman. The specially designed camera for landing applications is the subject of this paper. The entire system was installed on a Gulfstream V plane and passed the FAA proof of concept during August and September 2000.

Grimberg, Ernest

2001-08-01

400

Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.  

PubMed Central

Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID: 1610183
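The abstract does not specify which edge detector was combined with the adaptive edge-strength operator; a Sobel gradient magnitude is one standard choice and is sketched here as an assumption, not the paper's exact method:

```python
def sobel_edge_strength(img):
    """Sobel gradient-magnitude edge map of a grayscale image.
    img is a list of rows; border pixels are left at 0. An adaptive
    threshold on this map is one way to delineate dim cell boundaries."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

The paper's key finding, that a global threshold fails for dim sub-micron cells, is consistent with thresholding this map locally (adaptively) rather than with one fixed value.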

Viles, C L; Sieracki, M E

1992-01-01

401

Solid State Television Camera (CID)  

NASA Technical Reports Server (NTRS)

The design, development and testing of a charge injection device (CID) camera using a 244x248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning and AGC.

Steele, D. W.; Green, W. T.

1976-01-01

402

Colorized linear CCD data acquisition system with automatic exposure control  

NASA Astrophysics Data System (ADS)

Colorized linear cameras deliver superb color fidelity at the fastest line rates in industrial inspection. An RGB trilinear sensor eliminates image artifacts by placing a separate row of pixels for each color on a single sensor, and its advanced design minimizes the distance between rows to reduce synchronization artifacts. In this paper, a high-speed colorized linear CCD data acquisition system was designed to take advantage of the μPD3728 linear CCD sensor. The hardware and software design of the FPGA-based system is introduced and the design of the functional modules is described. The whole system is composed of a CCD driver module, a data buffering module, a data processing module and a computer interface module. The image data are transferred to the computer through a Camera Link interface. The system automatically adjusts the exposure time of the linear CCD with a new method: the integration time of the CCD is controlled by the program, adjusted automatically for different illumination intensities under FPGA control, and responds quickly to brightness changes. The data acquisition system also offers programmable gains and offsets for each color. Image quality can be improved after calibration in the FPGA. The design has high expansibility and application value, and can be used in many situations.
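The automatic exposure control described, adjusting integration time to the illumination level, can be sketched as a simple proportional rule. The target level and time limits below are illustrative, not the paper's parameters, and the real system implements the loop in FPGA logic rather than software:

```python
def adjust_exposure(t_int, mean_level, target=2000.0,
                    t_min=0.05, t_max=50.0):
    """One step of a proportional auto-exposure rule for a linear CCD:
    scale the integration time (ms) so the mean signal (ADU) approaches
    the target, clamped to the sensor's usable range."""
    if mean_level <= 0:
        return t_max            # no light detected: open up fully
    t_new = t_int * target / mean_level
    return max(t_min, min(t_max, t_new))
```

Because the new time is proportional to target/measured, a scene twice as bright halves the integration time in a single step, which matches the abstract's claim of quick response to brightness changes.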

Li, Xiaofan; Sui, Xiubao

2014-11-01

403

CCD-based POSTNET bar-code reader  

NASA Astrophysics Data System (ADS)

A CCD-based barcode reader has been developed to read the POSTNET barcode with a bar/space width as small as 13 mils. This barcode is currently being used by the United States Post Office. The system can decode barcodes up to a conveyor speed of 250 ft/min for bars travelling parallel to the CCD sensor array. The system, consisting of a camera and a logic unit, was designed to incorporate various lengths of linear CCD sensor arrays manufactured by EG&G Corporation. The length of the sensor depends upon the required field of view. The camera unit processes the analog signal from the CCD sensor and converts it into a binary signal, which is then transmitted to the logic unit. The logic unit uses a Texas Instruments TMS320C30 processor and does the actual signal processing and decoding of the POSTNET code. This paper describes the hardware and the software developed for this system using a 512 element CCD sensor.
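The POSTNET code itself is public: each digit is five bars, exactly two of them tall, with bar weights 7-4-2-1-0 (a sum of 11 encodes 0), framed by guard bars and protected by a check digit that makes the digit sum a multiple of 10. A minimal decoder over an already-binarized bar sequence (the camera/DSP signal chain of the paper is not reproduced here):

```python
# Weights for the five bars of a POSTNET digit; a weight sum of 11 encodes 0.
WEIGHTS = (7, 4, 2, 1, 0)

def decode_digit(bars):
    """bars: five 0/1 values (1 = tall bar). Exactly two must be tall."""
    if len(bars) != 5 or sum(bars) != 2:
        raise ValueError("invalid POSTNET digit")
    total = sum(w for w, b in zip(WEIGHTS, bars) if b)
    return 0 if total == 11 else total

def decode_postnet(bars):
    """Decode a full sequence: guard bar, digits, check digit, guard bar.
    Verifies the checksum (all digits sum to a multiple of 10)."""
    if bars[0] != 1 or bars[-1] != 1 or (len(bars) - 2) % 5 != 0:
        raise ValueError("bad framing")
    body = bars[1:-1]
    digits = [decode_digit(body[i:i + 5]) for i in range(0, len(body), 5)]
    if sum(digits) % 10 != 0:
        raise ValueError("checksum failure")
    return digits[:-1]          # drop the check digit
```

For example, the bar pattern for "5" is 0,1,0,1,0 (weights 4+1); "0" is 1,1,0,0,0 (7+4 = 11).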

Patel, Mehul; Shreesha, Vasanth; Hecht, Kurt; Cox, Jim; Schultz, John

1995-12-01

404

Classical astrometry longitude and latitude determination by using CCD technique  

NASA Astrophysics Data System (ADS)

At the AOB, it is the zenith-telescope (D=11 cm, F=128.7 cm, denoted BLZ in the list of the Bureau International de l'Heure - BIH), and at Punta Indio (near La Plata) it is the photographic zenith tube (D=20 cm, F=457.7 cm, denoted PIP in the list of the BIH). At the AOB there is a CCD camera ST-8 of the Santa Barbara Instrument Group (SBIG) with 1530×1020 pixels, 9×9 micron pixel size and 13.8×9.2 mm array dimension. We investigated the possibilities for longitude (λ) and latitude (φ) determinations using the ST-8 with BLZ and PIP, and our predicted level of accuracy is a few 0."01 from one CCD processing of zenith stars with the Tycho-2 Catalogue. Also, astro-geodesy has gained new practicability with CCDs (to reach a good accuracy of geoid determination via astro-geodetic λ and φ observations). At the TU Wien there is the CCD MX916 of Starlight Xpress (with 752×580 pixels, 11×12 microns, 8.7×6.5 mm active area). Our predicted level of accuracy for λ and φ measurements is a few 0."1 from one CCD MX916 processing of zenith stars, with small optics (20 cm focal length, because the instrument is mobile rather than fixed) and Tycho-2. A transportable zenith camera with CCD is under development at the TU Wien for astro-geodesy purposes.

Damljanović, G.; de Biasi, M. S.; Gerstbach, G.

405

Video flowmeter  

DOEpatents

A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

Lord, D.E.; Carter, G.W.; Petrini, R.R.

1983-08-02

406

Video flowmeter  

DOEpatents

A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

Lord, David E. (Livermore, CA); Carter, Gary W. (Livermore, CA); Petrini, Richard R. (Livermore, CA)

1983-01-01

407

Video flowmeter  

DOEpatents

A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

Lord, D.E.; Carter, G.W.; Petrini, R.R.

1981-06-10

408

CCD technique for longitude/latitude astronomy  

NASA Astrophysics Data System (ADS)

We report on CCD (charge-coupled device) experiments with instruments of astrometry and geodesy for longitude and latitude determinations. At the Technical University of Vienna (TU Vienna), a mobile zenith camera "G1" was developed, based on the CCD MX916 (Starlight Xpress) and an F=20 cm photo optic. With the Hipparcos/Tycho Catalogue, the first results show accuracy up to 0."5 for latitude/longitude. The PC-guided observations can be completed within 10 minutes. The camera G1 (near 4 kg) is used for astrogeodesy (geoid, Earth's crust, etc.). At the Belgrade Astronomical Observatory (AOB), the accuracy of the (mean value of) latitude/longitude determinations can be a few 0."01 using zenith stars, the Tycho-2 Catalogue and an ST-8 of SBIG (Santa Barbara Instrument Group) with the zenith-telescope BLZ (D=11 cm, F=128.7 cm). The same equipment with the PIP instrument (D=20 cm and F=457.7 cm, Punta Indio PZT, near La Plata) yields slightly better accuracy than BLZ. Both instruments, BLZ and PIP, were in the list of the Bureau International de l'Heure - BIH. The mentioned instruments offer good possibilities for semi- or fully automatic observations.

Damljanović, G.; Gerstbach, G.; de Biasi, M. S.; Pejović, N.

2003-10-01

409

BLAST Autonomous Daytime Star Cameras  

E-print Network

We have developed two redundant daytime star cameras to provide the fine pointing solution for the balloon-borne submillimeter telescope, BLAST. The cameras are capable of providing a reconstructed pointing solution with high absolute accuracy under daytime float conditions. Each camera combines a 1 megapixel CCD with a 200 mm f/2 lens to image a 2 degree x 2.5 degree field of the sky. The instruments are autonomous. An internal computer controls the temperature, adjusts the focus, and determines a real-time pointing solution at 1 Hz. The mechanical details and flight performance of these instruments are presented.

Marie Rex; Edward Chapin; Mark J. Devlin; Joshua Gundersen; Jeff Klein; Enzo Pascale; Donald Wiebe

2006-05-01

410

BLAST Autonomous Daytime Star Cameras  

E-print Network

We have developed two redundant daytime star cameras to provide the fine pointing solution for the balloon-borne submillimeter telescope, BLAST. The cameras are capable of providing a reconstructed pointing solution with high absolute accuracy under daytime float conditions. Each camera combines a 1 megapixel CCD with a 200 mm f/2 lens to image a 2 degree x 2.5 degree field of the sky. The instruments are autonomous. An internal computer controls the temperature, adjusts the focus, and determines a real-time pointing solution at 1 Hz. The mechanical details and flight performance of these instruments are presented.

Rex, M; Devlin, M J; Gundersen, J; Klein, J; Pascale, E; Wiebe, D; Rex, Marie; Chapin, Edward; Devlin, Mark J.; Gundersen, Joshua; Klein, Jeff; Pascale, Enzo; Wiebe, Donald

2006-01-01

411

Electronic Still Camera  

NASA Technical Reports Server (NTRS)

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

Holland, S. Douglas (inventor)

1992-01-01

412

Electronic still camera  

NASA Astrophysics Data System (ADS)

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

Holland, S. Douglas

1992-09-01

413

Miniaturized thermal snapshot camera  

NASA Astrophysics Data System (ADS)

This paper reports on the development of a new class of thermal cameras. Known as the FLAsh STabilized (FLAST) thermal imaging camera system, these cameras are the first able to capture snapshot thermal images. Results from testing of the prototype unit are presented, along with the status of the design of a more efficient, miniaturized version for production. The camera is highly programmable with respect to image capture method, shot sequence, and shot quantity. To achieve the ability to operate in snapshot mode, the FLAST camera is designed to function without cooling or other thermal regulation. In addition, the camera can operate over extended periods without re-calibration; thus it does not require a shutter, chopper or user-inserted imager-blocking system. The camera is capable of operating for weeks on standard AA batteries. The initial camera configuration provides an image resolution of 320 x 240 and is able to turn on and capture an image within approximately 1/4 second. The FLAST camera operates autonomously to collect, catalog and store over 500 images. Any interface and relay system capable of accepting video-formatted input can serve as the image download transmission system.

Hornback, William B.; Payson, Ellwood; Linsacum, Deron L.; Ward, Kenneth; Kennedy, John; Myers, Leo; Cuadra, Dean; Li, Mark

2003-01-01

414

A semi-automatic approach to home video editing  

Microsoft Academic Search

Hitchcock is a system that allows users to easily create custom videos from raw video shot with a standard video camera. In contrast to other video editing systems, Hitchcock uses automatic analysis to determine the suitability of portions of the raw video. Unsuitable video typically has fast or erratic camera motion. Hitchcock first analyzes video to identify

Andreas Girgensohn; John S. Boreczky; Patrick Chiu; John Doherty; Jonathan Foote; Gene Golovchinsky; Shingo Uchihashi; Lynn Wilcox

2000-01-01

415

MEASURING COMPLETE GROUND-TRUTH DATA AND ERROR ESTIMATES FOR REAL VIDEO SEQUENCES, FOR PERFORMANCE EVALUATION OF TRACKING, CAMERA POSE AND MOTION ESTIMATION ALGORITHMS  

Microsoft Academic Search

Fundamental tasks in computer vision include determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. Predominantly, these remain un-validated since the ground-truth camera positions and orientations at each frame in

R. Stolkin; A. Greig; J. Gilby

416

An advanced CCD emulator with 32MB image memory  

NASA Astrophysics Data System (ADS)

As part of the LSST sensor development program we have developed an advanced CCD emulator for testing new multichannel readout electronics. The emulator, based on an Altera Stratix II FPGA for timing and control, produces 4 channels of simulated video waveforms in response to an appropriate sequence of horizontal and vertical clocks. It features 40MHz, 16-bit DACs for reset and video generation, 32MB of image memory for storage of arbitrary grayscale bitmaps, and provision to simulate reset and clock feedthrough ("glitches") on the video channels. Clock inputs are qualified for proper sequences and levels before video output is generated. Binning, region of interest, and reverse clock sequences are correctly recognized and appropriate video output will be produced. Clock transitions are timestamped and can be played back to a control PC. A simplified user interface is provided via a daughter card having an ARM M3 Cortex microprocessor and miniature color LCD display and joystick. The user can select video modes from stored bitmap images, or flat, gradient, bar, chirp, or checkerboard test patterns; set clock thresholds and video output levels; and set row/column formats for image outputs. Multiple emulators can be operated in parallel to simulate complex CCDs or CCD arrays.
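Test patterns like those the emulator serves (flat, gradient, bar, chirp, checkerboard) are straightforward to generate. A minimal sketch of two of them; the 16-bit levels match the emulator's 16-bit DACs, but the functions and defaults are illustrative:

```python
def checkerboard(w, h, cell, lo=0, hi=65535):
    """16-bit checkerboard test frame with square cells of side `cell`."""
    return [[hi if ((x // cell + y // cell) % 2) else lo
             for x in range(w)] for y in range(h)]

def gradient(w, h, lo=0, hi=65535):
    """Horizontal ramp from lo to hi across each row."""
    return [[lo + (hi - lo) * x // (w - 1) for x in range(w)]
            for y in range(h)]
```

Such deterministic patterns let the readout electronics under test be checked pixel-for-pixel against the known input, which is the point of driving them from stored bitmaps or generators.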

O'Connor, P.; Fried, J.; Kotov, I.

2012-07-01

417

Make a Pinhole Camera  

ERIC Educational Resources Information Center

On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

Fisher, Diane K.; Novati, Alexander

2009-01-01

418

Camera Obscura  

NSDL National Science Digital Library

Before photography was invented there was the camera obscura, useful for studying the sun, as an aid to artists, and for general entertainment. What is a camera obscura and how does it work? Camera = Latin for room; Obscura = Latin for dark. But what is a Camera Obscura? The Magic Mirror of Life. Read the first three paragraphs of this article; under the portion Early Observations and Use in Astronomy you will find the answers to the ...

Mr. Engelman

2008-10-28

419

Using a digital camera to study motion  

Microsoft Academic Search

A digital camera can easily be used to make a video record of a range of motions and interactions of objects: SHM, free-fall and collisions, both elastic and inelastic. The video record allows measurements of displacement and time, and hence calculation of velocities, and practice with the standard formulas for motions and collisions. The camera extends the range of…
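The velocity calculation described above reduces to finite differences of per-frame positions; a minimal sketch, where the frame rate and metre units are assumptions:

```python
def velocities(positions, fps=30.0):
    """Estimate speeds (m/s) from per-frame positions (metres) by
    forward differences, given the camera's frame rate."""
    dt = 1.0 / fps  # time between consecutive video frames
    return [(p1 - p0) / dt for p0, p1 in zip(positions, positions[1:])]
```

For uniform motion of 0.1 m per frame at 30 fps this yields 3 m/s per interval; free-fall or collision data would show the expected changing values.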

Andrew J. McNeil; Steven Daniel

420

USB Security Camera Software for Linux  

Microsoft Academic Search

USB security cameras have been developed for the public security field; however, current video surveillance systems are too expensive for widespread use. The paper proposes a new method in which a Linux system provides the software platform, with a USB camera as the video capture device, and network communication is realized using the TCP/IP protocol. The system embeds a web server so users can access resources through a browser…

J. Weerachai; P. Siam; K. Narawith

2011-01-01

421

Surveillance Camera Coordination Through Distributed Scheduling  

Microsoft Academic Search

A challenge to scaling a video surveillance system is the amount of human supervision required for control of the cameras. In this paper we consider the problem of coordinating a network of video cameras for the purpose of identifying people. We pose the problem as a machine scheduling problem where each person is a job that should be scheduled before

Cash J. Costello; I-Jeng Wang

2005-01-01

422

In-camera detection of fabric defects  

Microsoft Academic Search

Industrial inspection cameras for Web processes have a very high output video rate. This output data rate can be reduced by preprocessing the video stream inside the camera in order to only send pixels that have a high probability of containing defect information. Co-occurrence matrices are quite successful in detecting structural defects in fabrics; however, they require high processing power.

Ibrahim Cem Baykal; Graham A. Jullien

2004-01-01

423

Mars Science Laboratory Engineering Cameras  

NASA Technical Reports Server (NTRS)

NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
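The quoted pixel scales can be sanity-checked from the FOV and the 1024-pixel array width with a small-angle estimate; this linear model ignores lens distortion, which is why it slightly underestimates the Navcam figure:

```python
import math

def pixel_scale_mrad(fov_deg, pixels):
    """Approximate per-pixel angular scale (mrad/pixel) for a square FOV,
    assuming a distortion-free linear projection (a simplification; the
    flight values in the abstract include real-optics effects)."""
    return math.radians(fov_deg) / pixels * 1000.0
```

For the Hazcams, 124 degrees over 1024 pixels gives about 2.11 mrad/pixel, matching the quoted 2.1; for the Navcams, 45 degrees gives about 0.77 mrad/pixel versus the quoted 0.82.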

Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

2012-01-01

424

Video imaging of cardiac transmembrane activity  

NASA Astrophysics Data System (ADS)

High resolution movies of transmembrane electrical activity in thin (0.5 mm) slices of sheep epicardial muscle were recorded by optical imaging with voltage-sensitive dyes and a CCD video camera. Activity was monitored at approximately 65,000 picture elements per 2 cm2 tissue for several seconds at a 16 msec sampling rate. Simple image processing operations permitted visualization and analysis of the optical signal, while isochrone maps depicted complex patterns of propagation. Maps of action potential duration and regional intermittent conduction block showed that even these small preparations may exhibit considerable spatial heterogeneity. Self-sustaining reentrant activity in the form of spiral waves was consistently initiated and observed either drifting across the tissue or anchored to small heterogeneities. The current limitations of video optical mapping are a low signal-to-noise ratio and low temporal resolution. The advantages include high spatial resolution and direct correlation of electrical activity with anatomy. Video optical mapping permits the analysis of the electrophysiological properties of any region of the preparation during both regular stimulation and reentrant activation, providing a useful tool for studying cardiac arrhythmias.
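The quoted figures imply the spatial and temporal resolution directly; a back-of-envelope sketch using only the numbers in the abstract:

```python
import math

def mapping_resolution(area_cm2=2.0, n_pixels=65000, sample_ms=16.0):
    """Back-of-envelope pixel pitch (um) and frame rate (Hz) implied by
    the abstract's figures: ~65,000 elements over 2 cm^2 at 16 ms/sample."""
    area_um2 = area_cm2 * 1e8          # 1 cm^2 = 1e8 um^2
    pitch_um = math.sqrt(area_um2 / n_pixels)
    rate_hz = 1000.0 / sample_ms
    return pitch_um, rate_hz
```

This works out to roughly a 55 um pixel pitch at 62.5 frames per second, consistent with the trade-off the abstract notes between high spatial resolution and low temporal resolution.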

Baxter, William T.; Davidenko, Jorge; Cabo, Candido; Jalife, Jose

1994-05-01

425

Video Visualization Gareth Daniel Min Chen  

E-print Network

Large volumes of video are generated by the entertainment industry, security and traffic cameras, and video conferencing systems. In some countries, such as the United Kingdom, it is estimated that on average a citizen is caught on security and traffic cameras 300 times a day. A telling figure in the security industry is the ratio of surveillance cameras to security personnel. Imagine that security…

Grant, P. W.

426

Fully depleted back illuminated CCD  

DOEpatents

A backside illuminated charge coupled device (CCD) is formed of a relatively thick high resistivity photon sensitive silicon substrate, with frontside electronic circuitry, and an optically transparent backside ohmic contact for applying a backside voltage which is at least sufficient to substantially fully deplete the substrate. A greater bias voltage which overdepletes the substrate may also be applied. One way of applying the bias voltage to the substrate is by physically connecting the voltage source to the ohmic contact. An alternate way of applying the bias voltage to the substrate is to physically connect the voltage source to the frontside of the substrate, at a point outside the depletion region. Thus both frontside and backside contacts can be used for backside biasing to fully deplete the substrate. Also, high resistivity gaps around the CCD channels and electrically floating channel stop regions can be provided in the CCD array around the CCD channels. The CCD array forms an imaging sensor useful in astronomy.
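For a uniformly doped substrate, the bias sufficient to fully deplete it follows the standard one-sided-junction formula V = qNd^2 / (2 * eps_Si); a sketch in which the example dopant density is an illustrative value typical of high-resistivity silicon, not a figure from the patent:

```python
def full_depletion_voltage(thickness_um, dopant_cm3):
    """Full-depletion voltage of a uniformly doped silicon substrate,
    V = q * N * d^2 / (2 * eps_Si).  The dopant density passed in the
    example below is an assumption, not a value from the patent."""
    q = 1.602e-19            # elementary charge, C
    eps_si = 1.04e-10        # permittivity of silicon, F/m (11.7 * eps0)
    d = thickness_um * 1e-6  # substrate thickness, m
    n = dopant_cm3 * 1e6     # dopant density, m^-3
    return q * n * d * d / (2.0 * eps_si)
```

For a 250 um thick substrate at an assumed 4e11 cm^-3, this gives on the order of 20 V; applying more than this, as the patent describes, overdepletes the substrate.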

Holland, Stephen Edward (Hercules, CA)

2001-01-01

427

Travelling route of mobile surveillance camera  

Microsoft Academic Search

A video surveillance system is becoming more and more important for investigation and deterrent of crimes, and cameras installed in public space are increasing. However, a number of cameras is required to observe a wide and complex area with cameras installed at fixed positions. In order to efficiently observe a wide and complex area at lower cost, mobile robots have

Yoichi TOMIOKA; Atsushi TAKARA; Hitoshi KITAZAWA

2010-01-01

428

Graph modeling based video event detection  

Microsoft Academic Search

Video processing and analysis have been an interesting field in research and industry. Information detection and retrieval are challenging tasks, especially with the spread of multimedia applications and the increasing number of video acquisition devices such as surveillance cameras and phone cameras. These have produced a large amount of video data which is also diversified and complex. This…

Najib Ben Aoun; Haytham Elghazel; Chokri Ben Amar

2011-01-01

429

Video monitoring system for car seat  

NASA Technical Reports Server (NTRS)

A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

2004-01-01

430

The Dark Energy Camera  

E-print Network

The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arcsec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...
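The pixel size and plate scale together imply the effective focal length (f = pixel size / plate scale in radians); a quick consistency check, not a figure quoted in the text:

```python
import math

def focal_length_m(pixel_um, scale_arcsec_per_px):
    """Effective focal length implied by pixel size and plate scale:
    f = pixel_size / plate_scale(rad).  A consistency check derived from
    the abstract's numbers, not a value it states."""
    scale_rad = math.radians(scale_arcsec_per_px / 3600.0)
    return (pixel_um * 1e-6) / scale_rad
```

With 15 um pixels and 0.263 arcsec/pixel this gives about 11.8 m, a plausible effective focal length for the Blanco prime focus with its corrector.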

Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

2015-01-01

431

Inspection focus technology of space tridimensional mapping camera based on astigmatic method  

NASA Astrophysics Data System (ADS)

The CCD plane of a space tridimensional mapping camera will deviate from the focal plane (including deviation caused by a change in camera focal length) under space-environment conditions and the vibration and impact of launch, and image resolution will be degraded by the resulting defocus. For a tridimensional mapping camera, variation of the principal point position and focal length affects the positioning accuracy of ground targets. The conventional solution is to calibrate the position of the CCD plane against the code of a photoelectric encoder, under vacuum and across the focusing range; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a step motor compensates the defocus of the CCD plane. However, if the camera focal length itself changes under the space environment and launch vibration and impact, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism was put forward: a quadrant detector measures the astigmatism caused by the deviation of the CCD plane, and, by reference to the calibrated relation between CCD-plane position and astigmatism, the deviation vector of the CCD plane can be obtained. This method accounts for all factors causing deviation of the CCD plane. Experimental results show that the focusing resolution of the mapping camera's focusing mechanism based on the astigmatic method can reach 0.25 μm.
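In an astigmatic arrangement the focus-error signal from a quadrant detector is conventionally formed from the diagonal sums; a minimal sketch of the textbook formulation, which may differ in detail from the paper's actual signal processing:

```python
def focus_error(a, b, c, d):
    """Normalized astigmatic focus-error signal from quadrant-detector
    intensities, comparing diagonal pairs (A,C) vs (B,D).  Zero means in
    focus; the sign indicates the direction of focal-plane deviation.
    This is the standard textbook form, not necessarily the paper's."""
    total = a + b + c + d
    if total == 0:
        raise ValueError("no light on detector")
    return ((a + c) - (b + d)) / total
```

Calibrating this signal against known CCD-plane displacements would yield the relation the paper uses to recover the deviation vector.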

Wang, Zhi; Zhang, Liping

2010-10-01

432

CCD imager with photodetector bias introduced via the CCD register  

NASA Technical Reports Server (NTRS)

An infrared charge-coupled-device (IR-CCD) imager uses an array of Schottky-barrier diodes (SBD's) as photosensing elements and uses a charge-coupled-device (CCD) for arranging charge samples supplied in parallel from the array of SBD's into a succession of serially supplied output signal samples. Its sensitivity to infrared (IR) is improved by placing bias charges on the Schottky barrier diodes. Bias charges are transported to the Schottky barrier diodes by a CCD also used for charge sample read-out.

Kosonocky, Walter F. (Inventor)

1986-01-01

433

A HARDWARE PLATFORM FOR AN AUTOMATIC VIDEO TRACKING  

E-print Network

Video tracking can be used in many areas, especially security-related areas such as airports. In this report, we propose a hardware platform for an automatic video tracking system using multiple PTZ cameras rather than fixed-position still cameras…

Abidi, Mongi A.

434

Video Source Identification in Lossy Wireless Networks Shaxun Chen  

E-print Network

With the prevalence of wireless communication, wireless video cameras continue to replace their wired counterparts in the security camera market, in security/surveillance systems, and in tactical networks. However, wirelessly streamed videos usually suffer…

California at Davis, University of

435

Guerrilla Video: A New Protocol for Producing Classroom Video  

ERIC Educational Resources Information Center

Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

Fadde, Peter; Rich, Peter

2010-01-01

436

Video Mosaicking for Inspection of Gas Pipelines  

NASA Technical Reports Server (NTRS)

A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image.
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
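The precomputed lookup table described above can be sketched for an ideal equidistant fisheye (image radius r = f * theta) with the optical axis along the pipe; all parameters here are illustrative stand-ins, not the article's calibration values:

```python
import math

def build_unwarp_lut(out_w, out_h, img_cx, img_cy, f_px, pipe_radius, z_max):
    """Precompute a lookup table mapping unwarped pipe-wall coordinates
    (angle around the pipe, axial distance ahead of the camera) to fisheye
    image pixel positions, assuming an ideal equidistant fisheye model
    (r = f * theta) with the optical axis along the pipe axis."""
    lut = []
    for j in range(out_h):
        # axial distance ahead of the camera for this output row
        z = z_max * (j + 1) / out_h
        theta = math.atan2(pipe_radius, z)      # angle off the optical axis
        r = f_px * theta                        # equidistant projection radius
        row = []
        for i in range(out_w):
            phi = 2.0 * math.pi * i / out_w     # angle around the pipe
            row.append((img_cx + r * math.cos(phi),
                        img_cy + r * math.sin(phi)))
        lut.append(row)
    return lut
```

At run time each unwarped output pixel is filled by sampling the fisheye frame at the stored coordinates, which is what lets the system avoid recomputing the geometric mapping per frame.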

Magruder, Darby; Chien, Chiun-Hong

2005-01-01

437

Storage and compression design of high speed CCD  

NASA Astrophysics Data System (ADS)

In the current field of CCD measurement, large-area, high-resolution CCDs are used to obtain large measurement images, so the speed and capacity of the CCD demand high performance from the downstream storage and processing system. The paper discusses how to use SCSI hard disks to construct the storage system and how to use DSPs and an FPGA to realize image compression. For the storage subsystem, because the CCD is divided into multiplexed outputs, a SCSI array is used in RAID 0 mode. The storage system is composed of a high-speed buffer, DMA controller, control MCU, SCSI protocol controller and SCSI hard disks. For the compression subsystem, according to the requirements of the communication and monitoring system, the output is a fixed-resolution image and an analog PAL signal. The compression method is the JPEG 2000 standard, in which the 9/7 wavelet in lifting form is used. Two DSPs and an FPGA compose the parallel compression system, which consists of an FPGA pre-processing module, DSP compression module, video decoder module, data buffer module and communication module. First, the discrete wavelet transform and quantization are realized in the FPGA. Second, entropy coding and stream adaptation are realized in the DSPs. Finally, the analog PAL signal is output by the video decoder. The data buffer is realized in synchronous dual-port RAM, and the state of the subsystem is transferred to the controller. Subjective and objective evaluation shows that the storage and compression system satisfies the system requirements.
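The RAID 0 arrangement mentioned above stripes data blocks round-robin across the disks so the array's aggregate write bandwidth keeps up with the multiplexed CCD outputs; a minimal sketch of the striping idea, with block size and disk count as assumptions:

```python
def stripe_blocks(data, n_disks, block_size):
    """Distribute a byte stream across n disks round-robin (RAID 0 style),
    as a SCSI array in RAID 0 mode would.  A sketch of the layout only;
    block size and disk count are illustrative assumptions."""
    disks = [bytearray() for _ in range(n_disks)]
    for k in range(0, len(data), block_size):
        # block index modulo disk count selects the target disk
        disks[(k // block_size) % n_disks].extend(data[k:k + block_size])
    return disks
```

Because consecutive blocks land on different disks, writes proceed in parallel, at the cost of no redundancy (RAID 0 offers striping only).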

Cai, Xichang; Zhai, LinPei

2009-05-01

438

Toying with obsolescence : Pixelvision filmmakers and the Fisher Price PXL 2000 camera  

E-print Network

This thesis is a study of the Fisher Price PXL 2000 camera and the artists and amateurs who make films and videos with this technology. The Pixelvision camera records video onto an audiocassette; its image is low-resolution, ...

McCarty, Andrea Nina

2005-01-01

439

NSTX Tangential Divertor Camera  

SciTech Connect

Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

A.L. Roquemore; Ted Biewer; D. Johnson; S.J. Zweben; Nobuhiro Nishino; V.A. Soukhanovskii

2004-07-16

440

The Dark Energy Camera  

NASA Astrophysics Data System (ADS)

The DES Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which is now mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. DECam is comprised of 74 250-micron-thick fully depleted CCDs: 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. It also includes a filter set of u, g, r, i, z, and Y; a hexapod for focus and lateral alignment; and thermal management of the cage temperature. DECam will be used to perform the Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. An overview of the DECam design, construction and initial on-sky performance information will be presented.

Flaugher, Brenna; DES Collaboration

2013-01-01

441

The NEAT Camera Project  

NASA Technical Reports Server (NTRS)

The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chretian type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

Jr., Ray L. Newburn

1995-01-01

442

Representing videos in tangible products  

NASA Astrophysics Data System (ADS)

Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and the design strategies compared to regular books.

Fageth, Reiner; Weiting, Ralf

2014-03-01

443

CCD Photometer Installed on the Telescope - 600 OF the Shamakhy Astrophysical Observatory: I. Adjustment of CCD Photometer with Optics - 600  

NASA Astrophysics Data System (ADS)

A short description of the optical and electrical scheme of the CCD photometer with a U-47 camera, installed at the Cassegrain focus of the ZEISS-600 telescope of the ShAO NAS Azerbaijan, is provided. A focal reducer with a reduction factor of 1.7 is applied. The equivalent focal distances of the telescope with the focal reducer are calculated. General calculations of the optimum distance from the focal plane and the sizes of the photometer's optical filters are presented.
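The effect of the 1.7x focal reducer on equivalent focal length and focal ratio is a simple division; a sketch that assumes a nominal 7500 mm Cassegrain focal length and 600 mm aperture for the ZEISS-600 (assumptions, not figures from the abstract):

```python
def with_reducer(focal_mm, factor=1.7, aperture_mm=600.0):
    """Equivalent focal length (mm) and focal ratio after a focal reducer.
    The 7500 mm nominal Cassegrain focal length used in the example below
    is an assumption about the ZEISS-600, not a value from the abstract."""
    f_eq = focal_mm / factor
    return f_eq, f_eq / aperture_mm
```

Under these assumptions the reducer brings the system from f/12.5 to roughly f/7.4, widening the field delivered to the CCD.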

Lyuty, V. M.; Abdullayev, B. I.; Alekberov, I. A.; Gulmaliyev, N. I.; Mikayilov, Kh. M.; Rustamov, B. N.

2009-12-01

444

Framework for Freeway Auto-Surveillance from Traffic Video  

Microsoft Academic Search

Video-based surveillance systems have been widely used on freeways for traffic monitoring, as the cameras can provide the most intuitive information. In order to manage all the traffic videos automatically, in this paper a distributed real-time auto-surveillance system is presented. The freeway traffic videos are taken as input from Pan-Tilt-Zoom (PTZ) cameras, and the system then produces an…

Bo Li; Qi-mei Chen

2009-01-01

445

Video Indexing and Summarization as a Tool for Privacy Protection  

E-print Network

The growing number of surveillance camera networks being deployed all over the world has resulted in a high interest in privacy protection, as a citizen may be captured by many cameras during a day [2]. Obviously, the rapid growth of video surveillance systems calls for privacy protection by design. Keywords: Video Indexing; Video Summarization; Privacy Protection; Video Surveillance

Wichmann, Felix

446

On the Development of a Digital Video Motion Detection Test Set  

SciTech Connect

This paper describes the current effort to develop a standardized data set, or suite of digital video sequences, that can be used for test and evaluation of digital video motion detectors (VMDs) for exterior applications. We have drawn from an extensive video database of typical application scenarios to assemble a comprehensive data set. These data, some existing for many years on analog videotape, have been converted to a reproducible digital format and edited to generate test sequences several minutes long for many scenarios. Sequences include non-alarm video, intrusions and nuisance alarm sources, taken with a variety of imaging sensors including monochrome CCD cameras and infrared (thermal) imaging cameras, under a variety of daytime and nighttime conditions. The paper presents an analysis of the variables and estimates the complexity of a thorough data set. Some of this video test data has been digitized for CD-ROM storage and playback. We are considering developing a DVD disk for possible use in screening and testing VMDs prior to government testing and deployment. In addition, this digital video data may be used by VMD developers for further refinement or customization of their product to meet specific requirements. These application scenarios may also be used to define the testing parameters for future procurement qualification. A personal computer may be used to play back either the CD-ROM or the DVD video data. A consumer electronics-style DVD player may be used to replay the DVD disk. This paper also discusses various aspects of digital video storage including formats, resolution, CD-ROM and DVD storage capacity, editing and playback.
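The CD-ROM versus DVD storage trade-off mentioned above comes down to capacity divided by video bitrate; a rough sketch, where the capacities and the 4 Mbit/s bitrate are illustrative assumptions rather than figures from the paper:

```python
def minutes_of_video(capacity_gb, mbit_per_s):
    """Minutes of compressed video that fit on a medium at a given bitrate,
    ignoring filesystem overhead.  Capacities and bitrate in the examples
    below are illustrative assumptions."""
    seconds = capacity_gb * 8000.0 / mbit_per_s   # 1 GB = 8000 Mbit (decimal)
    return seconds / 60.0
```

At an assumed 4 Mbit/s, a 0.65 GB CD-ROM holds roughly 22 minutes of test sequences while a 4.7 GB DVD holds over two and a half hours, which is why the authors consider moving the test set to DVD.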

Pritchard, Daniel A.; Vigil, Jose T.

1999-06-07

447

Enhanced performance CCD output amplifier  

DOEpatents

A low-noise FET amplifier is connected to amplify output charge from a charge-coupled device (CCD). The FET has its gate connected to the CCD in common-source configuration for receiving the output charge signal from the CCD and output an interm…