Sample records for video CCD camera

  1. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper compares imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras, and an electron-beam-tube-based video camera. Signal and noise, and consequently signal-to-noise ratio per pixel, were measured as a function of exposure. All systems respond linearly with exposure and, with the exception of the electron-beam-tube-based video camera, their noise is proportional to the square root of the exposure, indicating photon-noise-limited operation. The flat-panel detector has a higher signal-to-noise ratio than either CCD camera or the electron-beam-tube-based video camera. This is expected, because most portal imaging systems that use optical coupling with a lens exhibit severe quantum sinks. The signal and noise measurements were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter, generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2), respectively, while the electron-beam-tube-based video camera permits detection only of a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems to remove direct hits: camera responses to scattered x-ray photons that interact directly with the CCD and generate salt-and-pepper noise, which interferes severely with attempts to obtain accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
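    The rank-order filtering step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: a 3×3 rank-order filter (here the median, rank 4 of 9) suppresses isolated "direct hit" pixels while leaving smooth signal nearly untouched.

```python
import numpy as np

def rank_order_filter(frame, rank=4):
    """Apply a 3x3 rank-order filter; rank=4 picks the median of each 9-pixel
    neighborhood, which removes isolated salt-and-pepper 'direct hits'."""
    p = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    # Stack the nine shifted views of the padded image, then sort per pixel.
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.sort(stack, axis=0)[rank]

# Synthetic flat field at ~100 counts with two simulated direct hits
rng = np.random.default_rng(0)
frame = 100.0 + rng.normal(0.0, 2.0, (64, 64))
frame[10, 10] = frame[30, 45] = 4000.0
clean = rank_order_filter(frame)
```

    After filtering, the spike pixels fall back to the local background level, so noise estimates are no longer dominated by the direct hits.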

  2. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the parameters used to evaluate their performance, and describes key features of different camera formats. It also explains how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs offer the best available performance, both in signal-to-noise ratio and in spatial resolution. Slow-scan cameras are thus the first choice for experiments on fixed specimens, such as immunofluorescence and fluorescence in situ hybridization measurements. However, if video-rate imaging is required, slow-scan CCD cameras need not be considered. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging of very dim specimens is required, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are attractive if one needs to acquire images at video rate as well as with longer integration times for dimmer samples; this flexibility can accommodate many applications with widely varying light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
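    The chapter's selection guidance reads like a small decision procedure. The sketch below (hypothetical function and argument names, not from the chapter) simply encodes those rules for illustration.

```python
def recommend_camera(video_rate, dim_specimen=False, needs_gating=False,
                     variable_integration=False):
    """Return a camera class following the chapter's guidance (illustrative)."""
    if needs_gating:                  # high-speed gating applications
        return "intensified CCD"
    if not video_rate:                # fixed specimens, no frame-rate demand
        return "slow-scan CCD"
    if dim_specimen:                  # video rate on very dim samples
        return "electron-multiplying CCD"
    if variable_integration:          # video rate plus longer exposures
        return "variable-integration-time video camera"
    return "basic video CCD"          # heavily labeled, robust samples
```

    For example, `recommend_camera(video_rate=False)` returns "slow-scan CCD", matching the chapter's advice for fixed specimens.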

  3. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of a CCD camera system and an electronic fiberscopy system are at least US$10,000 and US$30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in cost. Commercially available CCD video cameras with small monitors have become common; they provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals, so the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US$1,000. The system is therefore cost-effective and useful in the outpatient clinic, in casualty settings, or on house calls for patient education. In the future, the authors plan to introduce a high-vision camera and an infrared camera as medical instruments for clinical and research use.

  4. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras, for example, offer several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera: in cameras on the market, high resolution and high frame rate are incompatible. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is instead to use sensors of different spatio-temporal resolution in a single camera body to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the camera's utility.
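    The paper does not spell out its enhancement algorithm here, but a naive version of the idea can be sketched: upsample each high-frame-rate (low-resolution) frame and borrow high-frequency detail from the temporally nearest high-resolution key frame. The fusion rule below is an assumption for illustration, not the authors' method.

```python
import numpy as np

def downsample(img, s):
    """Average-pool an image by an integer factor s."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def fuse_streams(key_frames, key_times, fast_frames, fast_times, s):
    """For each low-res fast frame, add the detail (high-res minus its own
    low-res version) of the temporally nearest high-res key frame."""
    out = []
    key_times = np.asarray(key_times)
    for f, t in zip(fast_frames, fast_times):
        k = key_frames[int(np.argmin(np.abs(key_times - t)))]
        low = np.kron(f, np.ones((s, s)))                    # upsampled fast frame
        detail = k - np.kron(downsample(k, s), np.ones((s, s)))
        out.append(low + detail)
    return out
```

    When a fast frame coincides exactly with a key frame's content, the fusion reproduces the key frame; between key frames, the fast stream supplies the motion and the key frames supply the detail.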

  5. Scientific CCD technology at JPL

    NASA Technical Reports Server (NTRS)

    Janesick, J.; Collins, S. A.; Fossum, E. R.

    1991-01-01

    Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.

  6. Explosive Transient Camera (ETC) Program

    DTIC Science & Technology

    1991-10-01

    (Garbled OCR excerpt; recoverable content:) The CCD clocking unit and "upstairs" electronics digitize the analog video and transmit digital video and status information to the "downstairs" system; the clocking unit and regulator/driver board are the only CCD-dependent components.

  7. Miniature self-contained vacuum compatible electronic imaging microscope

    DOEpatents

    Naulleau, Patrick P.; Batson, Phillip J.; Denham, Paul E.; Jones, Michael S.

    2001-01-01

    A vacuum-compatible CCD-based microscope camera with an integrated illuminator. The camera can provide video or still images from a microscope contained within a vacuum chamber. Activation of the optional integral illuminator provides light to illuminate the microscope subject. The microscope camera comprises a housing with an objective port, a modified objective, a beam splitter, a CCD camera, and an LED illuminator.

  8. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frames. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storage areas, which record the video images, to the photodiodes of the individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used in imaging experiments and for some television programs. To increase the ultrahigh-speed capture time, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period was switched sequentially between them. This increased the recording capacity to 288 images, twice that of the conventional ultrahigh-speed camera. One problem was that the beam splitter halved the incident light on each CCD. To recover the light sensitivity, we developed a microlens array for the ultrahigh-speed CCDs: we simulated its operation to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By combining the beam splitter with the microlens array, it was possible to build an ultrahigh-speed color video camera with 288 frame memories and no loss of light sensitivity.
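    The doubled record length comes purely from switching the recording period between the two CCDs, so reconstructing the full sequence is a matter of splicing the two in-situ frame memories back together in time order. A minimal sketch, where the block-switching granularity is an assumption (the paper does not state it):

```python
def splice_recordings(ccd_a, ccd_b, block=144):
    """Merge two alternately recorded frame memories into one time-ordered
    sequence, assuming the camera switches CCDs every `block` frames."""
    frames = []
    i = 0
    while i < len(ccd_a) or i < len(ccd_b):
        frames.extend(ccd_a[i:i + block])   # CCD A records first in each cycle
        frames.extend(ccd_b[i:i + block])   # then CCD B takes over
        i += block
    return frames
```

    With each CCD holding 144 frames and a single switch, the splice yields the full 288-frame record.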

  9. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates the various pulse sequences for the TDI-CCD, the video processor, and the imaging data output, acting as the synchronous time coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co., Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor were examined in detail, and the timing generator was designed accordingly. An FPGA was chosen as the hardware design platform, and the schedule generator was described in VHDL. The design was successfully verified by functional simulation with EDA software and fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the level of system integration; high reliability, stability, and low power consumption are achieved, and the design and experiment period is sharply shortened.
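    The kind of pulse-sequence generation done here in VHDL can be illustrated in Python: a hypothetical three-phase CCD clock, each phase a square wave offset by a third of the period. The timing values are invented for illustration and are not the IL-E2's actual schedule.

```python
import numpy as np

def three_phase_clocks(n_ticks, period):
    """Generate three 50%-duty clock phases offset by period/3 (illustrative)."""
    t = np.arange(n_ticks)
    return [(((t + k * period // 3) % period) < period // 2).astype(int)
            for k in range(3)]

# Twelve ticks of a period-6 clock: each phase is high for 3 ticks per period
phases = three_phase_clocks(n_ticks=12, period=6)
```

    In the FPGA, the equivalent logic is a counter plus comparators per phase; the VHDL schedule generator plays the role of this function at hardware speed.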

  10. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large number of 'digital cameras' aimed at the home market; that category is not considered here. The term 'computer camera' here means one with low-level computer (and software) control of the CCD clocking. Such cameras can satisfy some of the more demanding machine vision tasks, in some cases with a higher measurement rate than video cameras. Several specific applications are described, including some that use recently designed CCDs offering good combinations of noise, speed, and resolution. Among the considerations in choosing a camera type for a given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
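    The pixel-to-sample mismatch effect can be demonstrated numerically: digitizing a 752-pixel CCD line with only 640 A/D samples and interpolating back loses information that a matched 752-sample digitization preserves exactly. The pixel counts and the test signal below are hypothetical.

```python
import numpy as np

def resample(row, n):
    """Linearly resample a scan line to n equally spaced samples."""
    return np.interp(np.linspace(0.0, 1.0, n, endpoint=False),
                     np.linspace(0.0, 1.0, row.size, endpoint=False), row)

# A 37-cycle test pattern across a 752-pixel CCD line
pixels = np.sin(2 * np.pi * 37 * np.linspace(0.0, 1.0, 752, endpoint=False))
matched = resample(pixels, 752)                    # A/D matched to pixel count
mismatched = resample(resample(pixels, 640), 752)  # digitized at 640 samples
err_matched = np.abs(matched - pixels).max()
err_mismatched = np.abs(mismatched - pixels).max()
```

    The matched path reproduces the pixel values exactly, while the mismatched path introduces a measurable resampling error, the numerical counterpart of the 'pixel jitter' and aliasing effects discussed above.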

  11. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.

  12. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

    The neutron and gamma radiation in large fusion reactors is responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations should support good practice in the operation of the video systems. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
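    ASTM E722's folding of a measured spectrum into an 'equivalent 1 MeV fluence' is, in essence, a damage-weighted sum over energy groups. The sketch below uses invented spectrum and damage-function values purely to show the arithmetic; real calculations use the tabulated E722 damage function for silicon.

```python
def equivalent_1mev_fluence(spectrum, damage, d_ref):
    """Phi_eq = sum_i phi_i * D(E_i) / D(1 MeV).

    spectrum: {energy_MeV: fluence}, damage: {energy_MeV: damage function D(E)},
    d_ref: D at 1 MeV. All numbers here are hypothetical."""
    return sum(phi * damage[e] for e, phi in spectrum.items()) / d_ref

# Invented three-group spectrum (fluences in n/cm^2) and damage values
spectrum = {0.1: 1e10, 1.0: 5e9, 14.0: 1e9}
damage = {0.1: 0.3, 1.0: 1.0, 14.0: 2.5}
phi_eq = equivalent_1mev_fluence(spectrum, damage, d_ref=damage[1.0])
```

    The resulting single number lets cameras exposed to very different spectra be compared against a common silicon-damage benchmark.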

  13. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  14. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  15. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  16. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  17. CCD TV focal plane guider development and comparison to SIRTF applications

    NASA Technical Reports Server (NTRS)

    Rank, David M.

    1989-01-01

    It is expected that the SIRTF payload will use a CCD TV focal-plane fine-guidance sensor to provide source acquisition and tracking stability for the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory to characterize present CCD autoguiding technology and relate it to SIRTF applications are presented. Two different designs of CCD camera were constructed, using virtual-phase and buried-channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon, and Mt. Hamilton telescopes. A video image processing system was also constructed to characterize the performance of the autoguider and the CCD cameras.

  18. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    NASA Astrophysics Data System (ADS)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  19. CCD Camera Detection of HIV Infection.

    PubMed

    Day, John R

    2017-01-01

    Rapid and precise quantification of HIV infectivity is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. Cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high-resolution CCD video camera and a macro video zoom lens, and a software program processes the images and counts the blue-stained foci of infection. The described method allows rapid quantification of infected cells over a wide range of viral inocula with reproducibility and accuracy, and at relatively low cost.
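    The focus-counting step amounts to thresholding the image for blue-stained pixels and counting connected components. A stdlib-only sketch of the counting part (the actual software presumably also applies size filters and shading correction):

```python
def count_foci(mask):
    """Count 4-connected regions of truthy pixels with an iterative flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = set()
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and (i, j) not in seen:
                count += 1                      # new focus found
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if ((a, b) in seen
                            or not (0 <= a < rows and 0 <= b < cols)
                            or not mask[a][b]):
                        continue
                    seen.add((a, b))
                    stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return count

# A toy thresholded image with three separate stained foci
grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
n = count_foci(grid)  # three separate stained foci
```

    In practice the mask would come from a color threshold on the blue channel of the captured monolayer images.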

  20. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  2. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan allsky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01s for determining the appearance of meteor and fireball events. Despite of these nocturnal monitoring efforts, we realised the need of setting up stations for daylight fireball detection. Such effort was also motivated by the appearance of the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped, and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we setup new automated systems based on CCD video cameras. However, the development of these video stations implies several issues with respect to nocturnal systems that must be properly solved in order to get an optimal operation. The first of these video stations, also supported by University of Huelva, has been setup in Sevilla (Andalusia) during May 2007. But, of course, fireball association is unequivocal only in those cases when two or more stations recorded the fireball, and when consequently the geocentric radiant is accurately determined. 
With this aim, a second diurnal video station is being setup in Andalusia in the facilities of Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of mayor bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. Figure 1. A daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 +-0.1s UT. The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images taken at 25 fps and with a resolution of 720x576 pixels are continuously sent to PC computers through a video capture device. The computers run a software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. 
Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time in a precise way (about 0.01 sec.) along the whole fireball path. EPSC Abstracts, Vol. 3, EPSC2008-A-00319, 2008 European Planetary Science Congress, Author(s) 2008 However, one of the issues with respect to nocturnal observing stations is the high number of false detections as a consequence of several factors: higher activity of birds and insects, reflection of sunlight on planes and helicopters, etc. Sometimes some of these false events follow a pattern which is very similar to fireball trails, which makes absolutely necessary the use of a second station in order to discriminate between them. Other key issue is related to the passage of the Sun before the field of view of some of the cameras. In fact, special care is necessary with this to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are endowed with autoiris lenses, its disconnection means that the optics is fully closed and, so, the CCD sensor is protected. This, of course, means that when this happens the atmospheric volume covered by the corresponding camera is not monitored. It must be also taken into account that, in general, operation temperatures are higher for diurnal cameras. This results in higher thermal noise and, so, poses some difficulties to the detection software. To minimize this effect, it is necessary to employ CCD video cameras with proper signal to noise ratio. Refrigeration of the CCD sensor with, for instance, a Peltier system, can also be considered. 
The astrometric reduction procedure is also somewhat different for daytime events: it requires that reference objects be located within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of said calibration, allow us to obtain the equatorial coordinates of the fireball along its path by measuring its X and Y positions on every video frame. Such calibration can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, if the cameras are not moved, it is possible to estimate the equatorial coordinates of any future fireball event. We do not use any software for automatic astrometry of the images; this crucial step is made via direct measurement of pixel positions, as in all our previous work. Then, from these astrometric measurements, our software estimates the atmospheric trajectory and radiant for each fireball ([10] to [13]). During 2007 and 2008 the SPMN has also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120x80 degree FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada). They also have night sensitivity thanks to an infrared cut filter (ICR), which enables the cameras to perform well in both high- and low-light conditions in colour, as well as to provide IR-sensitive black/white video at night. Conclusions First detections of daylight fireballs by CCD video camera are being achieved in the SPMN framework. Future expansion and set-up of new observing stations is currently being planned. The establishment of additional diurnal SPMN stations will allow an increase in the number of daytime fireballs detected. This will also increase our chance of meteorite recovery.
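The calibration step described above, mapping measured pixel (X, Y) positions to equatorial coordinates via reference objects, can be illustrated with a minimal least-squares plate fit. This is only a sketch assuming a simple linear (affine) plate model; the function names are hypothetical, and the actual SPMN reduction uses its own astrometric procedure.

```python
import numpy as np

def fit_plate_constants(xy_pix, std_coords):
    """Least-squares fit of a linear plate model:
    xi = a*x + b*y + c,  eta = d*x + e*y + f,
    where (xi, eta) are standard coordinates of reference stars/objects."""
    x, y = xy_pix[:, 0], xy_pix[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeff_xi, *_ = np.linalg.lstsq(A, std_coords[:, 0], rcond=None)
    coeff_eta, *_ = np.linalg.lstsq(A, std_coords[:, 1], rcond=None)
    return coeff_xi, coeff_eta

def apply_plate(coeff_xi, coeff_eta, x, y):
    """Map a measured fireball pixel position to standard coordinates."""
    v = np.array([x, y, 1.0])
    return float(v @ coeff_xi), float(v @ coeff_eta)
```

Once the constants are fitted (e.g. from star positions on nocturnal frames), the same mapping applies to any future daytime fireball as long as the camera is not moved, mirroring the procedure described above.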

  3. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

This paper first introduces the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. It then describes several key designs of the imaging system, such as the design of the focal plane module, the processing of the video signal, the design of the imaging system controller, and synchronous photography between the forward, nadir and backward cameras and the line-matrix CCD of the nadir camera. Finally, the test results of the LMCCD mapping camera imaging system are presented, as follows: the precision of synchronous photography between the forward, nadir and backward cameras, and with the line-matrix CCD of the nadir camera, is better than 4 ns; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30 degrees, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is kept under 30 °C over a 15-minute working period. All of these results satisfy the requirements for synchronous photography, focal plane module temperature control and SNR, which guarantees the precision needed for satellite photogrammetry.

  4. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
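Two of the processing steps mentioned above, noise removal and contrast enhancement, can be sketched in a few lines. This is a hedged illustration in plain NumPy, not the NIH-Image implementation; the percentile-based stretch is one common choice among many.

```python
import numpy as np

def average_frames(frames):
    """Frame averaging: random noise falls roughly as sqrt(N) for N frames."""
    return np.mean(np.stack(frames), axis=0)

def stretch_contrast(img, lo_pct=1.0, hi_pct=99.0):
    """Linear contrast stretch of a gray-level image to the full 0-255 range,
    clipping the darkest/brightest percentiles to resist outliers."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```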

  5. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras, and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  6. ONR Workshop on Magnetohydrodynamic Submarine Propulsion (2nd), Held in San Diego, California on November 16-17, 1989

    DTIC Science & Technology

    1990-07-01

electrolytic dissociation of the electrode material, and to provide a good gas evolution which... torpedo applications seem to be still somewhat out of the... rod cathode. A unique feature of this preliminary experiment was the use of a prototype gated, intensified video camera. This camera is based on a... microprocessor controlled microchannel plate intensifier tube. The intensifier tube image is focused on a standard CCD video camera so that the object

  7. Development of an all-in-one gamma camera/CCD system for safeguard verification

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

    2014-12-01

For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm3, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was carried out using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
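The quoted spatial resolution (4.77 mm FWHM) comes from a measured point-source profile. A minimal sketch of extracting FWHM from a 1-D profile, by linear interpolation of the half-maximum crossings, is shown below; the function name and pixel pitch are illustrative assumptions, not details of the actual system.

```python
import numpy as np

def fwhm_mm(profile, pixel_pitch_mm):
    """FWHM of a peaked 1-D profile, via linear interpolation of the
    half-maximum crossings on each side of the peak."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    i0, i1 = above[0], above[-1]
    # interpolate the exact half-max crossing positions on each flank
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1])
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1])
    return (right - left) * pixel_pitch_mm
```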

  8. Microgravity

    NASA Image and Video Library

    1991-04-03

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  9. Microgravity

    NASA Image and Video Library

    1995-08-29

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  10. Measurement of an Evaporating Drop on a Reflective Substrate

    NASA Technical Reports Server (NTRS)

    Chao, David F.; Zhang, Nengli

    2004-01-01

A figure depicts an apparatus that simultaneously records magnified ordinary top-view video images and laser shadowgraph video images of a sessile drop on a flat, horizontal substrate that can be opaque or translucent and is at least partially specularly reflective. The diameter, contact angle, and rate of evaporation of the drop as functions of time can be calculated from the apparent diameters of the drop in sequences of the images acquired at known time intervals, and from the shadowgrams, which contain flow patterns indicative of thermocapillary convection (if any) within the drop. These time-dependent parameters and flow patterns are important for understanding the physical processes involved in the spreading and evaporation of drops. The apparatus includes a source of white light and a laser (both omitted from the figure), which are used to form the ordinary image and the shadowgram, respectively. Charge-coupled-device (CCD) camera 1 (with zoom) acquires the ordinary video images, while CCD camera 2 acquires the shadowgrams. With respect to the portion of laser light specularly reflected from the substrate, the drop acts as a plano-convex lens, focusing the laser beam to a shadowgram on the projection screen in front of CCD camera 2. The equations for calculating the diameter, contact angle, and rate of evaporation of the drop are readily derived on the basis of Snell's law of refraction and the geometry of the optics.
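For the geometric part of such an analysis, a sessile drop is commonly treated as a spherical cap, which ties the contact diameter d and apex height h to the contact angle and volume. The relations below are standard cap geometry offered as a sketch only; the apparatus described above derives its working equations from Snell's law and its specific optics, which are not reproduced here.

```python
import math

def contact_angle_deg(d, h):
    """Contact angle of a spherical-cap drop with contact diameter d
    and apex height h: tan(theta/2) = 2h/d."""
    return 2.0 * math.degrees(math.atan(2.0 * h / d))

def cap_volume(d, h):
    """Spherical-cap volume: V = (pi*h/6) * (3*(d/2)^2 + h^2)."""
    return math.pi * h / 6.0 * (3.0 * (d / 2.0) ** 2 + h ** 2)
```

Tracking d and h over frames acquired at known intervals then gives the evaporation rate as the change in V per unit time.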

  11. Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Sharma, A.; And Others

    1996-01-01

Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…
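The aliasing this experiment demonstrates can be captured in one line of arithmetic: a spatial frequency above the Nyquist limit folds back into the observable band. The sketch below is illustrative only, not part of the cited experiment.

```python
def apparent_freq(f_signal, f_sample):
    """Frequency observed after sampling at f_sample:
    the true frequency folds into the baseband [0, f_sample/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)
```

For example, a 70 cycles/mm bar pattern sampled at 100 samples/mm appears as a 30 cycles/mm moiré fringe pattern.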

  12. The CTIO Acquisition CCD-TV camera design

    NASA Astrophysics Data System (ADS)

    Schmidt, Ricardo E.

    1990-07-01

    A CCD-based Acquisition TV Camera has been developed at CTIO to replace the existing ISIT units. In a 60 second exposure, the new Camera shows a sixfold improvement in sensitivity over an ISIT used with a Leaky Memory. Integration times can be varied over a 0.5 to 64 second range. The CCD, contained in an evacuated enclosure, is operated at -45 C. Only the image section, an area of 8.5 mm x 6.4 mm, gets exposed to light. Pixel size is 22 microns and either no binning or 2 x 2 binning can be selected. The typical readout rates used vary between 3.5 and 9 microseconds/pixel. Images are stored in a PC/XT/AT, which generates RS-170 video. The contrast in the RS-170 frames is automatically enhanced by the software.

  13. Advanced Video Data-Acquisition System For Flight Research

    NASA Technical Reports Server (NTRS)

    Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

    1996-01-01

Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

  14. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimum board size.

  15. NASA Imaging for Safety, Science, and History

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; Bowerman, Deborah S. (Technical Monitor)

    2002-01-01

Since its creation in 1958 NASA has been making and documenting history, both on Earth and in space. To complete its missions NASA has long relied on still and motion imagery to document spacecraft performance, see what can't be seen by the naked eye, and enhance the safety of astronauts and expensive equipment. Today, NASA is working to take advantage of new digital imagery technologies and techniques to make its missions more safe and efficient. An HDTV camera was on board the International Space Station from early August to mid-December 2001. HDTV cameras previously flown have had degradation in the CCD during the short duration of a Space Shuttle flight. An initial performance assessment of the CCD during this first-ever long-duration space flight of an HDTV camera, and during earlier flights, is discussed. Recent Space Shuttle launches have been documented with HDTV cameras and new long lenses, giving clarity never before seen with video. Examples and comparisons will be illustrated between HD, high-speed film, and analog video of these launches and other NASA tests. Other uses of HDTV where image quality is of crucial importance will also be featured.

  16. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.
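The core per-frame operation of many fast video eyetrackers is locating the dark pupil. A minimal threshold-and-centroid sketch is shown below; it is an assumption-laden stand-in written in NumPy for clarity, not the FPGA algorithm of the paper above.

```python
import numpy as np

def pupil_centroid(frame, threshold):
    """Dark-pupil detection: threshold the gray-level frame and return the
    centroid (x, y) of the below-threshold pixels, or None if none found."""
    ys, xs = np.nonzero(frame < threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```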

  17. Video photographic considerations for measuring the proximity of a probe aircraft with a smoke seeded trailing vortex

    NASA Technical Reports Server (NTRS)

    Childers, Brooks A.; Snow, Walter L.

    1990-01-01

    Considerations for acquiring and analyzing 30 Hz video frames from charge coupled device (CCD) cameras mounted in the wing tips of a Beech T-34 aircraft are described. Particular attention is given to the characterization and correction of optical distortions inherent in the data.

  18. Formulating an image matching strategy for terrestrial stem data collection using a multisensor video system

    Treesearch

    Neil A. Clark

    2001-01-01

    A multisensor video system has been developed incorporating a CCD video camera, a 3-axis magnetometer, and a laser-rangefinding device, for the purpose of measuring individual tree stems. While preliminary results show promise, some changes are needed to improve the accuracy and efficiency of the system. Image matching is needed to improve the accuracy of length...

  19. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets.
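The systematic distance errors described above can be compensated, once characterized against known target distances, by a simple correction model. The sketch below fits a polynomial correction to the (true − measured) residuals; this is a generic illustration, not the calibration model actually adopted in the paper.

```python
import numpy as np

def fit_distance_correction(measured, true, deg=3):
    """Fit correction(d) = true - measured as a polynomial in measured d."""
    m = np.asarray(measured, dtype=float)
    return np.polyfit(m, np.asarray(true, dtype=float) - m, deg)

def correct_distance(measured, coeffs):
    """Apply the fitted correction to raw ToF distance readings."""
    m = np.asarray(measured, dtype=float)
    return m + np.polyval(coeffs, m)
```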

  20. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  1. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  2. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    PubMed

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is taken to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast acquisition of images [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera (4096×4096 pixels). However, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image is good enough to give a low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the different defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. It took one second to correct an image position, and the total correction time was seven seconds, shorter by one order of magnitude than that using the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image.
We can obtain a tilt series of 61 images within 30 minutes. Accuracy and repeatability were good enough for practical use (Figure 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. Fig. 1. Objective lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte; thickness: 2 μm; magnification: 4k; acc. voltage: 2 MV). The tilt angle range is ±60 degrees with a 2 degree step angle. Two series were acquired in the same area. Both data sets were almost the same, and the deviation was smaller than the minimum manual step, so the auto-focus worked well. We also developed computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize the 3D data semi-automatically [5,6]. If this auto-acquisition system is used with the IMOD reconstruction software [7] and the HawkC software, we will be able to do on-line UHVEM tomography. The system could help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved.
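The AFS focus search described above, five defocused images, a sharpness score for each, and a quasi-Gaussian fit, can be sketched as follows. A Gaussian in sharpness is a parabola in log-sharpness, so the best focus falls at the parabola's vertex. The function names and the gradient-energy sharpness measure are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy sharpness score: large for well-focused images."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.mean(gx ** 2 + gy ** 2))

def best_focus(defocus, scores):
    """Fit log(scores) with a parabola (i.e. a Gaussian in linear space)
    and return its vertex: the estimated best-focus defocus value."""
    a, b, _ = np.polyfit(defocus, np.log(scores), 2)
    return -b / (2.0 * a)
```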

  3. The design and development of low- and high-voltage ASICs for space-borne CCD cameras

    NASA Astrophysics Data System (ADS)

    Waltham, N.; Morrissey, Q.; Clapp, M.; Bell, S.; Jones, L.; Torbet, M.

    2017-12-01

    The CCD remains the pre-eminent visible and UV wavelength image sensor in space science, Earth and planetary remote sensing. However, the design of space-qualified CCD readout electronics is a significant challenge with requirements for low-volume, low-mass, low-power, high-reliability and tolerance to space radiation. Space-qualified components are frequently unavailable and up-screened commercial components seldom meet project or international space agency requirements. In this paper, we describe an alternative approach of designing and space-qualifying a series of low- and high-voltage mixed-signal application-specific integrated circuits (ASICs), the ongoing development of two low-voltage ASICs with successful flight heritage, and two new high-voltage designs. A challenging sub-system of any CCD camera is the video processing and digitisation electronics. We describe recent developments to improve performance and tolerance to radiation-induced single event latchup of a CCD video processing ASIC originally developed for NASA's Solar Terrestrial Relations Observatory and Solar Dynamics Observatory. We also describe a programme to develop two high-voltage ASICs to address the challenges presented with generating a CCD's bias voltages and drive clocks. A 0.35 μm, 50 V tolerant, CMOS process has been used to combine standard low-voltage 3.3 V transistors with high-voltage 50 V diffused MOSFET transistors that enable output buffers to drive CCD bias drains, gates and clock electrodes directly. We describe a CCD bias voltage generator ASIC that provides 24 independent and programmable 0-32 V outputs. Each channel incorporates a 10-bit digital-to-analogue converter, provides current drive of up to 20 mA into loads of 10 μF, and includes current-limiting and short-circuit protection. An on-chip telemetry system with a 12-bit analogue-to-digital converter enables the outputs and multiple off-chip camera voltages to be monitored. 
The ASIC can drive one or more CCDs and replaces the many discrete components required in current cameras. We also describe a CCD clock driver ASIC that provides six independent and programmable drivers with high-current capacity. The device enables various CCD clock parameters to be programmed independently, for example the clock-low and clock-high voltage levels, and the clock-rise and clock-fall times, allowing configuration for serial clock frequencies in the range 0.1-2 MHz and image clock frequencies in the range 10-100 kHz. Finally, we demonstrate the impact and importance of this technology for the development of compact, high-performance and low-power integrated focal plane electronics.
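The programmable bias outputs described above can be pictured with a little DAC arithmetic: with a 10-bit DAC spanning 0-32 V, one code step is about 31 mV. The helper below is a hypothetical illustration of code selection and clamping based on the stated specifications, not the ASIC's actual register interface.

```python
def dac_code(v_target, v_full_scale=32.0, bits=10):
    """Nearest DAC code for a requested bias voltage, clamped to range.
    With 10 bits over 0-32 V the step size is 32/1023, roughly 31.3 mV."""
    full = 2 ** bits - 1
    code = round(v_target / v_full_scale * full)
    return max(0, min(full, code))
```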

  4. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks for the CCD are provided by the driving board, while the timing generator is located on the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover, the ADC and the data transfer circuit are also on the control board and controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock drivers, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

  5. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, B.T.; Yates, G.J.

    1992-06-09

An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal readout, and includes a first dump cycle period for transferring radiation-generated charge into the horizontal register while the decaying image on the phosphor being imaged is integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge capacity of the photosites. 3 figs.

  6. Inspection and Gamma-Ray Dose Rate Measurements of the Annulus of the VSC-17 Concrete Spent Nuclear Fuel Storage Cask

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. L. Winston

    2007-09-01

    The air-cooling annulus of the Ventilated Storage Cask (VSC)-17 spent fuel storage cask was inspected using a Toshiba 7 mm (1/4”) CCD video camera. The dose rates in the annular space were measured to provide a reference for the radiation to which the camera(s) being tested were exposed. No gross degradation, pitting, or general corrosion was observed.

  7. Ice-Borehole Probe

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Carsey, Frank; Lane, Arthur; Engelhardt, Herman

    2006-01-01

    An instrumentation system has been developed for studying interactions between a glacier or ice sheet and the underlying rock and/or soil. Prior borehole imaging systems have been used in well-drilling and mineral-exploration applications and for studying relatively thin valley glaciers, but have not been used for studying thick ice sheets like those of Antarctica. The system includes a cylindrical imaging probe that is lowered into a hole that has been bored through the ice to the ice/bedrock interface by use of an established hot-water-jet technique. The images acquired by the cameras yield information on the movement of the ice relative to the bedrock and on visible features of the lower structure of the ice sheet, including ice layers formed at different times, bubbles, and mineralogical inclusions. At the time of reporting the information for this article, the system had just been deployed in two boreholes on the Amery ice shelf in East Antarctica, after successful 2000-2001 deployments in 4 boreholes at Ice Stream C, West Antarctica, and in 2002 at Black Rapids Glacier, Alaska. The probe is designed to operate at temperatures from -40 to +40 °C and to withstand the cold, wet, high-pressure [130-atm (13.2-MPa)] environment at the bottom of a water-filled borehole in ice as deep as 1.6 km. A current version is being outfitted to service 2.4-km-deep boreholes at the Rutford Ice Stream in West Antarctica. The probe (see figure) contains a side-looking charge-coupled-device (CCD) camera that generates both a real-time analog video signal and a sequence of still-image data, and contains a digital videotape recorder. The probe also contains a downward-looking CCD analog video camera, plus halogen lamps to illuminate the fields of view of both cameras. The analog video outputs of the cameras are converted to optical signals that are transmitted to a surface station via optical fibers in a cable. Electric power is supplied to the probe through wires in the cable at a potential of 170 VDC. A DC-to-DC converter steps the supply down to 12 VDC for the lights, cameras, and image-data-transmission circuitry. Heat generated by dissipation of electric power in the probe is removed simply by conduction through the probe housing to the adjacent water and ice.

  8. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. Images PMID:1610183
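The combined edge-detector/edge-strength idea can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in: a synthetic "cell" image, a gradient-magnitude edge strength, and a crude fixed threshold in place of the adaptive operator the paper evaluates.

```python
import numpy as np

# Hypothetical fluorescence image: one dim rectangular "cell" on a dark background.
img = np.zeros((32, 32))
img[12:18, 10:16] = 50.0

# Edge strength as the local gradient magnitude (a simple stand-in for the
# adaptive edge-strength operator described in the abstract).
gy, gx = np.gradient(img)
edge = np.hypot(gx, gy)

# Cell area from the above-background region bounded by the strong-edge band.
mask = img > edge.max() / 2   # crude fixed rule; the real method adapts the threshold
area_px = int(mask.sum())     # cell size in pixels
```

On this synthetic input the recovered area matches the painted 6x6 region; on real, dim picoplankton images the threshold choice is exactly what the paper shows must adapt per cell.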

  9. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

    A real-time display of tissue ischemia comprising three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.

  10. Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.

    PubMed

    Torregrosa, A J; Maestre, H; Capmany, J

    2015-11-15

    We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (the eye-safe region), illuminated by an amplified spontaneous emission (ASE) fiber source, with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd(3+):GdVO(4) laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of the standard silicon focal-plane-array sensors commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination benefits from a noticeable increase in the field of view (FOV) that can be upconverted with respect to coherent laser illumination. The upconverted power allows us to capture real-time video with a standard nonintensified CCD camera.

  11. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuous current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23°S, 46°W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  12. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  13. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  14. Particle displacement tracking applied to air flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640x480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
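The pairing step at the heart of displacement tracking can be sketched as a nearest-neighbor match between particle centroids from two exposures. The coordinates and the uniform displacement below are invented for illustration; the published PDT technique resolves direction unambiguously through its gated exposure scheme, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical particle centroids (pixels) from two sequential exposures.
pts_a = np.array([[10.0, 12.0], [40.0, 50.0], [80.0, 30.0]])
shift = np.array([3.0, 1.5])          # assumed uniform flow displacement
pts_b = pts_a + shift

# Nearest-neighbor pairing: for each particle in exposure A, take the
# closest particle in exposure B as its displaced image.
d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
match = d.argmin(axis=1)
vectors = pts_b[match] - pts_a        # displacement vectors, pixels per interframe time
```

Dividing each vector by the known interframe time and the image magnification would give the velocity map the abstract describes.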

  15. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, Bojan T.; Yates, George J.

    1992-01-01

    An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is being integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing radiation content, and to capture images of events of very short duration and occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites (36) and (37).

  16. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data is taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth-reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to implement a FIFO for buffering defect burst data and for off-line examination of defect-detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  17. Improvement in the light sensitivity of the ultrahigh-speed high-sensitivity CCD with a microlens array

    NASA Astrophysics Data System (ADS)

    Hayashida, T.,; Yonai, J.; Kitamura, K.; Arai, T.; Kurita, T.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Kitagawa, S.; Hatade, K.; Yamaguchi, T.; Takeuchi, H.; Iida, K.

    2008-02-01

    We are advancing the development of ultrahigh-speed, high-sensitivity CCDs for broadcast use that are capable of capturing smooth slow-motion videos in vivid colors even where lighting is limited, such as at professional baseball games played at night. We have already developed a 300,000 pixel, ultrahigh-speed CCD, and a single CCD color camera that has been used for sports broadcasts and science programs using this CCD. However, there are cases where even higher sensitivity is required, such as when using a telephoto lens during a baseball broadcast or a high-magnification microscope during science programs. This paper provides a summary of our experimental development aimed at further increasing the sensitivity of CCDs using the light-collecting effects of a microlens array.

  18. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    (Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation with low applied voltage. This shutter features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied to the diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. The MTF reduction when putting the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera: the He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  19. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    Scheme for digital processing of video signals in an acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. Spectrum analyzer described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). The essence of the correction is to average the digitized outputs of the pixels in each CCD row and to subtract that average from the digitized output of each pixel in the row. The signal is processed electro-optically with reference-function signals to form a two-dimensional spectral image in the CCD camera.
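The row-average correction described above is simple to state in NumPy; the frame dimensions and contents here are random placeholders, not data from the instrument.

```python
import numpy as np

# Hypothetical 4x8 frame of digitized CCD outputs (one row per spectral channel).
rng = np.random.default_rng(0)
frame = rng.uniform(100.0, 200.0, size=(4, 8))

# Bias correction as described: average each row's pixel outputs and
# subtract that row average from every pixel in the row.
row_mean = frame.mean(axis=1, keepdims=True)
corrected = frame - row_mean
```

After the subtraction each row is zero-mean, so any constant (signal-dependent) pedestal along a row is removed while pixel-to-pixel spectral structure is preserved.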

  20. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
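A minimal sketch of the measurement concept, assuming a dark hole on a bright target and using a fixed threshold plus a centroid; the synthetic image, threshold value, and hole position are hypothetical, and the calibration and sub-pixel refinement of the actual system are not reproduced.

```python
import numpy as np

# Synthetic 64x64 digitized target image: bright paper, one dark bullet hole.
img = np.full((64, 64), 200.0)
img[30:34, 40:44] = 20.0            # hypothetical hole location

# Locate the hole: threshold the dark pixels, then take their centroid.
mask = img < 100.0                  # assumed threshold between hole and paper
ys, xs = np.nonzero(mask)
cy, cx = ys.mean(), xs.mean()       # hole center in pixel coordinates
```

A calibration (e.g., imaging fiducial marks at known positions, as the paper's calibration technique suggests) would then map pixel coordinates to microns at the target.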

  1. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements, from either visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations of the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion is one of the larger sources of error in any exposure/measurement method, and is on the order of 0.5-1.5. Ideally, if a data set from a short video could be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.

  2. Multiple Target Tracking in a Wide-Field-of-View Camera System

    DTIC Science & Technology

    1990-01-01

    [Abstract garbled in the source record; recoverable fragments follow.] The camera assembly is mounted on a Contraves alt-azimuth axis table with a pointing accuracy of < 2 µrad. RS-170 video passes through a video amplifier to a Datacube image processor, with SUN 3 workstations connected via Ethernet, DR11W and VME interfaces; a WWV clock and VCR provide timing and recording, and monitors display processed images with overlay from the Datacube. The Contraves table is controlled through a GPIB interface on the SUN. * Work performed under the auspices of the U.S. Department of...

  3. Vision-sensing image analysis for GTAW process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  4. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW; about 5 times less than the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  5. Mission Specialist Hawley works with the SWUIS experiment

    NASA Image and Video Library

    2013-11-18

    STS093-350-022 (22-27 July 1999) --- Astronaut Steven A. Hawley, mission specialist, works with the Southwest Ultraviolet Imaging System (SWUIS) experiment onboard the Earth-orbiting Space Shuttle Columbia. The SWUIS is based around a Maksutov-design Ultraviolet (UV) telescope and a UV-sensitive, image-intensified Charge-Coupled Device (CCD) camera that frames at video frame rates.

  6. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods follow standards such as EMVA Standard 1288, which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
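The Poisson relationship the authors verify (temporal variance equal to mean signal when expressed in electrons) can be checked with a two-frame difference estimate, which is reminiscent of, though far simpler than, the ASNT method; the uniform synthetic scene and signal level below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_signal = 400.0  # hypothetical mean signal, in electrons

# Two frames of the same uniform scene with Poisson (shot) noise only.
f1 = rng.poisson(mean_signal, size=(256, 256)).astype(float)
f2 = rng.poisson(mean_signal, size=(256, 256)).astype(float)

# Temporal noise from a frame pair: differencing cancels any fixed pattern,
# and Var(f1 - f2) = 2 * (temporal variance per frame).
var_t = (f1 - f2).var() / 2.0

# For pure shot noise, Poisson statistics give variance == mean signal.
```

On real cameras the same estimate would include dark temporal noise and, near saturation, the departure from the Poisson curve the paper notes.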

  7. Construction/Application of the Internet Observatories in Japan

    NASA Astrophysics Data System (ADS)

    Satoh, T.; Tsubota, Y.; Matsumoto, N.; Takahashi, N.

    2000-05-01

    We have successfully built two Internet Observatories in Japan: one at the Noda campus of the Science University of Tokyo and another at the Hiyoshi campus of the Keio Senior High School. Both observatories are equipped with a computerized Meade LX-200 telescope (8" tube at the SUT site and 12" at the Keio site) with a CCD video camera inside a sliding-roof observatory. Each observatory is controlled by two personal computers: one controls almost everything, including the roof, the telescope, and the camera, while the other is dedicated to encoding the real-time picture from the CCD video camera into the RealVideo format for live broadcasting. A user can operate the observatory through the web-based interface and can enjoy the real-time picture of the objects via the RealPlayer software. The administrator can run a sequence of batch commands with which no human interaction is needed from the beginning to the end of an observation. Although our observatories are primarily for educational purposes, this system can easily be converted to a signal-triggered one, which may be very useful for observing transient phenomena such as afterglows of gamma-ray bursts. The most remarkable feature of our observatories is that they are very inexpensive (each costs only a few tens of thousands of dollars). We'll report details of the observatories in the poster and, at the same time, will demonstrate operating the observatories from the meeting site using an Internet-connected PC. This work has been supported through funding from the Telecommunications Advancement Foundation for FY 1998 and 1999.

  8. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2K×4K HPK CCDs: sixteen for science and two for guiding. It will be installed at the prime focus of the William Herschel Telescope (WHT). In this contribution, the architecture of the readout electronics system is presented, covering both the Back-End and Front-End electronics. The Back-End consists of clock, bias and video processing boards mounted in Monsoon crates. The Front-End is based on patch-panel boards, which are plugged into the outside of the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  9. Video-based beam position monitoring at CHESS

    NASA Astrophysics Data System (ADS)

    Revesz, Peter; Pauling, Alan; Krawczyk, Thomas; Kelly, Kevin J.

    2012-10-01

    CHESS has pioneered the development of X-ray Video Beam Position Monitors (VBPMs). Unlike traditional photoelectron beam position monitors, which rely on photoelectrons generated by the fringe edges of the X-ray beam, VBPMs collect information from the whole cross-section of the X-ray beam and can also give real-time shape/size information. We have developed three types of VBPMs: (1) VBPMs based on helium luminescence from the intense white X-ray beam, with the CCD camera viewing the luminescence from the side. (2) VBPMs based on luminescence of a thin (~50 micron) CVD diamond sheet as the white beam passes through it; the CCD camera is placed outside the beam-line vacuum and views the diamond fluorescence through a viewport. (3) Scatter-based VBPMs, in which the white X-ray beam passes through a thin graphite filter or Be window and the scattered X-rays create an image of the beam's footprint on an X-ray-sensitive fluorescent screen using a slit placed outside the beam-line vacuum. For all VBPMs we use relatively inexpensive 1.3-megapixel CCD cameras connected via USB to a Windows host for image acquisition and analysis. The VBPM host computers are networked and provide live images of the beam and streams of data about the beam position, profile and intensity to CHESS's signal logging system and to the CHESS operator. Operational use of VBPMs has shown a great advantage over traditional BPMs by providing direct visual input for the CHESS operator. The VBPM precision in most cases is on the order of ~0.1 micron. On the down side, the data acquisition period (50-1000 ms) is inferior to that of photoelectron-based BPMs. In the future, with more expensive fast cameras, we will be able to create VBPMs working at the few-hundred-Hz scale.
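
    The per-frame computation of a VBPM, beam position and RMS size from a camera image, reduces to intensity moments. A minimal sketch (hypothetical NumPy code; the CHESS analysis software itself is not described in the abstract):

```python
import numpy as np

def beam_moments(image, background=0.0):
    """Centroid and RMS size of a beam image via intensity moments,
    the basic per-frame computation a video beam position monitor does."""
    img = np.clip(image.astype(np.float64) - background, 0.0, None)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total        # horizontal centroid (pixels)
    cy = (ys * img).sum() / total        # vertical centroid (pixels)
    sx = np.sqrt(((xs - cx) ** 2 * img).sum() / total)  # RMS width
    sy = np.sqrt(((ys - cy) ** 2 * img).sum() / total)  # RMS height
    return cx, cy, sx, sy
```

    Because the whole beam cross-section contributes to the moments, sub-pixel precision (the ~0.1 micron figure above, given suitable optics) is achievable even with a modest camera.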

  10. Development of low-noise CCD drive electronics for the world space observatory ultraviolet spectrograph subsystem

    NASA Astrophysics Data System (ADS)

    Salter, Mike; Clapp, Matthew; King, James; Morse, Tom; Mihalcea, Ionut; Waltham, Nick; Hayes-Thakore, Chris

    2016-07-01

    World Space Observatory Ultraviolet (WSO-UV) is a major Russian-led international collaboration to develop a large space-borne 1.7 m Ritchey-Chrétien telescope and instrumentation to study the universe at ultraviolet wavelengths between 115 nm and 320 nm, exceeding the current capabilities of ground-based instruments. The WSO Ultraviolet Spectrograph subsystem (WUVS) is led by the Institute of Astronomy of the Russian Academy of Sciences and consists of two high resolution spectrographs covering the Far-UV range of 115-176 nm and the Near-UV range of 174-310 nm, and a long-slit spectrograph covering the wavelength range of 115-305 nm. The custom-designed CCD sensors and cryostat assemblies are being provided by e2v technologies (UK). STFC RAL Space is providing the Camera Electronics Boxes (CEBs) which house the CCD drive electronics for each of the three WUVS channels. This paper presents the results of the detailed characterisation of the WUVS CCD drive electronics. The electronics include a novel high-performance video channel design that utilises Digital Correlated Double Sampling (DCDS) to enable low-noise readout of the CCD at a range of pixel frequencies, including a baseline requirement of less than 3 electrons rms readout noise for the combined CCD and electronics system at a readout rate of 50 kpixels/s. These results illustrate the performance of this new video architecture as part of a wider electronics sub-system that is designed for use in the space environment. In addition to the DCDS video channels, the CEB provides all the bias voltages and clocking waveforms required to operate the CCD and the system is fully programmable via a primary and redundant SpaceWire interface. The development of the CEB electronics design has undergone critical design review and the results presented were obtained using the engineering-grade electronics box. 
A variety of parameters and tests are included ranging from general system metrics, such as the power and mass, to more detailed analysis of the video performance including noise, linearity, crosstalk, gain stability and transient response.

  11. Towards fish-eye camera based in-home activity assessment.

    PubMed

    Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha

    2008-01-01

    Indoor localization, activity classification, and behavioral modeling are increasingly important for surveillance applications, including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for monitoring people in indoor environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and outline future directions for solving these issues.

  12. Video semaphore decoding for free-space optical communication

    NASA Astrophysics Data System (ADS)

    Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

    2001-04-01

    Using real-time image processing we have demonstrated a low bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser diode output. The receiver is a standard CCD camera with a 1-inch aperture lens, together with both hardware and software implementations of the video semaphore decoding algorithm. With this system, sensor data can be reliably transmitted 21 km from San Francisco to Berkeley.

  13. SU-F-BRA-16: Development of a Radiation Monitoring Device Using a Low-Cost CCD Camera Following Radionuclide Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taneja, S; Fru, L Che; Desai, V

    Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate the exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-sec videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single-threshold (ST) and colony-counting (CC) methods were implemented in MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure rate within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates, in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
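
    The single-threshold (ST) counting scheme can be illustrated with a short sketch. This is hypothetical code, not the authors' MATLAB implementation: the threshold and calibration slope are free parameters (the abstract reports a slope of 19.5 ± 0.7 events/mR for the ST method within the linear region).

```python
import numpy as np

def count_events(frames, threshold):
    """Single-threshold (ST) style counting: each pixel above threshold
    in a dark, taped-over video frame counts as one interaction."""
    events = 0
    for frame in frames:
        events += int((frame > threshold).sum())
    return events

def exposure_rate(frames, threshold, events_per_mR, duration_hr):
    """Convert a raw event count to mR/hr via a calibration slope
    (events per mR) measured against a known source."""
    mR = count_events(frames, threshold) / events_per_mR
    return mR / duration_hr
```

    The colony-counting (CC) variant described above would instead group connected bright pixels into single events before applying the calibration.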

  14. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor Active Pixel Sensor (CMOS APS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years: the CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  15. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  16. Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System

    NASA Astrophysics Data System (ADS)

    Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.

    2017-12-01

    The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6-2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter-1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and for aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the Sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis-mounted camera with 600-line-resolution NTSC video for extremely low-light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extremely low light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps towards implementation of the imaging system include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.

  17. A multiscale video system for studying optical phenomena during active experiments in the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Nikolashkin, S. V.; Reshetnikov, A. A.

    2017-11-01

    The system of video surveillance used during active rocket experiments at the Polar geophysical observatory "Tixie", and for studies of the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia), is presented. The system consists of three AHD video cameras with different angles of view mounted on a common platform on a tripod, with the possibility of manual guiding. The main camera, with a high-sensitivity black-and-white CCD matrix (SONY EXview HAD II), is equipped, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for more detailed imaging of luminous formations. The second camera is of the same type but with a 30-degree angle of view; it is intended for shooting the general scene and large objects, and also for tying object coordinates to the stars. The third, color wide-angle camera (120 degrees) is designed for referencing to landmarks in the daytime; the optical axis of this channel is directed 60 degrees downward. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during the launch of a geophysical rocket in Tixie in September 2015 and showed its effectiveness.

  18. Videogrammetric Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tian-Shu

    2001-01-01

    The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.

  19. Analysis of the color rendition of flexible endoscopes

    NASA Astrophysics Data System (ADS)

    Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard

    2003-03-01

    Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image; it is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into the gastrointestinal tract, and an associated image processor that converts the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis depends upon being able to determine changes in the structure and colour of tissues and biological fluids, and therefore upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how their colour rendition compares with that of optical endoscopes. In both cases results were obtained at fixed illumination settings; video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed, and both the methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error, and we are developing calibration procedures applicable to medical endoscopes.

  20. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, obtained using the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of the meteoroids accurately by comparing them with the theoretical evolution model. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of a large CCD, such as increasing the spatial and the temporal resolution at the same time, and computational problems in finding the meteor position, are illustrated.

  1. Phosphor thermography technique in hypersonic wind tunnel - Feasibility study

    NASA Astrophysics Data System (ADS)

    Edy, J. L.; Bouvier, F.; Baumann, P.; Le Sant, Y.

    Probative research has been undertaken at ONERA on a new technique of thermography in hypersonic wind tunnels. This method is based on the heat sensitivity of a luminescent coating applied to the model. The luminescent compound, excited by UV light, emits visible light whose properties depend on the phosphor temperature, among other factors. Preliminary blowdown wind tunnel tests have been performed, first for spot measurements and then for cartographic measurements using a 3-CCD video camera, a BETACAM video recorder and a digital image processing system. The results give a good indication of the method's feasibility.

  2. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  3. Optimization of subcutaneous vein contrast enhancement

    NASA Astrophysics Data System (ADS)

    Zeman, Herbert D.; Lovhoiden, Gunnar; Deshmukh, Harshal

    2000-05-01

    A technique for enhancing the contrast of subcutaneous veins has been demonstrated. This technique uses a near-IR light source and one or more IR-sensitive CCD TV cameras to produce a contrast-enhanced image of the subcutaneous veins. This video image of the veins is projected back onto the patient's skin using an LCD video projector. An IR-transmitting filter in front of the video cameras prevents visible light from the video projector from causing positive-feedback instabilities in the projected image. The demonstration contrast-enhancing illuminator has been tested on adults and children, both Caucasian and African-American, and it enhances veins quite well in all cases. The most difficult cases are those where significant deposits of subcutaneous fat make the veins invisible under normal room illumination. Recent attempts to see through fat using different IR wavelength bands and both linearly and circularly polarized light were unsuccessful. The key to seeing through fat turns out to be a very diffuse source of IR light. Results on adult and pediatric subjects are shown with this new IR light source.

  4. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  5. Optimum color filters for CCD digital cameras

    NASA Astrophysics Data System (ADS)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full-color pixels. High colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and nonlinear optimization that minimized the perceivable color errors, as measured in the 1976 CIELUV uniform color space, for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise, and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation; the transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) appears feasible, implying that such an optimized color camera could achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
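
    The matrixing step, mapping camera RGB to output tristimulus values with one of the selectable 3×3 matrices, can be sketched with a plain least-squares fit over measured patches. This is a simplified, hypothetical stand-in for the nonlinear CIELUV-space optimization described above; all names and data are illustrative.

```python
import numpy as np

def fit_color_matrix(camera_rgb, target_xyz):
    """Least-squares 3x3 color transformation: target ~= M @ camera.
    camera_rgb, target_xyz: (N, 3) arrays of corresponding color patches."""
    # Solve camera_rgb @ M.T ~= target_xyz for M in the least-squares sense
    m_t, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return m_t.T

def apply_matrix(M, rgb):
    """Apply the fitted transformation to (N, 3) camera responses."""
    return rgb @ M.T
```

    A perceptual optimization like the one in the paper would instead weight residuals by their CIELUV distance rather than minimizing raw tristimulus error.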

  6. Biomechanical and mathematical analysis of human movement in medical rehabilitation science using time-series data from two video cameras and force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Box, Elgene O.; Murai, Shunji; Mori, Eiji; Wada, Takao; Kurita, Masahiro; Iritani, Makoto; Kuroki, Yoshikatsu

    1994-08-01

    In medical rehabilitation science, quantitative understanding of patient movement in 3-D space is very important. A patient with a joint disorder will experience its influence on other body parts in daily movement, and the alignment of the joints in movement can improve over the course of medical therapy. In this study, the newly developed system is composed of two non-metric CCD video cameras and a force-plate sensor, controlled simultaneously by a personal computer. With this system, time-series digital data from 3-D image photogrammetry, together with each foot's pressure and its center position, provide efficient information for the biomechanical and mathematical analysis of human movement. Specific and common points are identified in each patient's movement. This study suggests a more varied, quantitative understanding in medical rehabilitation science.

  7. Real-time tricolor phase measuring profilometry based on CCD sensitivity calibration

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng

    2017-02-01

    A real-time tricolor phase measuring profilometry (RTPMP) method based on charge-coupled device (CCD) sensitivity calibration is proposed. Only one colour fringe pattern, whose red (R), green (G) and blue (B) components are coded as three sinusoidal phase-shifting gratings with an equivalent phase shift of 2π/3, is needed; it is sent to an appointed flash memory on a specialized digital light projector (SDLP). A specialized time-division-multiplexing timing sequence actively controls the SDLP to project the fringe patterns in the R, G and B channels sequentially onto the measured object in 1/72 of a second, and meanwhile actively controls a high-frame-rate monochrome CCD camera to capture the corresponding deformed patterns synchronously with the SDLP. Thus sufficient information for reconstructing the three-dimensional (3D) shape is obtained in 1/24 of a second. Due to the different spectral sensitivity of the CCD camera to R, G and B light, the captured deformed patterns from the R, G and B channels cannot share the same peak and valley, which leads to lower accuracy or even failure to reconstruct the 3D shape. A deformed-pattern amending method based on CCD sensitivity calibration is therefore developed to guarantee accurate 3D reconstruction. The experimental results verify the feasibility of the proposed RTPMP method, which can obtain the 3D shape at over the video frame rate of 24 frames per second, avoid colour crosstalk completely, and measure a dynamically changing object in real time.
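
    Once the three channel images are amended to a common sensitivity, the wrapped phase follows from the standard three-step phase-shifting formula for a 2π/3 shift. A minimal sketch (hypothetical code, not the authors' implementation):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by 2*pi/3:
    I_k = A + B*cos(phi + 2*pi*k/3) for k = 0, 1, 2.
    Algebra: i3 - i2 = sqrt(3)*B*sin(phi), 2*i1 - i2 - i3 = 3*B*cos(phi)."""
    num = np.sqrt(3.0) * (i3.astype(np.float64) - i2)
    den = 2.0 * i1.astype(np.float64) - i2 - i3
    return np.arctan2(num, den)          # phase wrapped to (-pi, pi]
```

    The wrapped phase would then be unwrapped and converted to height via the system's phase-to-height calibration.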

  8. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    NASA Astrophysics Data System (ADS)

    Kang, Y.-W.; Byun, Y. I.; Rhee, J. H.; Oh, S. H.; Kim, D. K.

    2007-12-01

    We designed and developed a multi-purpose CCD camera system for three kinds of CCDs made by Kodak: KAF-0401E (768×512), KAF-1602E (1536×1024) and KAF-3200E (2184×1472). The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock pattern circuit, A/D conversion circuit, CCD data flow control circuit, and CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4 °C over a maximum temperature range of 33 °C. The camera system has a readout noise of 6 e^{-} and a system gain of 5 e^{-}/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them perform acceptably.

  9. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in movie theaters results in huge revenue losses to the motion picture industry every year, but still-image and video capture may present an even bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion: photosites in CCD sensors and semiconductor elements in CMOS sensors. These elements are known to reflect a small fraction of incident light, but they can also act as a diffraction grating, producing an optical response that can be used to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye safety of people who could be exposed to the beam. This paper presents preliminary experimental data, as well as proof-of-concept simulation results.

  10. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage over monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics (read noise in e^{-}/pix, thermal noise in e^{-}/pix/sec and electronic gain in e^{-}/ADU) for the commercial digital camera Canon 5D Mark III are presented, together with the same characteristics for the high-performance cooled scientific CCD camera system ALTA E47. The test results for the Canon 5D Mark III and the CCD ALTA E47 show that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
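
    The electronic gain quoted in e^{-}/ADU is conventionally obtained from a photon-transfer curve: in the shot-noise-limited regime the variance of the signal grows linearly with its mean, and the gain is the reciprocal of that slope. A minimal sketch (hypothetical code; the abstract does not describe the authors' measurement procedure):

```python
import numpy as np

def photon_transfer_gain(means, variances):
    """Electronic gain (e-/ADU) from a photon-transfer curve.
    In the shot-noise-limited regime  var = mean / K + const,
    so the gain K is 1 / slope of variance vs. mean signal."""
    slope, _ = np.polyfit(means, variances, 1)  # linear fit: slope, offset
    return 1.0 / slope
```

    In practice the means and variances come from pairs of flat-field frames at a series of illumination levels, with the frame difference used to reject fixed-pattern noise.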

  11. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.

  12. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video images from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological operations to detect the image edges and fill the images. The stitching of the four images is built on a two-image stitching algorithm: the SIFT method is adopted to accomplish the initial matching of images, and then the RANSAC algorithm is applied to eliminate the wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed in this paper. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequence in which every single frame is stitched. The results show that the video is clear and natural, with smooth brightness transitions. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
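The homography-estimation step at the core of the two-image stitching can be sketched with a direct linear transform (DLT). This numpy-only illustration omits the SIFT matching and the RANSAC loop, and uses synthetic point correspondences rather than real image features:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src, homogeneous) from
    >= 4 point correspondences via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A (smallest singular value) gives H up to scale
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Check: recover a known projective transform from 5 exact correspondences
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 20]], float)
pts = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]   # de-homogenize

H = homography_dlt(src, dst)
```
In a real stitcher, RANSAC would repeatedly run this estimator on random 4-point subsets of the SIFT matches and keep the homography with the most inliers.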

  13. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

  14. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar laser sheet passes through the column as a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For measuring velocities, while the aqueous fluid, laced with fluorescent microspheres, flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired automatically frame by frame and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed images are poor at this stage, some preprocessing is applied to enhance the particles within the images. Finally, these enhanced particles are tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid, laced with a fluorescent organic dye, flows through the transparent medium, the CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. Subsequently, these recorded images are transferred to the computer for processing in a similar fashion to the velocity measurements. In order to have a fully automatic vision system, several detailed image processing techniques are developed to match images that have different intensity values but the same topological characteristics. This results in normalized interstitial chemical concentrations as a function of time within the porous column.
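The velocity step, reduced to its essence, is measuring how far a tracked particle moves between frames. A minimal sketch with synthetic frames; the pixel scale and frame rate below are assumed for illustration, not taken from the paper:

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (x, y) of the bright pixels in a frame."""
    ys, xs = np.nonzero(frame > 0)
    w = frame[ys, xs].astype(float)
    return np.array([(xs * w).sum(), (ys * w).sum()]) / w.sum()

pixel_size_mm = 0.01     # assumed pixel scale, mm/pixel
dt = 1.0 / 25.0          # assumed 25 frames/s video

# Synthetic particle: a 3x3 bright blob that moves 4 pixels in x between frames
f1 = np.zeros((64, 64)); f1[30:33, 10:13] = 100.0
f2 = np.zeros((64, 64)); f2[30:33, 14:17] = 100.0

v = (centroid(f2) - centroid(f1)) * pixel_size_mm / dt   # velocity in mm/s
```
Real frames would first pass through the enhancement (thresholding/filtering) steps the abstract mentions before centroiding.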

  15. Real-time automatic inspection under adverse conditions

    NASA Astrophysics Data System (ADS)

    Carvalho, Fernando D.; Correia, Fernando C.; Freitas, Jose C. A.; Rodrigues, Fernando C.

    1991-03-01

    This paper presents the results of an R&D program, supported by a grant from the Ministry of Defense, devoted to the development of an intelligent camera for surveillance in the open air. The effects of shadows, clouds and wind were problems to be solved without generating false alarms. The system is based on a video CCD camera which generates a CCIR video signal. The signal is then processed in modular hardware which detects changes in the scene and processes the image in order to enhance the intruder image and path. Windows may be defined over the image in order to increase the information obtained about the intruder, and a first approach to classifying the type of intruder may be achieved. The paper describes the hardware used in the system, the software used for the installation of the camera, and the software developed for the microprocessor responsible for generating the alarm signals. The paper also presents some results of surveillance tasks in the open air executed by the system with real-time performance.

  16. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From the host computer's perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card.
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
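The frame-rate benefit of per-ROI readout can be illustrated with back-of-envelope arithmetic. The pixel clock and ROI sizes below are assumed for illustration, not taken from the camera's specifications:

```python
# Reading only small ROIs instead of the full CCD frame shortens readout
# time and so raises the attainable frame rate (ignoring fixed overheads).
pixel_clock_hz = 10e6            # assumed 10 Mpixel/s readout rate
full_pixels = 512 * 512          # full-frame readout
rois = [(64, 64), (32, 32)]      # two hypothetical tracked-target subwindows

t_full = full_pixels / pixel_clock_hz
t_rois = sum(w * h for w, h in rois) / pixel_clock_hz

rate_full = 1.0 / t_full         # ~38 frames/s for the full frame
rate_rois = 1.0 / t_rois         # ~1953 frames/s reading just the two ROIs
```
In practice, row-shift and reset overheads reduce the gain somewhat, which is why eliminating host-computer intervention during readout matters as well.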

  17. Printed circuit board for a CCD camera head

    DOEpatents

    Conder, Alan D.

    2002-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.

  18. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    PubMed Central

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measurement of bed and ash layers, as well as for flying-particle detection and pursuit, using a single charge-coupled device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

  19. A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.

    PubMed

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measurement of bed and ash layers, as well as for flying-particle detection and pursuit, using a single charge-coupled device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  20. Liquid-Phase Circulation and Mixing in Multicomponent Droplets Vaporizing in a Laminar Convective Environment

    DTIC Science & Technology

    1993-10-15

    included an f/2.8 dual-port long-distance microscope coupled to a black and white CCD video camera. A long-pass filter (with a cut-off at 530 nm) was...evaporation rates of multicomponent droplets is needed for the calibration of exciplex-based vapor/liquid visualization techniques that are employed today in...Publishing Co., Houston, Texas. Hanlon, T. R., and Melton, L. A. (1992). Exciplex fluorescence thermometry of falling hexadecane droplets. Journal of Heat

  1. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  2. Surgical Videos with Synchronised Vertical 2-Split Screens Recording the Surgeons' Hand Movement.

    PubMed

    Kaneko, Hiroki; Ra, Eimei; Kawano, Kenichi; Yasukawa, Tsutomu; Takayama, Kei; Iwase, Takeshi; Terasaki, Hiroko

    2015-01-01

    To improve the state-of-the-art teaching system by creating surgical videos with synchronised vertical 2-split screens. An ultra-compact, wide-angle point-of-view camcorder (HX-A1, Panasonic) was mounted on the surgical microscope focusing mostly on the surgeons' hand movements. In combination with the regular surgical videos obtained from the CCD camera in the surgical microscope, synchronised vertical 2-split-screen surgical videos were generated with the video-editing software. Using synchronised vertical 2-split-screen videos, residents of the ophthalmology department could watch and learn how assistant surgeons controlled the eyeball, while the main surgeons performed scleral buckling surgery. In vitrectomy, the synchronised vertical 2-split-screen videos showed the surgeons' hands holding the instruments and moving roughly and boldly, in contrast to the very delicate movements of the vitrectomy instruments inside the eye. Synchronised vertical 2-split-screen surgical videos are beneficial for the education of young surgical trainees when learning surgical skills including the surgeons' hand movements. © 2015 S. Karger AG, Basel.

  3. Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.

    PubMed

    Kopp, O; Markert, S; Tornow, R P

    2002-01-01

    To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images was measured as a function of light intensity (I). The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There are small differences in linearity. The step response depends on the procedure of integration and readout.
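The sensitivity/linearity analysis described above reduces to a linear fit of pixel value against intensity. A minimal sketch with synthetic data (the slope of 2.0 ADU per intensity unit and the noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.linspace(0, 100, 11)                        # relative light intensity
PV = 2.0 * I + 5.0 + rng.normal(0, 0.5, I.size)    # near-linear camera response

slope, offset = np.polyfit(I, PV, 1)   # sensitivity = slope of PV(I)
r = np.corrcoef(I, PV)[0, 1]           # linearity estimate (correlation coeff.)
```
A camera with a perfectly linear response would give r = 1; deviations from linearity pull r below that.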

  4. Digital readout for image converter cameras

    NASA Astrophysics Data System (ADS)

    Honour, Joseph

    1991-04-01

    There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of a trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programmes offer different levels of image manipulation, including enhancement routines and parameter calculations with accuracy down to the pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.

  5. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested with the traditional infrared camera test system and the visible CCD test system separately, installation and alignment must be performed twice. The large-aperture test system for infrared cameras and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, thereby improving both the image quality of the wide-field collimator and the test accuracy. Its performance is comparable to that of foreign counterparts at a much lower price, and it has good market prospects.

  6. Apparatus and method for laser beam diagnosis

    DOEpatents

    Salmon, Jr., Joseph T.

    1991-01-01

    An apparatus and method are disclosed for accurate, real-time monitoring of the wavefront curvature of a coherent laser beam. Knowing the curvature, it can be quickly determined whether the laser beam is collimated, focusing (converging), or de-focusing (diverging). The apparatus includes a lateral interferometer for forming an interference pattern of the laser beam to be diagnosed. The interference pattern is imaged onto a spatial light modulator (SLM), whose output is a coherent laser beam with an image of the interference pattern impressed on it. The SLM output is focused to obtain the far-field diffraction pattern. A video camera, such as a CCD camera, monitors the far-field diffraction pattern and provides an electrical output indicative of the shape of the far-field pattern. Specifically, the far-field pattern comprises a central lobe and side lobes, whose relative positions are indicative of the radius of curvature of the beam. The video camera's electrical output may be provided to a computer which analyzes the data to determine the wavefront curvature of the laser beam.

  7. Apparatus and method for laser beam diagnosis

    DOEpatents

    Salmon, J.T. Jr.

    1991-08-27

    An apparatus and method are disclosed for accurate, real-time monitoring of the wavefront curvature of a coherent laser beam. Knowing the curvature, it can be quickly determined whether the laser beam is collimated, focusing (converging), or de-focusing (diverging). The apparatus includes a lateral interferometer for forming an interference pattern of the laser beam to be diagnosed. The interference pattern is imaged onto a spatial light modulator (SLM), whose output is a coherent laser beam with an image of the interference pattern impressed on it. The SLM output is focused to obtain the far-field diffraction pattern. A video camera, such as a CCD camera, monitors the far-field diffraction pattern and provides an electrical output indicative of the shape of the far-field pattern. Specifically, the far-field pattern comprises a central lobe and side lobes, whose relative positions are indicative of the radius of curvature of the beam. The video camera's electrical output may be provided to a computer which analyzes the data to determine the wavefront curvature of the laser beam. 11 figures.

  8. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

    The CCD Toolkit (1) is a software tool for testing CCD cameras which allows the user to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed in a way that ensures future compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
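Two of the listed measurements, total gain and readout noise, are commonly obtained with the photon-transfer method from a pair of bias frames and a pair of flat-field frames. A synthetic sketch (the camera parameters are assumed, and the toolkit's actual algorithms may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
gain_true, read_e, level_e = 2.0, 10.0, 20000.0   # assumed e-/ADU, e-, e-

def frame(mean_e):
    """Simulate one frame in ADU: Poisson photoelectrons + Gaussian read noise."""
    e = rng.poisson(mean_e, (256, 256)) + rng.normal(0, read_e, (256, 256))
    return e / gain_true

b1, b2 = frame(0.0), frame(0.0)          # bias frames (no light)
f1, f2 = frame(level_e), frame(level_e)  # flat-field frames

# Photon transfer: gain = signal / shot-noise variance (both in ADU)
sig_adu = 0.5 * (f1 + f2).mean() - 0.5 * (b1 + b2).mean()
var_adu = 0.5 * ((f1 - f2).var() - (b1 - b2).var())
gain = sig_adu / var_adu                          # recovers ~2.0 e-/ADU

# Read noise in electrons from the bias difference frame
noise_e = gain * (b1 - b2).std() / np.sqrt(2)     # recovers ~10 e-
```
Differencing paired frames cancels fixed-pattern (pixel-to-pixel) structure, which would otherwise inflate the variance estimates.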

  9. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems designed to replace human vision in quality control of industrial production. In this paper we discuss a new design environment developed for real-time defect detection using a reconfigurable FPGA and DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  10. SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.

    PubMed

    Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S

    2012-06-01

    To analyze Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by a CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam irradiated by the double scattering mode of a PROTEUS 235 proton therapy machine in NCC is 8 cm in width with a 13 g/cm2 range. The gain, dose rate and current of this beam are 50, 2 Gy/min and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate as a function of integration time (0.1 - 1.0 sec). The images from the CCD camera during the shortest integration time (0.1 sec) were acquired automatically and randomly, respectively. Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit and the Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The results of the experiment were compared with those of the simulation. We analyzed Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with the Geant4 simulation. We are going to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.

  11. Driving techniques for high frame rate CCD camera

    NASA Astrophysics Data System (ADS)

    Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu

    2008-03-01

    This paper describes a high-frame-rate CCD camera capable of operating at 100 frames/s. The camera utilizes the Kodak KAI-0340, an interline-transfer CCD with 640(vertical)×480(horizontal) pixels. Two output ports are used to read out the CCD data, with pixel rates approaching 30 MHz. Because the vertical charge-transfer registers of an interline-transfer CCD are not perfectly opaque, the sensor can produce undesired image artifacts, such as random white spots and smear generated in the registers. To increase the frame rate, a speed-up structure has been incorporated inside the KAI-0340, which makes it vulnerable to a vertical-stripe effect. These phenomena may severely impair the image quality. To solve these problems, some electronic methods of eliminating the artifacts are adopted. A special clocking mode dumps the unwanted charge quickly, and the fast readout of the images, cleared of smear, follows immediately. An amplifier is used to sense and correct the delay mismatch between the dual-phase vertical clock pulses; the transition edges become nearly coincident, so the vertical stripes disappear. Results obtained with the CCD camera are shown.

  12. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
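The underlying idea is a calibration line mapping mean image brightness to suspended-particle concentration. A sketch with fabricated frames standing in for webcam images (the concentrations, brightness model, and linear response are all assumptions for illustration):

```python
import numpy as np

def mean_intensity(rgb):
    """Average pixel value over all pixels and all three RGB channels."""
    return rgb.reshape(-1, 3).mean()

# Hypothetical calibration standards and synthetic frames whose brightness
# grows linearly with concentration (signal = 10 + 3*c in this toy model)
conc = np.array([0.0, 5.0, 10.0, 20.0])                  # assumed mg/L
frames = [np.full((40, 40, 3), 10.0 + 3.0 * c) for c in conc]
signal = np.array([mean_intensity(f) for f in frames])

slope, intercept = np.polyfit(conc, signal, 1)           # calibration line
unknown = (70.0 - intercept) / slope                     # invert for a sample
```
Real frames would also need background subtraction and a fixed region of interest so that lamp drift and cuvette edges do not bias the mean.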

  13. Tumor detection in mice by measurement of fluorescence decay time matrices

    NASA Astrophysics Data System (ADS)

    Cubeddu, R.; Pifferi, A.; Taroni, P.; Valentini, G.; Canti, G.

    1995-12-01

    An intensified CCD video camera has been used to measure the spatial distribution of the fluorescence decay time in tumor-bearing mice sensitized with hematoporphyrin derivative. Mice were injected with five doses of sensitizer, ranging from 0.1 to 10 mg/kg body weight. For every drug dose, the decay time of the exogenous fluorescence in the tumor is significantly longer than in normal tissues. The image created by associating a gray-shade scale with the decay time matrix of each mouse permits a reliable and precise detection of the neoplasia.
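A decay-time matrix of this kind can be built by acquiring gated images at several delays after the excitation pulse and fitting, per pixel, I(t) = A·exp(-t/τ). A numpy sketch with synthetic data (gate delays, decay times, and the "tumor" region are assumed for illustration):

```python
import numpy as np

t = np.array([10.0, 30.0, 50.0, 70.0])       # assumed gate delays, ns
tau = np.full((16, 16), 14.0)                # "normal tissue" decay time, ns
tau[4:8, 4:8] = 18.0                         # "tumor" region decays more slowly
stack = 1000.0 * np.exp(-t[:, None, None] / tau)   # one image per delay

# Per-pixel least-squares slope of ln(I) versus t; slope = -1/tau
logI = np.log(stack)
slope = ((t[:, None, None] - t.mean()) * (logI - logI.mean(axis=0))).sum(axis=0) \
        / ((t - t.mean()) ** 2).sum()
tau_map = -1.0 / slope                       # recovered decay-time matrix
```
Mapping `tau_map` through a gray-shade scale gives exactly the kind of image the abstract describes, with the slower-decaying region standing out.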

  14. Movement measurement of isolated skeletal muscle using imaging microscopy

    NASA Astrophysics Data System (ADS)

    Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.

    1997-05-01

    An imaging-microscopy methodology to measure contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s, is presented. For this, a CCD camera coupled to a microscope and a high-speed digital image acquisition system, capturing up to 960 images per second, are used. The images are digitally processed on a PC and displayed on a video monitor. A maximal field of 0.198 X 0.198 mm2 and a spatial resolution of 3.5 micrometers are obtained.

  15. Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Shiokawa, K.

    2016-12-01

    Atmospheric gravity waves contribute significantly to the wind/thermal balance of the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity-wave momentum flux depends strongly on the scale of the waves; the momentum fluxes of waves with horizontal scales of 10-100 km are particularly significant. Airglow imaging is a useful technique for observing the two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate the global behaviour of the waves. Recent studies with simultaneous/multiple airglow cameras have derived the spatial extent of the MLT waves. Such network imaging observations are advantageous for a better understanding of the coupling between the lower and upper atmosphere via gravity waves. In this study, we newly developed low-cost airglow cameras to enlarge the airglow imaging network. Each camera has a fish-eye lens with a 185-deg field of view and is equipped with a CCD video camera (WATEC WAT-910HX); the camera is small (W35.5 x H36.0 x D63.5 mm) and much less expensive than the airglow cameras used for the existing ground-based network (Optical Mesosphere Thermosphere Imagers (OMTI), operated by the Solar-Terrestrial Environment Laboratory, Nagoya University), and has a CCD sensor with 768 x 494 pixels that is sensitive enough to detect mesospheric OH airglow emission perturbations. In this presentation, we report some results of the performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, which hosts one of the OMTI stations. By summing 15 images (i.e., forming a 1-min composite), we recognised clear gravity-wave patterns in the images, with quality comparable to the OMTI images. Outreach and educational activities based on this research will also be reported.
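The 15-image compositing step works because co-adding N frames preserves the faint airglow signal while averaging down the random noise by roughly √N. A synthetic demonstration (the signal and noise amplitudes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 5.0                      # faint wave perturbation, arbitrary units
noise_sigma = 10.0                # per-frame random noise, arbitrary units

# Fifteen 1-second frames of the same (constant) signal plus independent noise
frames = signal + rng.normal(0, noise_sigma, (15, 100, 100))

composite = frames.mean(axis=0)   # the 1-min composite image

snr_single = signal / frames[0].std()
snr_stack = signal / composite.std()   # improves by ~sqrt(15) ~ 3.9x
```
This is why a small, noisy camera can still reach quality comparable to a larger instrument once minute-scale composites are formed, provided the wave pattern is roughly stationary over the stacking interval.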

  16. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high-cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and it has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed and binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or internally triggered. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard 'FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PyRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and the observations made.
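The core of such a reduction pipeline is the standard CCD calibration applied to every frame: subtract a master bias/dark, then divide by a normalised flat field. A minimal numpy sketch (the actual pipeline is Python/PyRAF and more elaborate; arrays here stand in for FITS data):

```python
import numpy as np

def reduce(raw, master_dark, master_flat):
    """Standard CCD reduction: dark subtraction, then flat-field correction."""
    flat_norm = master_flat / master_flat.mean()   # unit-mean flat
    return (raw - master_dark) / flat_norm

# Synthetic calibration data: uniform dark level, striped pixel response
dark = np.full((8, 8), 100.0)
flat = np.full((8, 8), 2000.0); flat[:, ::2] = 1000.0
scene = np.full((8, 8), 500.0)

# Simulate acquisition of the scene through this detector, then reduce it
raw = dark + scene * (flat / flat.mean())
assert np.allclose(reduce(raw, dark, flat), scene)
```
At hundreds of frames per second, applying this per-frame in a batch pipeline (rather than interactively) is what makes the data volume manageable.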

  17. IR Hiding: A Method to Prevent Video Re-shooting by Exploiting Differences between Human Perceptions and Recording Device Characteristics

    NASA Astrophysics Data System (ADS)

    Yamada, Takayuki; Gohshi, Seiichi; Echizen, Isao

    A method is described for preventing images and videos displayed on screens from being re-shot by digital cameras and camcorders. Conventional methods using digital watermarking for re-shooting prevention embed content IDs into images and videos, and they help to identify the place and time where the actual content was shot. However, these methods do not actually prevent digital content from being re-shot by camcorders. We developed a countermeasure that stops re-shooting by exploiting the differences between the sensory characteristics of humans and devices. The countermeasure requires no additional functions on user-side devices. It uses infrared light (IR) to corrupt the content recorded by CCD or CMOS devices, so that re-shot content is unusable. To validate the method, we developed a prototype system and implemented it on a 100-inch cinema screen. Experimental evaluations showed that the method effectively prevents re-shooting.

  18. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber-debris detection based on color identification. The system offers fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video-frame digitization and stepping-motor control as the main parts. The instrument can operate at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps: the first is fast, direct color identification of objects in the analyzed video images, and the second, more complex and time-consuming step analyzes the detected objects to identify single fiber fragments for subsequent analysis with more selective techniques.
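    A first-pass hue/saturation classification of the kind described above can be sketched as follows; the function name and thresholds are illustrative assumptions (a production system would use a vectorized implementation to reach video rate):

```python
import colorsys
import numpy as np

def hue_mask(rgb_image, hue_lo, hue_hi, sat_min=0.2):
    """First-pass color identification: flag pixels whose hue falls in a
    target band and whose saturation is high enough to be reliable."""
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb_image[y, x] / 255.0
            hue, sat, _ = colorsys.rgb_to_hsv(r, g, b)
            mask[y, x] = hue_lo <= hue <= hue_hi and sat >= sat_min
    return mask

# A 2x2 test image: one pure red pixel (hue 0.0) among gray pixels
img = np.array([[[255, 0, 0], [128, 128, 128]],
                [[120, 120, 120], [110, 110, 110]]], dtype=float)
mask = hue_mask(img, 0.0, 0.05)
print(mask.sum())  # -> 1
```

    The saturation floor rejects near-gray pixels whose hue is numerically meaningless, which is one reason HSI-style spaces suit this kind of screening better than raw RGB.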

  19. Effect of camera resolution and bandwidth on facial affect recognition.

    PubMed

    Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S

    2004-01-01

    This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) a one-chip charge-coupled-device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at a bandwidth of 128 kilobits per second (kbps); (2) VHS at 768 kbps; (3) a three-chip CCD camera, commonly used for Betacam (Beta) grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam at 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and actress, shown via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test and the critical difference (CD) at the 0.05 alpha level = 1.71, revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras with either 768 kbps or 128 kbps bandwidths for facial affect recognition compared to Betacam cameras.
The authors argue that the results of this study are a consequence of the VHS camera resolution/bandwidth combinations' ability to improve signal detection (i.e., facial affect recognition) by subjects in comparison to Betacam camera resolution/bandwidth combinations.

  20. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    Photographs show a fiber-optic light source, a microscope and charge-coupled-device (CCD) camera head connected to the camera body, the CCD camera body feeding data to an image-acquisition board in a PC, and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum of user intervention. The system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS was developed in the SML Laboratory at the NASA Glenn Research Center, has been adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  1. Portal imaging with flat-panel detector and CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

    1997-07-01

    This paper provides a comparison of imaging parameters of two portal imaging systems at 6 MV: a flat-panel detector and a CCD-camera-based portal imaging system. Measurements were made of the signal and noise and consequently of the signal-to-noise ratio per pixel as a function of exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with the CCD-camera-based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The paper also presents data on the screen's photon gain (the number of light photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuations in the screen's photon gain). Images of a Las Vegas-type aluminum contrast-detail phantom, located at the ISO-Center, were generated at an exposure of 1 MU. The CCD-camera-based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth, while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank-order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the camera and generate 'salt and pepper' noise, which interferes severely with attempts to determine accurate estimates of the image noise.
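    Rank-order filtering of the kind used here to remove direct hits can be sketched with a 3×3 median filter, which replaces each isolated salt-and-pepper spike with the median of its neighborhood; this is a generic illustration, not the authors' exact implementation:

```python
import numpy as np

def median_filter3(img):
    """3x3 rank-order (median) filter; edge pixels use edge padding."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y+3, x:x+3])
    return out

# Flat field of 100 counts with a single simulated "direct hit" spike
img = np.full((5, 5), 100.0)
img[2, 2] = 4000.0
clean = median_filter3(img)
print(clean[2, 2])  # -> 100.0
```

    Because the median ignores outliers, the spike is removed without blurring the surrounding signal, so noise estimates on the filtered image are no longer dominated by the direct hits.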

  2. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high-speed videography system with new concepts and techniques was recently developed at Zhejiang University. The system sends a series of short flash pulses to the moving object; all of the flash parameters, such as number, duration, interval, intensity and color, can be controlled by the computer as needed. A series of object images frozen by the flash pulses, carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote-control signals, or written to CD. The highest videography frequency is 30,000 images per second, and the shortest image-freezing time is several microseconds. The system has been applied in a wide range of fields: energy, chemistry, medicine, biological engineering, aerodynamics, explosions, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines for online, real-time monitoring and control.

  3. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral-density (ND) filtering in front of the theodolite's telescope and then safely view and measure the laser's boresight through the telescope without great risk to one's eyes. This method, workable for a Class II visible-wavelength laser, is not acceptable even to consider attempting for a Class IV laser, and is not applicable to an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is commercially available: a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. The other equipment needed to operate the camera consists of power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. An additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light; hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.

  4. Visualizing individual microtubules by bright field microscopy

    NASA Astrophysics Data System (ADS)

    Gutiérrez-Medina, Braulio; Block, Steven M.

    2010-11-01

    Microtubules are slender (˜25 nm diameter), filamentous polymers involved in cellular structure and organization. Individual microtubules have been visualized via fluorescence imaging of dye-labeled tubulin subunits and by video-enhanced, differential interference-contrast microscopy of unlabeled polymers using sensitive CCD cameras. We demonstrate the imaging of unstained microtubules using a microscope with conventional bright field optics in conjunction with a webcam-type camera and a light-emitting diode illuminator. The light scattered by microtubules is image-processed to remove the background, reduce noise, and enhance contrast. The setup is based on a commercial microscope with a minimal set of inexpensive components, suitable for implementation in a student laboratory. We show how this approach can be used in a demonstration motility assay, tracking the gliding motions of microtubules driven by the motor protein kinesin.
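    The background-removal and contrast-enhancement step described above can be sketched as follows; the frame values and the simple linear stretch are illustrative assumptions, not the authors' exact processing chain:

```python
import numpy as np

def enhance(frame, background):
    """Subtract a background frame (e.g. a time-averaged empty field),
    then linearly stretch the residual to fill the full 8-bit range."""
    diff = frame.astype(float) - background
    lo, hi = diff.min(), diff.max()
    if hi == lo:
        return np.zeros_like(diff)
    return (diff - lo) / (hi - lo) * 255.0

# Uniform 50-count background with one weakly scattering feature
bg = np.full((4, 4), 50.0)
frame = bg.copy()
frame[1, 1] = 60.0   # faint scatterer, only 10 counts above background
out = enhance(frame, bg)
print(out[1, 1], out[0, 0])  # -> 255.0 0.0
```

    This is why a webcam-class sensor suffices: after subtraction, the faint scattered-light signal occupies the entire output range, and frame averaging reduces the residual noise.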

  5. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

    The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  6. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image.
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
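    The precomputed geometric lookup table can be sketched as below, assuming an idealized model in which the pipe wall maps to concentric rings around the fisheye image center; in the actual system this mapping is determined from calibration images rather than from a closed-form model:

```python
import numpy as np

def build_unwarp_lut(out_w, out_h, cx, cy, r_min, r_max):
    """Precompute source-pixel coordinates for each pixel of the unwarped
    (azimuth x radius) panorama. Each output row samples one ring of the
    fisheye image; a real system would calibrate this mapping."""
    lut = np.empty((out_h, out_w, 2), dtype=np.float32)
    for v in range(out_h):
        r = r_min + (r_max - r_min) * v / (out_h - 1)  # radial distance
        for u in range(out_w):
            phi = 2.0 * np.pi * u / out_w              # azimuth around axis
            lut[v, u, 0] = cx + r * np.cos(phi)        # source x
            lut[v, u, 1] = cy + r * np.sin(phi)        # source y
    return lut

lut = build_unwarp_lut(out_w=8, out_h=4, cx=320, cy=240,
                       r_min=50, r_max=200)
print(lut[0, 0])  # source pixel for r=50, phi=0
```

    Because the trigonometry is done once at startup, unwarping each frame reduces to table-driven pixel fetches, which is what makes real-time operation on the stated hardware plausible.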

  7. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    NASA Astrophysics Data System (ADS)

    Oh, S. H.; Kang, Y. W.; Byun, Y. I.

    2007-12-01

    We present software developed for the multi-purpose CCD camera. The software can be used with all three types of Kodak CCD: the KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). For efficient camera control, the software runs as two independent processes: a CCD control program and a temperature/shutter operation program. The software is designed for fully automatic as well as manual operation under Linux, and is controlled via Linux user signals. We plan to use this software for an all-sky survey system and also for night-sky monitoring and sky observation. The read-out times are about 15 s, 64 s, and 134 s for the KAF-0401E, KAF-1602E, and KAF-3200E respectively, because they are limited by the data transmission speed of the parallel port. Larger-format CCDs require higher-speed data transmission, so we are considering porting this control software to the USB port.

  8. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.

    2017-07-01

    The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
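    The 11-bit-to-8-bit companding mentioned above is performed with onboard lookup tables; the flight tables are mission-specific, but a square-root law, a common choice because CCD shot noise grows as the square root of the signal, illustrates the idea:

```python
import numpy as np

# Illustrative 11-bit -> 8-bit companding table. Coarser steps at high
# signal lose little information when the noise floor itself grows as
# sqrt(signal). The actual Mastcam flight tables are mission-specific.
lut11to8 = np.round(np.sqrt(np.arange(2048) / 2047.0) * 255).astype(np.uint8)

def compand(dn11):
    """Map an 11-bit DN (0..2047) to an 8-bit DN (0..255) via the table."""
    return lut11to8[dn11]

print(compand(0), compand(2047))  # -> 0 255
```

    The table halves the downlinked data volume per pixel before JPEG compression, while keeping the quantization error below the shot noise over most of the signal range.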

  9. Proton radiation damage experiment on P-Channel CCD for an X-ray CCD camera onboard the ASTRO-H satellite

    NASA Astrophysics Data System (ADS)

    Mori, Koji; Nishioka, Yusuke; Ohura, Satoshi; Koura, Yoshiaki; Yamauchi, Makoto; Nakajima, Hiroshi; Ueda, Shutaro; Kan, Hiroaki; Anabuki, Naohisa; Nagino, Ryo; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Kohmura, Takayoshi; Ikeda, Shoma; Murakami, Hiroshi; Ozaki, Masanobu; Dotani, Tadayasu; Maeda, Yukie; Sagara, Kenshi

    2013-12-01

    We report on a proton radiation damage experiment on a P-channel CCD newly developed for an X-ray CCD camera onboard the ASTRO-H satellite. The device was exposed to up to 10⁹ protons cm⁻² at 6.7 MeV. The charge transfer inefficiency (CTI) was measured as a function of radiation dose. In comparison with the CTI measured over 6 years in the CCD camera onboard the Suzaku satellite, we confirmed that the new type of P-channel CCD is sufficiently radiation tolerant for space use. We also confirmed that a charge-injection technique and lowering the operating temperature work efficiently to reduce the CTI for our device. A comparison with other P-channel CCD experiments is also discussed.

  10. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512×512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements (≤25 e- read noise and ≤10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to perform several tests verifying the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.
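    A QE measurement against a calibrated photodiode reduces to comparing electron and photon rates: the photodiode current gives the incident photon rate through its known QE, and the camera's electron rate is divided by it. The function and numbers below are an illustrative sketch, not the CLASP calibration code:

```python
# Quantum efficiency from a calibrated reference photodiode.
E_CHARGE = 1.602176634e-19  # Coulombs per electron

def camera_qe(ccd_electrons_per_s, pd_current_amps, pd_qe):
    """QE = (electrons detected by CCD) / (photons arriving), where the
    photon rate comes from the photodiode current and its calibrated QE."""
    photon_rate = pd_current_amps / (E_CHARGE * pd_qe)  # photons/s
    return ccd_electrons_per_s / photon_rate

# Example: photodiode with QE 0.5 reading 1.602...e-13 A implies
# 2e6 photons/s; a CCD collecting 4e5 e-/s then has QE = 0.2
qe = camera_qe(4e5, 1.602176634e-13, 0.5)
print(round(qe, 3))  # -> 0.2
```

    In practice both devices must see the same monochromatic beam (or the beam must be measured alternately), and the CCD electron rate is derived from the measured gain in e-/DN.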

  11. CCD imaging system for the EUV solar telescope

    NASA Astrophysics Data System (ADS)

    Gong, Yan; Song, Qian; Ye, Bing-Xun

    2006-01-01

    In order to develop a detector suited to the space solar telescope, we have built a CCD camera system capable of working in the extreme ultraviolet (EUV) band. It is composed of a phosphor screen, an intensifier using a photocathode/micro-channel plate (MCP)/phosphor stack, an optical taper and a front-illuminated (FI) CCD chip without a protective window, all bonded together with optical cement. The working principle of the camera system is presented; moreover, we have employed a mesh experiment to calibrate and test the CCD camera system over 15-24 nm, where a position resolution of about 19 μm is obtained at wavelengths of 17.1 nm and 19.5 nm.

  12. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  13. Modified modular imaging system designed for a sounding rocket experiment

    NASA Astrophysics Data System (ADS)

    Veach, Todd J.; Scowen, Paul A.; Beasley, Matthew; Nikzad, Shouleh

    2012-09-01

    We present the design and system calibration results from the fabrication of a charge-coupled-device (CCD) based imaging system, built around a modified modular imager cell (MIC), for an ultraviolet sounding rocket mission. The heart of the imaging system is the MIC, which provides the video pre-amplifier circuitry and CCD clock-level filtering. The MIC is designed on a standard four-layer FR4 printed circuit board (PCB) with surface-mount and through-hole components for ease of testing and lower fabrication cost. The imager is a 3.5k × 3.5k LBNL p-channel CCD with enhanced quantum-efficiency response in the UV, achieved using delta-doping technology at JPL. Readout of the detector is performed by the recently released PCIe/104 Small-Cam CCD controller from Astronomical Research Cameras, Inc. (ARC). The PCIe/104 Small-Cam system has the same capabilities as its larger PCI counterparts, but in a smaller form factor, which makes it ideally suited for sub-orbital ballistic missions. Overall control is accomplished using a PCIe/104 computer from RTD Embedded Technologies, Inc. The design, fabrication, and testing were done at the Laboratory for Astronomical and Space Instrumentation (LASI) at Arizona State University. Integration and flight calibration are to be completed at the University of Colorado Boulder before integration into CHESS.

  14. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, decreases dramatically when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited, but it is strong with motion video. The data suggest that with the MPEG-2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  15. Millikan Movies

    NASA Astrophysics Data System (ADS)

    Zou, Xueli; Dietz, Eric; McGuire, Trevor; Fox, Louise; Norris, Tiara; Diamond, Brendan; Chavez, Ricardo; Cheng, Stephen

    2008-09-01

    Since Robert Millikan discovered the quantization of electric charge and measured its fundamental value over 90 years ago, his oil-drop experiment has become essential in physics laboratory classes at both the high school and college level. As physics instructors, however, many of us have used the traditional setup and experienced the tedium of collecting data and the frustration of students who obtain disappointing results for the charges on individual oil drops after two or three hours of hard work. Some novel approaches have been developed to make the data collection easier and more accurate. One method is to attach a CCD (charge coupled device) camera to the microscope of the traditional setup.1,2 Through the CCD camera, the motion of an oil drop can be displayed on a TV monitor and/or on a computer.2 This allows several students to view the image of a droplet simultaneously instead of taking turns squinting through the tiny microscope eyepiece on the traditional setup. Furthermore, the motion of an oil drop can be captured and analyzed using software such as VideoPoint,3 which enhances the accuracy of the measurement of the charge on each oil drop.2 While these innovative methods improve the convenience and efficiency with which data can be collected, an instructor has to invest a considerable amount of money and time so as to adapt the new techniques to his or her own classroom. In this paper, we will report on the QuickTime movies we made, which can be used to analyze the motions of 16 selected oil drops. These digital videos are available on the web4 for teachers to download and use with their own students. We will also share the procedure for analyzing the videos using Logger Pro,5 as well as our results for the charges on the oil drops and some pedagogical aspects of using the movies with students.
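    The per-drop analysis behind such videos is a short calculation: the drop radius follows from the field-free fall velocity via Stokes' law, and the charge from the fall and rise velocities together with the applied field. The sketch below uses typical constants and made-up velocities, not the article's own data:

```python
import math

# Typical constants for a Millikan oil-drop analysis (illustrative values)
ETA = 1.81e-5      # viscosity of air, Pa*s
RHO = 886.0        # density of the oil, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2

def drop_charge(v_fall, v_rise, E_field):
    """Charge on a drop from its terminal fall velocity (field off) and
    rise velocity (field on), via the Stokes radius of the drop."""
    r = math.sqrt(9 * ETA * v_fall / (2 * RHO * G))      # Stokes radius, m
    return 6 * math.pi * ETA * r * (v_fall + v_rise) / E_field

# Hypothetical measured velocities (m/s) and field strength (V/m)
q = drop_charge(v_fall=5.0e-5, v_rise=8.0e-5, E_field=3.0e5)
print(f"q = {q:.3e} C")
```

    Dividing q by the elementary charge and checking that the results for many drops cluster near integers is the pedagogical payoff of the experiment.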

  16. High-precision gauging of metal rings

    NASA Astrophysics Data System (ADS)

    Carlin, Mats; Lillekjendlie, Bjorn

    1994-11-01

    Raufoss AS designs and produces air-brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which undergoes 100% dimensional control. This article describes a low-price, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel-resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine-vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
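    The article does not spell out the subpixel algorithm; a standard algebraic (Kåsa) least-squares circle fit illustrates how subpixel center and radius estimates can be obtained from detected edge points:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solves
    x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Synthetic edge points on a circle of radius 5 centered at (10, 20)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 10 + 5 * np.cos(t)
y = 20 + 5 * np.sin(t)
a, b, r = fit_circle(x, y)
print(round(a, 3), round(b, 3), round(r, 3))  # -> 10.0 20.0 5.0
```

    Averaging over many edge points is what pushes the accuracy below a pixel; robust variants down-weight outlier edge points, in the spirit of the robust statistics the article emphasizes.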

  17. Structure Formation in Complex Plasma

    DTIC Science & Technology

    2011-08-24

    [Figure residue: particles suspended in a glass Dewar bottle (upper figures) or in the vapor of liquid helium (lower figures); labels include ring electrode, acrylic particles, green laser, RF plasma, CCD cameras, prism mirror, liquid He glass tube, liquid N₂ glass Dewar, helium gas, pressure.]

  18. eJAAVSO | aavso.org

    Science.gov Websites

    CHOICE Online Institute · CCD School Videos · Student Projects · Two Eyes, 3D · Variable Star Astronomy · H-R Diagram Plotting

  19. Simple and cost-effective hardware and software for functional brain mapping using intrinsic optical signal imaging.

    PubMed

    Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H

    2009-09-15

    We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
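    The map-making step reduces to a fractional reflectance change, ΔR/R, averaged over trials; the following is a minimal sketch with illustrative frame sizes, not the authors' published software:

```python
import numpy as np

def ios_map(stim_frames, baseline_frames):
    """Intrinsic optical signal map: fractional reflectance change dR/R,
    averaged over trials. Negative values indicate activated tissue
    (decreased reflectance)."""
    R0 = baseline_frames.mean(axis=0)   # mean pre-stimulus frame
    R = stim_frames.mean(axis=0)        # mean response frame
    return (R - R0) / R0

baseline = np.full((40, 4, 4), 1000.0)  # 40 trials, tiny 4x4 frames
stim = baseline.copy()
stim[:, 1, 1] = 998.0                   # 0.2% darkening at one pixel
dmap = ios_map(stim, baseline)
print(round(dmap[1, 1] * 100, 2))  # -> -0.2
```

    Because the signal is a fraction of a percent, the trial averaging and the sub-0.003% illumination stability quoted above are what make the map detectable even with an 8-bit camera.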

  20. Study of Cryogenic Complex Plasma

    DTIC Science & Technology

    2007-04-26

    enabled us to detect the formation of the Coulomb crystals as shown in Fig. 2. Liq. He Ring electrode Particles Green Laser RF Plasma ... Ring electrode CCD camera Prism mirror Liq. He Glass Tube Liq. N2 Glass Dewar Acrylic particles Gas Helium Green Laser CCD camera Pressure

  1. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
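Per frame, such a prism-based head delivers three co-registered single-band images. A minimal sketch of stacking them into a multispectral cube and flagging defect-like pixels with a band ratio follows; the ratio test and threshold are illustrative assumptions, not the authors' detection algorithm:

```python
import numpy as np

def stack_bands(b1, b2, b3):
    """Combine three co-registered single-band frames into an (h, w, 3)
    multispectral cube, one slice per interference-filter waveband."""
    return np.stack([b1, b2, b3], axis=-1).astype(np.float64)

def band_ratio_mask(cube, num=2, den=0, threshold=1.2):
    """Flag pixels whose band ratio exceeds a threshold -- a stand-in
    for the defect-spot detection mentioned in the abstract."""
    ratio = cube[..., num] / np.maximum(cube[..., den], 1e-9)
    return ratio > threshold

# Synthetic 4x4 frames with one anomalously bright NIR pixel.
b_vis = np.full((4, 4), 100.0)
b_mid = np.full((4, 4), 110.0)
b_nir = np.full((4, 4), 115.0)
b_nir[1, 1] = 150.0
cube = stack_bands(b_vis, b_mid, b_nir)
mask = band_ratio_mask(cube, num=2, den=0, threshold=1.2)
```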

  2. VizieR Online Data Catalog: IC 361 Vilnius photometry (Zdanavicius+, 2010)

    NASA Astrophysics Data System (ADS)

    Zdanavicius, J.; Bartasiute, S.; Boyle, R. P.; Vrba, F. J.; Zdanavicius, K.

    2015-03-01

    CCD observations in seven filters U,P,X,Y,Z,V,S of the Vilnius system plus the filter I of the Cousins system were carried out in December of 1999 with a 2K CCD camera on the 1m telescope of the USNO Flagstaff Station (Arizona), which gives a field 20' in diameter. Repeated observations in the Vilnius filters were made with the same telescope and a new 2Kx2K CCD camera in March of 2009. During the latter run we obtained well-calibrated CCD data only for the filters Y, Z, V, S, since observations through the remaining three filters on the succeeding night were curtailed by cirrus clouds. Additional frames in the Vilnius filters U,Y,V were taken for the central part of the field (12'x12') in December of 2008 with a 4K CCD camera on the 1.8m Vatican Advanced Technology Telescope (VATT) on Mt. Graham (Arizona). (1 data file).

  3. Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination.

    PubMed

    Goldin, F J; Meehan, B T; Hagen, E C; Wilkins, P R

    2010-10-01

    A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, an image-converting streak camera, and a CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step-index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refractive lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge-wire and dense plasma focus experiments.

  4. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby hurt the economic returns and market competitiveness of the producer. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, raising the quality of cotton textiles and reducing production cost, and the technology has favorable market value and development prospects. Optical detecting systems are in widespread use. In our system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to grayscale differences, and if heterogeneous fibre is present, the computer commands a gas nozzle to eject it. We adopt a monochrome LED array as the new detecting light source; its flicker, luminous-intensity stability, lumen depreciation and useful life are all superior to fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate wavelength for the light source, finally adopting a violet LED array. The whole hardware structure and software design are introduced in this paper.
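The detection step (grayscale deviation from the cotton background, then a command to the ejection nozzle) can be sketched for a single scanline as follows; the threshold value and function name are illustrative assumptions:

```python
import numpy as np

def detect_foreign_fiber(scanline, background_gray, threshold=30):
    """Flag pixels in one linear-CCD scanline whose grayscale deviates from
    the cotton background by more than `threshold`; the returned column
    indices would be handed to the ejection-nozzle driver."""
    deviation = np.abs(scanline.astype(np.int32) - background_gray)
    return np.flatnonzero(deviation > threshold)

# Synthetic scanline: bright cotton (~200) with a dark fiber at columns 40-44.
line = np.full(128, 200, dtype=np.uint8)
line[40:45] = 90
hits = detect_foreign_fiber(line, background_gray=200, threshold=30)
```

In practice the background level would be estimated adaptively per scanline rather than fixed, since cotton density fluctuates.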

  5. CTK: A new CCD Camera at the University Observatory Jena

    NASA Astrophysics Data System (ADS)

    Mugrauer, M.

    2009-05-01

    The Cassegrain-Teleskop-Kamera (CTK) is a new CCD imager which has been operated at the University Observatory Jena since the beginning of 2006. This article describes the main characteristics of the new camera. The properties of the CCD detector, the CTK image quality, as well as its detection limits for all filters are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.

  6. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a repeat cycle of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge coupled device (CCD) based Focal Plane Array (FPA). The on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA during the entire mission life. Four LEDs are operated in constant current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λp = 650 nm) for the development of the on-board calibration unit of the AWiFS camera of Resourcesat-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant current mode.

  7. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation, which affects space camera imaging, has been prepared and its effect on imaging analyzed. Based on the image-motion velocity matching error required by the TDI CCD's integration stages, the Monte Carlo statistical method is used to calculate the distribution histogram of the Earth's elevation in an image motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation data of the Earth's surface are then read from SRTM, and the elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation can be fetched from flash memory according to the latitude and longitude of the shooting point. When a point falls between two stored samples, linear interpolation is used, which copes well with the changes of rugged mountains and hills. Finally, the deviant framework and camera controller are used to test the effect of deviation-angle errors, and a TDI CCD camera simulation system with an object-point-to-image-point correspondence model is used to analyze the imaging MTF and a mutual-correlation similarity measure; the simulation adds the accumulated horizontal and vertical pixel offsets of TDI CCD imaging to simulate camera imaging as satellite attitude stability changes. The process is practical: it effectively controls the camera memory space while matching the image-motion velocity with very good precision.
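The lookup with linear interpolation between two stored elevation samples can be sketched as below. A 1-D latitude grid is assumed for simplicity, whereas the actual map is indexed by both latitude and longitude; function and variable names are illustrative:

```python
import bisect

def elevation_lookup(table, grid, coord):
    """Linearly interpolate between the two stored elevation samples that
    bracket `coord`. `table` holds elevations at the sorted grid
    coordinates in `grid` (e.g. latitudes of flash-memory entries)."""
    i = bisect.bisect_right(grid, coord) - 1
    i = max(0, min(i, len(grid) - 2))          # clamp to a valid segment
    t = (coord - grid[i]) / (grid[i + 1] - grid[i])
    return table[i] + t * (table[i + 1] - table[i])

# Elevation 275 m halfway between the 300 m and 250 m samples.
h = elevation_lookup([100.0, 300.0, 250.0], [10.0, 11.0, 12.0], 11.5)
```

A 2-D version would apply the same weighting along both axes (bilinear interpolation).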

  8. The In-flight Spectroscopic Performance of the Swift XRT CCD Camera During 2006-2007

    NASA Technical Reports Server (NTRS)

    Godet, O.; Beardmore, A.P.; Abbey, A.F.; Osborne, J.P.; Page, K.L.; Evans, P.; Starling, R.; Wells, A.A.; Angelini, L.; Burrows, D.N.; hide

    2007-01-01

    The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.

  9. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model of the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  10. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, limiting the suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine vision like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  11. An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data

    NASA Technical Reports Server (NTRS)

    Chandler, Gyanesh

    2007-01-01

    The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the High Resolution CCD Camera (HRCCD), the Infrared Multispectral Scanner (IRMSS), and the Wide-Field Imager (WFI). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).

  12. CTK-II & RTK: The CCD-cameras operated at the auxiliary telescopes of the University Observatory Jena

    NASA Astrophysics Data System (ADS)

    Mugrauer, M.

    2016-03-01

    The Cassegrain-Teleskop-Kamera (CTK-II) and the Refraktor-Teleskop-Kamera (RTK) are two CCD-imagers which are operated at the 25 cm Cassegrain and 20 cm refractor auxiliary telescopes of the University Observatory Jena. This article describes the main characteristics of these instruments. The properties of the CCD-detectors, the astrometry, the image quality, and the detection limits of both CCD-cameras, as well as some results of ongoing observing projects, carried out with these instruments, are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.

  13. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
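The paper's exact scheme is not reproduced in the abstract, but a block-adaptive lossless coder in this spirit can be sketched as same-channel DPCM followed by a per-block choice of Rice coding parameter. The lag-2 predictor (skipping over the interleaved CFA channel) and the Rice-code cost formula are stand-in assumptions, not the authors' method:

```python
import numpy as np

def block_adaptive_rice_bits(samples, block=64):
    """Estimate the coded size (in bits) of a 1-D raw-sample stream using
    per-block DPCM plus Rice coding with the parameter k chosen
    independently for each block."""
    s = np.asarray(samples, dtype=np.int64)
    resid = s.copy()
    resid[2:] = s[2:] - s[:-2]                 # lag-2 prediction within one CFA channel
    mapped = np.where(resid >= 0, 2 * resid, -2 * resid - 1)  # zigzag to non-negative
    total = 0
    for i in range(0, len(mapped), block):
        blk = mapped[i:i + block]
        # Rice code cost for parameter k: unary quotient bits + (k+1) per sample.
        total += min((blk >> k).sum() + len(blk) * (k + 1) for k in range(16))
    return int(total)

raw = np.arange(1024) // 4 + 512      # smooth 12-bit-style ramp signal
bits = block_adaptive_rice_bits(raw)  # well under 12 bits/sample for smooth data
```

Choosing k per block is what makes the scheme "block adaptive": flat regions get small k, textured regions larger k.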

  14. Measurement precision and noise analysis of CCD cameras

    NASA Astrophysics Data System (ADS)

    Wu, ZhenSen; Li, Zhiyang; Zhang, Ping

    1993-09-01

    The limit precision of a CCD camera with 10-bit analogue-to-digital conversion is estimated in this paper. The effect of noise on measurement precision and the noise characteristics are analyzed in detail. Means of processing the noise are also discussed, and a diagram of the noise properties is given.
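The noise sources such an analysis weighs can be sketched with a simple quadrature model combining shot noise, read noise and ADC quantization noise; all numerical defaults here are illustrative assumptions, not values from the paper:

```python
import math

def ccd_noise_e(signal_e, read_noise_e=10.0, adc_bits=10, full_well_e=20000.0):
    """Total per-pixel noise in electrons for a simple CCD model:
    shot noise sqrt(S), read noise, and ADC quantization noise q/sqrt(12)
    with step q = full_well / 2**bits, added in quadrature."""
    q = full_well_e / 2 ** adc_bits
    shot = math.sqrt(signal_e)
    quant = q / math.sqrt(12.0)
    return math.sqrt(shot ** 2 + read_noise_e ** 2 + quant ** 2)

def snr_db(signal_e, **kw):
    """Signal-to-noise ratio in decibels."""
    return 20 * math.log10(signal_e / ccd_noise_e(signal_e, **kw))
```

At high signal the shot-noise term dominates and noise grows as the square root of exposure, which is exactly the photon-noise-limited behavior noted for the portal imaging systems above.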

  15. Three-dimensional shape measurement system applied to superficial inspection of non-metallic pipes for the hydrocarbons transport

    NASA Astrophysics Data System (ADS)

    Arciniegas, Javier R.; González, Andrés. L.; Quintero, L. A.; Contreras, Carlos R.; Meneses, Jaime E.

    2014-05-01

    Three-dimensional shape measurement is a subject that consistently attracts high scientific interest and provides information for medical, industrial and research applications, among others. In this paper, we propose a three-dimensional (3D) reconstruction system for the superficial inspection of non-metallic pipes for hydrocarbon transport. The system is formed by a CCD camera, a video projector and a laptop, and is based on the fringe projection technique. System functionality is demonstrated by evaluating the quality of the three-dimensional reconstructions obtained, which reveal failures and defects on the surface of the object under study.
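The abstract does not specify the phase-retrieval algorithm; a common choice in fringe projection is four-step phase shifting, sketched here under that assumption:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase map from four fringe images shifted by pi/2 each:
    with I_k = A + B*cos(phi + k*pi/2), the standard formula is
    phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(np.asarray(i3, float) - i1, np.asarray(i0, float) - i2)

# Synthetic fringes over a known phase ramp.
x = np.linspace(0, np.pi / 2, 64)                        # true phase values
imgs = [100 + 50 * np.cos(x + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*imgs)                             # recovers x exactly
```

The recovered phase is wrapped to (-pi, pi]; converting it to pipe-surface height requires unwrapping plus the camera-projector calibration.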

  16. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  17. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
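The "particle finding and centroid location" module can be illustrated with an intensity-weighted centroid over thresholded pixels; the single-particle frame and threshold below are illustrative:

```python
import numpy as np

def particle_centroid(img, seed_threshold=50):
    """Intensity-weighted centroid (x, y) of all above-threshold pixels,
    for a frame containing a single particle image."""
    img = np.asarray(img, dtype=np.float64)
    w = img * (img > seed_threshold)             # zero out background pixels
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# A symmetric 3x3 blob centered at (x=5, y=3) on an 8x10 frame.
frame = np.zeros((8, 10))
frame[2:5, 4:7] = [[60, 80, 60], [80, 100, 80], [60, 80, 60]]
cx, cy = particle_centroid(frame)
```

A full implementation would first segment the image into separate particles and decompose overlaps, as the module list above indicates, before computing one centroid per particle.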

  18. [Present and prospects of telepathology].

    PubMed

    Takahashi, M; Mernyei, M; Shibuya, C; Toshima, S

    1999-01-01

    Nearly ten years have passed since telepathology was introduced and real-time pathology consultations were conducted. Long distance consultations in pathology, cytology, computed tomography and magnetic resonance imaging, which are referred to as telemedicine, clearly enhance the level of medical care in remote hospitals where no full-time specialists are employed. To transmit intraoperative frozen section images, we developed a unique hybrid system "Hi-SPEED". The imaging view through the CCD camera is controlled by a camera controller that provides NTSC composite video output for low resolution motion pictures and high resolution digital output for final interpretation on computer display. The results of intraoperative frozen section diagnosis between the Gihoku General Hospital 410 km from SRL showed a sensitivity of 97.6% for 82 cases of breast carcinoma and a false positive rate of 1.2%. This system can be used for second opinions as well as for consultations between cytologists and cytotechnologists.

  19. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II which was the first digital aerial mapping camera using a single ultra large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multi spectral camera heads for R,G, B and NIR. For the first time a 391 Megapixel large CMOS sensor had been used as PAN chromatic sensor, which is an industry record. Along with CMOS technology goes a range of technical benefits. The dynamic range of the CMOS sensor is approx. twice the range of a comparable CCD sensor and the signal to noise ratio is significantly better than with CCDs. Finally results from the first DMC III customer installations and test flights will be presented and compared with other CCD based aerial sensors.

  20. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving

    USGS Publications Warehouse

    Bell, James F.; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; Madsen, M.B.; Hardgrove, C.; Ravine, M.A.; Jensen, E.; Harker, D.; Anderson, Ryan; Herkenhoff, Kenneth E.; Morris, R.V.; Cisneros, E.; Deen, R.G.

    2017-01-01

    The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted ~2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) “true color” images, multispectral images in nine additional bands spanning ~400–1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
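The 11-to-8-bit companding step is naturally implemented as a lookup table. A square-root law is sketched here as a plausible stand-in, since it spends more output codes on dark pixels, roughly matching the shot-noise profile of the signal; the actual Mastcam table is not given in this abstract:

```python
import math

def sqrt_compand_lut(in_bits=11, out_bits=8):
    """Companding table mapping every in_bits-wide code to an out_bits-wide
    code via out = round(out_max * sqrt(x / in_max)). Illustrative only;
    not the flight Mastcam table."""
    in_max, out_max = 2 ** in_bits - 1, 2 ** out_bits - 1
    return [round(out_max * math.sqrt(x / in_max)) for x in range(in_max + 1)]

lut = sqrt_compand_lut()   # 2048 entries, 0..255, monotonically non-decreasing
```

Decompanding on the ground inverts the same law, so the dominant residual error is quantization in the bright end where codes are spaced furthest apart.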

  1. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The electrical signals from the chips are then recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity than conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (in future COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam-splitter devices. Additionally, techniques and equipment for the alignment and assembly of color beam-splitter multi-CCD devices, based on gluing with UV-curable adhesives, have been developed.

  2. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel Kodak KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  3. Investigation of solar active regions at high resolution by balloon flights of the solar optical universal polarimeter, extended definition phase

    NASA Technical Reports Server (NTRS)

    Tarbell, Theodore D.

    1993-01-01

    Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.

  4. A design of driving circuit for star sensor imaging camera

    NASA Astrophysics Data System (ADS)

    Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui

    2016-01-01

    The star sensor is a high-precision attitude-sensitive measuring instrument, which determines spacecraft attitude by detecting different positions on the celestial sphere. The imaging camera is an important portion of the star sensor. The purpose of this study is to design a driving circuit based on a Kodak CCD sensor. The design of the driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Laboratory testing of the driving circuit and imaging experiments show that the driving circuit meets the requirements of the Kodak CCD sensor.

  5. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions.
Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer scale CCD imager. Detailed ray trace modeling and experimental electro-optical data performance obtained from the curved imager will be presented at the conference.

  6. VizieR Online Data Catalog: GSC04778-00152 photometry and spectroscopy (Tuvikene+, 2008)

    NASA Astrophysics Data System (ADS)

    Tuvikene, T.; Sterken, C.; Eenmae, T.; Hinojosa-Goni, R.; Brogt, E.; Longa Pena, P.; Liimets, T.; Ahumada, M.; Troncoso, P.; Vogt, N.

    2012-04-01

    CCD photometry of GSC04778-00152 was carried out on 54 nights during 9 observing runs. In January 2006 the observations were made with the 41-cm Meade telescope at Observatorio Cerro Armazones (OCA), Chile, using an SBIG STL-6303E CCD camera (3072x2048 pixels, FOV 23.0'x15.4') and Johnson V filter. On 3 nights in December 2006 and on 2 nights in October 2007 we used the 2.4-m Hiltner telescope at the MDM Observatory, Arizona, USA, equipped with the 8kx8k Mosaic imager (FOV 23.6'x23.6'). In December 2006 and January 2007, we also used the 41-cm Meade telescope at OCA, using an SBIG ST-7XME CCD camera (FOV 5.9'x3.9') with no filter. Figure 3 shows all OCA light curves obtained with this configuration. At Tartu Observatory the observations were carried out in December 2006 and January 2007, using the 60-cm telescope with a SpectraSource Instruments HPC-1 camera (1024x1024 pixels, FOV 11.2'x11.2') and V filter. From January to March 2007 the system was observed using the 1.0-m telescope at SAAO, Sutherland, South Africa with an STE4 CCD camera (1024x1024 pixels, FOV 5.3'x5.3') and UBVRI filters. Spectroscopic observations were carried out at the Tartu Observatory, Estonia, using the 1.5-m telescope with the Cassegrain spectrograph ASP-32 and an Andor Newton CCD camera. (3 data files).

  7. PN-CCD camera for XMM: performance of high time resolution/bright source operating modes

    NASA Astrophysics Data System (ADS)

    Kendziorra, Eckhard; Bihler, Edgar; Grubmiller, Willy; Kretschmar, Baerbel; Kuster, Markus; Pflueger, Bernhard; Staubert, Ruediger; Braeuninger, Heinrich W.; Briel, Ulrich G.; Meidinger, Norbert; Pfeffermann, Elmar; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Holl, Peter; Kemmer, Josef; Soltau, Heike; von Zanthier, Christoph

    1997-10-01

    The pn-CCD camera is developed as one of the focal plane instruments for the European photon imaging camera (EPIC) on board the x-ray multi mirror (XMM) mission to be launched in 1999. The detector consists of four quadrants of three pn-CCDs each, which are integrated on one silicon wafer. Each CCD has 200 by 64 pixels (150 micrometers by 150 micrometers) with 280 micrometers depletion depth. One CCD of a quadrant is read out at a time, while the four quadrants can be processed independently of each other. In standard imaging mode the CCDs are read out sequentially every 70 ms. Observations of point sources brighter than 1 mCrab will be affected by photon pile-up. However, special operating modes can be used to observe bright sources up to 150 mCrab in timing mode with 30 microseconds time resolution and very bright sources up to several Crab in burst mode with 7 microseconds time resolution. We have tested one quadrant of the EPIC pn-CCD camera at line energies from 0.52 keV to 17.4 keV at the long beam test facility Panter in the focus of the qualification mirror module for XMM. In order to test the time resolution of the system, a mechanical chopper was used to periodically modulate the beam intensity. Pulse periods down to 0.7 ms were generated. This paper describes the performance of the pn-CCD detector in timing and burst readout modes with special emphasis on energy and time resolution.

  8. VizieR Online Data Catalog: Observation of six NSVS eclipsing binaries (Dimitrov+, 2015)

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. P.; Kjurkchieva, D. P.

    2017-11-01

    We managed to separate a sample of about 40 ultrashort-period candidates from the Northern Sky Variability Survey (NSVS, Wozniak et al. 2004AJ....127.2436W) appropriate for follow-up observations at Rozhen observatory (δ>-10°). Follow-up CCD photometry of the targets in the VRI bands was carried out with the three telescopes of the Rozhen National Astronomical Observatory. The 2-m RCC telescope is equipped with a VersArray CCD camera (1340x1300 pixels, 20 μm/pixel, field of 5.35x5.25 arcmin2). The 60-cm Cassegrain telescope is equipped with a FLI PL09000 CCD camera (3056x3056 pixels, 12 μm/pixel, field of 17.1x17.1 arcmin2). The 50/70 cm Schmidt telescope has a field of view (FoV) of around 1° and is equipped with a FLI PL 16803 CCD camera, 4096x4096 pixels, 9 μm/pixel size. (4 data files).

  9. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup using a reflection-based pseudo-stereo system is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
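    The color-domain separation step can be sketched minimally. The channel assignment below (one view red-coded, the other blue-coded) is an illustrative assumption, not the paper's exact optical scheme:

    ```python
    import numpy as np

    def split_views(rgb_frame):
        """Recover the two overlapped stereo views from one color frame by
        color-domain separation: view A is assumed red-coded and view B
        blue-coded (illustrative channel assignment)."""
        view_a = rgb_frame[..., 0].astype(float)  # red channel -> first view
        view_b = rgb_frame[..., 2].astype(float)  # blue channel -> second view
        return view_a, view_b
    ```

    Each returned grayscale view can then be fed to a standard DIC correlation engine as if it came from a separate camera.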

  10. Novel low-cost vision-sensing technology with controllable of exposal time for welding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Wang, Bin; Chen, Nian; Cao, Yipeng

    2005-02-01

    In robot welding, the position of the welding seam and the shape of the weld pool are detected by a CCD camera for quality control and real-time seam tracking. In some welding methods, such as TIG welding, it is difficult to always obtain a clear welding image. A novel idea is proposed to obtain clear images: the exposure time of the CCD camera is automatically controlled by the arc voltage or arc luminance. A set of special devices and circuits is added to a common industrial CCD camera to flexibly start or stop CCD exposure by controlling the internal clearing signal of the accumulated charge. Two special vision sensors based on this idea were developed; their exposure capture is triggered by the arc voltage and by the variation of the arc luminance, respectively. Two prototypes have been designed and manufactured. Experiments show that they can stably grab clear welding images at the appointed moment, which is a basis for feedback control of automatic welding.

  11. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi- CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video tele- conferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  12. Camera for Quasars in the Early Universe (CQUEAN)

    NASA Astrophysics Data System (ADS)

    Kim, Eunbin; Park, W.; Lim, J.; Jeong, H.; Kim, J.; Oh, H.; Pak, S.; Im, M.; Kuehne, J.

    2010-05-01

    The early universe at redshifts z ≳ 6 is where the first stars, galaxies, and quasars formed, starting the re-ionization of the universe. The discovery and study of quasars in the early universe allow us to witness the beginning of the history of astronomical objects. In order to perform a medium-deep, medium-wide imaging survey of quasars, we are developing an optical CCD camera, CQUEAN (Camera for QUasars in EArly uNiverse), which uses a 1024x1024 pixel deep-depletion CCD. It has enhanced QE compared to conventional CCDs in the wavelength band around 1 μm, and thus will be an efficient tool for observation of quasars at z > 7. It will be attached to the 2.1-m telescope at McDonald Observatory, USA. A focal reducer is designed to secure a larger field of view at the Cassegrain focus of the 2.1-m telescope. For long stable exposures, an auto-guiding system will be implemented using another CCD camera viewing an off-axis field. All these instruments will be controlled by software written in Python on a Linux platform. CQUEAN is expected to see first light during summer 2010.

  13. CCD Photometer Installed on the Telescope - 600 OF the Shamakhy Astrophysical Observatory II. The Technique of Observation and Data Processing of CCD Photometry

    NASA Astrophysics Data System (ADS)

    Abdullayev, B. I.; Gulmaliyev, N. I.; Majidova, S. O.; Mikayilov, Kh. M.; Rustamov, B. N.

    2009-12-01

    Basic technical characteristics of the CCD matrix U-47, made by Apogee Alta Instruments Inc., are provided. A short description of the various noise sources introduced by the optical system and the CCD camera is presented. The technique for obtaining calibration frames (bias, dark, and flat field) and the main stages of processing CCD photometry results are described.
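    The standard reduction those calibration frames support can be sketched as follows. This is a minimal version assuming linear dark current and a bias-free scaling of the flat; the function and parameter names are illustrative, not the observatory's pipeline:

    ```python
    import numpy as np

    def calibrate_frame(raw, bias, dark, flat, exposure_s, dark_exposure_s):
        """Minimal CCD calibration: subtract bias, subtract the dark current
        scaled to the science exposure time, then divide by the
        bias-subtracted flat field normalized to unit mean."""
        dark_scaled = (dark - bias) * (exposure_s / dark_exposure_s)
        flat_corr = flat - bias
        flat_norm = flat_corr / flat_corr.mean()  # unit-mean flat
        return (raw - bias - dark_scaled) / flat_norm
    ```

    With uniform synthetic frames (bias 100 DN, 5 DN/s dark, flat sensitivity 1), a 2 s exposure of a 500 DN scene is recovered exactly.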

  14. PC-based high-speed video-oculography for measuring rapid eye movements in mice.

    PubMed

    Sakatani, Tomoya; Isa, Tadashi

    2004-05-01

    We developed a new infrared video-oculographic system for on-line tracking of eye position in awake, head-fixed mice with high temporal resolution (240 Hz). The system consists of a commercially available high-speed CCD camera and image processing software written in LabVIEW, run on an IBM-PC with a plug-in video grabber board. The software calculates the center and area of the pupil by fitting a circular function to the pupil boundary, allowing robust and stable tracking of eye position in small animals such as mice. On-line calculation yields a reasonable circular fit of the pupil boundary even if part of the pupil is covered by shadows or occluded by eyelids or corneal reflections. The pupil position in the 2-D video plane is converted to the rotation angle of the eyeball by estimating its rotation center from an anatomical eyeball model. With this recording system, it is possible to perform quantitative analysis of rapid eye movements such as saccades in mice. This will provide a powerful tool for analyzing the molecular basis of oculomotor and cognitive functions using various lines of mutant mice.
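    A circle fit of the kind described works even on a partial arc of boundary points, which is why occlusion by eyelids is tolerable. The paper's LabVIEW implementation is not public, so the algebraic (Kåsa) least-squares fit below is an illustrative stand-in:

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit. From
        (x-a)^2 + (y-b)^2 = r^2 one gets the linear system
        2ax + 2by + c = x^2 + y^2 with c = r^2 - a^2 - b^2."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        r = np.sqrt(c + a**2 + b**2)
        return a, b, r
    ```

    Feeding it only an arc (as when the pupil top is hidden by the eyelid) still recovers the full center and radius.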

  15. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    Binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of the dynamic object. We found a linear array CCD binocular vision imaging system, which uses different calibration and reconstruct methods. On the basis of the binocular vision imaging system, the linear array CCD binocular vision imaging systems which has a wider field of view can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction of the imaging system. The system consists of two linear array cameras which were placed in special arrangements and a horizontal moving platform that can pick up objects. The internal and external parameters of the camera are obtained by calibrating in advance. And then using the camera to capture images of moving objects, the results are then matched and 3-D reconstructed. The linear array CCD binocular vision imaging systems can accurately measure the 3-D appearance of moving objects, this essay is of great significance to measure the 3-D morphology of moving objects.

  16. Scientific and technical collaboration between Russian and Ukrainian researchers and manufacturers on the development of astronomical instruments equipped with advanced detection services

    NASA Astrophysics Data System (ADS)

    Vishnevsky, G. I.; Galyatkin, I. A.; Zhuk, A. A.; Iblyaminova, A. F.; Kossov, V. G.; Levko, G. V.; Nesterov, V. K.; Rivkind, V. L.; Rogalev, Yu. N.; Smirnov, A. V.; Gumerov, R. I.; Bikmaev, I. F.; Pinigin, G. I.; Shulga, A. V.; Kovalchyk, A. V.; Protsyuk, Yu. I.; Malevinsky, S. V.; Abrosimov, V. M.; Mironenko, V. N.; Savchenko, V. V.; Ivaschenko, Yu. N.; Andruk, V. M.; Dalinenko, I. N.; Vydrevich, M. G.

    2003-01-01

    The paper presents the possibilities and a list of tasks that are solved by collaboration between research and production companies and astronomical observatories of Russia and Ukraine in the field of development, modernization, and equipping of various telescopes (the AMC, RTT-150, Zeiss-600 and quantum-optical system Sazhen-S types) with advanced charge-coupled device (CCD) cameras. CCD imagers and digital CCD cameras designed and manufactured by the "Electron-Optronic" Research & Production Company, St Petersburg, to equip astronomical telescopes and scientific instruments are described.

  17. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.
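    The TDI principle can be sketched as a shift-and-add simulation: charge shifts one row per clock toward readout while the moving scene illuminates the stages in step, so each output line integrates one scene line over all stages. This is a toy model under assumed one-row-per-clock motion; all names are illustrative:

    ```python
    import numpy as np

    def tdi_line_scan(illum_at, n_lines, n_stages, width):
        """Simulate a TDI sensor. illum_at(line) returns the 1-D intensity
        of scene line `line` (zeros outside the scene). At clock t, stage s
        sees scene line (t - s); after exposure, the last stage is read out
        and all charge shifts one row toward readout."""
        charge = np.zeros((n_stages, width))
        output = []
        for t in range(n_lines + n_stages):
            for s in range(n_stages):
                charge[s] += illum_at(t - s)      # expose each stage
            output.append(charge[-1].copy())      # read out last stage
            charge[1:] = charge[:-1].copy()       # shift toward readout
            charge[0] = 0.0
        return np.array(output)
    ```

    A scene line entering at stage 0 is re-exposed at every stage it crosses, so the readout of line L (at clock L + n_stages - 1) carries n_stages times the single-exposure signal: the sensitivity gain that makes TDI attractive for fast, dim scenes.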

  18. The research on visual industrial robot which adopts fuzzy PID control algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye

    2017-03-01

    The control system of a six-degrees-of-freedom visual industrial robot, based on multi-axis motion control cards and a PC, was researched. To handle the time-varying, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, achieving better control performance. In the vision system, a CCD camera acquires signals and sends them to a video processing card. After processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator can operate with the machine tool and vision system to realize grasping, processing, and verification functions. This work has implications for the manufacturing of industrial robots.
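    A minimal sketch of the adaptive fuzzy-PID idea follows. The paper does not give its rule base or gains, so the two-rule scheduler and every constant below are illustrative assumptions:

    ```python
    class FuzzyPID:
        """PID controller whose gains are scaled by a two-rule fuzzy
        scheduler: large |error| boosts Kp (fast response), small |error|
        boosts Kd (damping near the setpoint). Gains and the error scale
        e_max are illustrative assumptions."""
        def __init__(self, kp, ki, kd, e_max, dt):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.e_max, self.dt = e_max, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            # triangular membership of |error| in the "large" set (0..1)
            mu_large = min(abs(error) / self.e_max, 1.0)
            mu_small = 1.0 - mu_large
            kp = self.kp * (1.0 + 0.5 * mu_large)   # rule 1: large -> more P
            kd = self.kd * (1.0 + 0.5 * mu_small)   # rule 2: small -> more D
            self.integral += error * self.dt
            deriv = (error - self.prev_error) / self.dt
            self.prev_error = error
            return kp * error + self.ki * self.integral + kd * deriv
    ```

    Closing the loop around even a simple integrator plant shows the controller settling on the setpoint, which is the behavior the scheduling is meant to speed up and damp.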

  19. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    The software manual is presented. It describes the steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique. The PDT system is an all-electronic technique employing a CCD video camera and a large-memory frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
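    The displacement-to-velocity step of such a reduction can be sketched as below. This is a minimal version: the centroid tracks are assumed already decoded from the time-coded image, and the frame interval and pixel scale are illustrative parameters:

    ```python
    import numpy as np

    def velocity_vectors(tracks, dt, px_to_cm):
        """Convert particle tracks (each a sequence of (x, y) centroids in
        pixels) into mean 2-D velocity vectors in cm/s, assuming a uniform
        frame interval dt (s) and pixel scale px_to_cm (cm/pixel)."""
        vectors = []
        for track in tracks:
            p = np.asarray(track, dtype=float)
            disp = np.diff(p, axis=0)                 # per-frame displacement
            vectors.append(disp.mean(axis=0) * px_to_cm / dt)
        return np.array(vectors)
    ```

    A particle drifting 2 px right and 1 px up per 0.1 s frame at 0.05 cm/px maps to (1.0, 0.5) cm/s.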

  20. MIDAS: Software for the detection and analysis of lunar impact flashes

    NASA Astrophysics Data System (ADS)

    Madiedo, José M.; Ortiz, José L.; Morales, Nicolás; Cabrera-Caño, Jesús

    2015-06-01

    Since 2009 we have been running a project to identify flashes produced by the impact of meteoroids on the surface of the Moon. For this purpose we employ small telescopes and high-sensitivity CCD video cameras. To automatically identify these events, a software package called MIDAS was developed and tested. This package can also perform the photometric analysis of these flashes and estimate the value of the luminous efficiency. In addition, we have implemented in MIDAS a new method to establish the likely source of the meteoroids (known meteoroid stream or sporadic background). The main features of this computer program are analyzed here, and some examples of lunar impact events are presented.
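    A minimal sketch of automated flash detection of the kind MIDAS performs: MIDAS itself is not described at code level in this record, so the frame-differencing scheme and the 5-sigma threshold below are illustrative assumptions:

    ```python
    import numpy as np

    def detect_flashes(frames, k_sigma=5.0):
        """Flag transient brightenings in a video stack: a pixel is a
        candidate when its frame-to-frame increase exceeds k_sigma times a
        robust (MAD-based) noise estimate. Returns (frame, y, x) tuples."""
        events = []
        for i in range(1, len(frames)):
            diff = frames[i].astype(float) - frames[i - 1].astype(float)
            mad = np.median(np.abs(diff - np.median(diff)))
            thresh = k_sigma * max(1.4826 * mad, 1e-6)  # MAD -> sigma
            ys, xs = np.where(diff > thresh)
            events.extend((i, y, x) for y, x in zip(ys, xs))
        return events
    ```

    Because only positive excursions are flagged, the flash's disappearance in the following frame does not trigger a second (spurious) detection.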

  1. Design principles and applications of a cooled CCD camera for electron microscopy.

    PubMed

    Faruqi, A R

    1998-01-01

    Cooled CCD cameras offer a number of advantages over film in recording electron microscope images, including immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity, and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to that of film, and a great deal of our effort has been spent on designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre optics system in this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator) and optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two-dimensional crystals of bacteriorhodopsin, from wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two-dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on-line. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high-resolution images.

  2. Double and Multiple Star Measurements at the Southern Sky with a 50cm-Cassegrain and a Fast CCD Camera in 2008

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2011-04-01

    Using a 50cm Cassegrain in Namibia, recordings of double and multiple stars were made with a fast CCD camera and a notebook computer. From superpositions of "lucky images", measurements of 149 systems were obtained and compared with literature data. B/W and color images of some remarkable systems are also presented.

  3. Double and Multiple Star Measurements in the Northern Sky with a 10" Newtonian and a Fast CCD Camera in 2006 through 2009

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2010-07-01

    Using a 10" Newtonian and a fast CCD camera, recordings of double and multiple stars were made at high frame rates with a notebook computer. From superpositions of "lucky images", measurements of 139 systems were obtained and compared with literature data. B/W and color images of some noteworthy systems are also presented.

  4. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately equal to 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
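    A gain figure like the quoted ≈2.0 e-/DN is typically obtained by the photon-transfer method: for a shot-noise-limited signal, gain (e-/DN) equals mean over variance in DN, with a flat-pair difference used to cancel fixed-pattern noise. The sketch below is an illustrative implementation, not the MSFC test code:

    ```python
    import numpy as np

    def photon_transfer_gain(flat1, flat2, bias_level):
        """Estimate camera gain (e-/DN) by the photon-transfer method.
        Differencing two identical flats removes fixed-pattern noise and
        doubles the shot-noise variance: var(flat1 - flat2) = 2 * var_shot."""
        mean_sig = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
        var_shot = np.var(flat1.astype(float) - flat2.astype(float)) / 2.0
        return mean_sig / var_shot
    ```

    On a synthetic flat pair generated with a known gain of 2.0 e-/DN, the estimator recovers the gain to within a few percent.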

  5. Leonid Storm Flux Analysis From One Leonid MAC Video AL50R

    NASA Technical Reports Server (NTRS)

    Gural, Peter S.; Jenniskens, Peter; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    A detailed meteor flux analysis is presented of a seventeen-minute portion of one videotape, collected on November 18, 1999, during the Leonid Multi-instrument Aircraft Campaign. The data was recorded around the peak of the Leonid meteor storm using an intensified CCD camera pointed towards the low southern horizon. Positions of meteors on the sky were measured. These measured meteor distributions were compared to a Monte Carlo simulation, which is a new approach to parameter estimation for mass ratio and flux. Comparison of simulated flux versus observed flux levels, seen between 1:50:00 and 2:06:41 UT, indicates a magnitude population index of r = 1.8 +/- 0.1 and mass ratio of s = 1.64 +/- 0.06. The average spatial density of the material contributing to the Leonid storm peak is measured at 0.82 +/- 0.19 particles per square kilometer per hour for particles of at least absolute visual magnitude +6.5. Clustering analysis of the arrival times of Leonids impacting the earth's atmosphere over the total observing interval shows no enhancement or clumping down to time scales of the video frame rate. This indicates a uniformly random temporal distribution of particles in the stream encountered during the 1999 epoch. Based on the observed distribution of meteors on the sky and the model distribution, recommendations are made for the optimal pointing directions for video camera meteor counts during future ground and airborne missions.
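    The clustering analysis of arrival times can be sketched with a simple dispersion statistic on inter-arrival gaps (an illustrative measure, not necessarily the paper's exact method): for a uniformly random (Poisson) stream the gaps are exponential, so their coefficient of variation is near 1; values well above 1 indicate clumping, well below 1 quasi-regular spacing:

    ```python
    import numpy as np

    def clustering_index(arrival_times):
        """Coefficient of variation of inter-arrival gaps: ~1 for a Poisson
        (uniformly random) stream, >1 suggests clumping, <1 quasi-regular."""
        gaps = np.diff(np.sort(np.asarray(arrival_times, dtype=float)))
        return gaps.std() / gaps.mean()
    ```

    A result near 1 on the observed Leonid timestamps would be consistent with the "no clumping" conclusion above.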

  6. Dynamic Deformation Measurements of an Aeroelastic Semispan Model. [conducted in the Transonic Dynamics Tunnel at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Graves, Sharon S.; Burner, Alpheus W.; Edwards, John W.; Schuster, David M.

    2001-01-01

    The techniques used to acquire, reduce, and analyze dynamic deformation measurements of an aeroelastic semispan wind tunnel model are presented. Single-camera, single-view video photogrammetry (also referred to as videogrammetric model deformation, or VMD) was used to determine dynamic aeroelastic deformation of the semispan 'Models for Aeroelastic Validation Research Involving Computation' (MAVRIC) model in the Transonic Dynamics Tunnel at the NASA Langley Research Center. Dynamic deformation was determined from optical retroreflective tape targets at five semispan locations located on the wing from the root to the tip. Digitized video images from a charge coupled device (CCD) camera were recorded and processed to automatically determine target image plane locations that were then corrected for sensor, lens, and frame grabber spatial errors. Videogrammetric dynamic data were acquired at a 60-Hz rate for time records of up to 6 seconds during portions of this flutter/Limit Cycle Oscillation (LCO) test at Mach numbers from 0.3 to 0.96. Spectral analysis of the deformation data is used to identify dominant frequencies in the wing motion. The dynamic data will be used to separate aerodynamic and structural effects and to provide time history deflection data for Computational Aeroelasticity code evaluation and validation.

  7. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  8. MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera

    NASA Astrophysics Data System (ADS)

    Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2017-10-01

    An inexpensive upconverting MMW/THz imaging method is suggested here, based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, a MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation. This is easily and economically solved using a GDD array, which enables acquiring distance and magnitude information from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and measuring the time of flight (TOF).
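    The FMCW ranging mentioned above reduces to one relation: the chirp slope B/T converts the measured beat frequency into a round-trip delay, and range is half the delay times the speed of light. A minimal sketch (parameter names illustrative):

    ```python
    def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
        """Range from an FMCW beat frequency: slope = B/T, round-trip
        delay = f_beat / slope, range = c * delay / 2."""
        slope = sweep_bw_hz / sweep_time_s      # Hz per second of chirp
        delay = f_beat_hz / slope               # round-trip time of flight
        return c * delay / 2.0
    ```

    For a 1 GHz sweep over 1 ms, a 100 kHz beat corresponds to a 15 m target.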

  9. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras to a certain extent in static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording and system limitations are explained in the paper.

  10. Double Star Measurements at the Southern Sky with 50 cm Reflectors and Fast CCD Cameras in 2012

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2014-07-01

    A Cassegrain and a Ritchey-Chrétien reflector, both with 50 cm aperture, were used in Namibia for recordings of double stars with fast CCD cameras and a notebook computer. From superposition of "lucky images", measurements of 39 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Images of some remarkable systems are also presented.

  11. Extreme Faint Flux Imaging with an EMCCD

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier; Carignan, Claude; Gach, Jean-Luc; Guillaume, Christian; Lessard, Simon; Fortin, Charles-Anthony; Blais-Ouellette, Sébastien

    2009-08-01

    An EMCCD camera, designed from the ground up for extreme faint flux imaging, is presented. CCCP, the CCD Controller for Counting Photons, has been integrated with a CCD97 EMCCD from e2v technologies into a scientific camera at the Laboratoire d’Astrophysique Expérimentale (LAE), Université de Montréal. This new camera achieves subelectron readout noise and very low clock-induced charge (CIC) levels, which are mandatory for extreme faint flux imaging. It has been characterized in laboratory and used on the Observatoire du Mont Mégantic 1.6 m telescope. The performance of the camera is discussed and experimental data with the first scientific data are presented.

  12. Typical effects of laser dazzling CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types: the former includes the frame-transfer and interline-transfer types, the latter the linear and time-delay-integration types. All CCDs must perform four primary tasks in generating an image: charge generation, charge collection, charge transfer, and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, using techniques to suppress stray light, and electronic circuits are needed to process the CCD output signal, using many electronic techniques. The dazzling effects are the joint result of light distribution distortion and charge distribution distortion, which derive from the lens and the sensor, respectively. Strictly speaking, the lens does not distort the light distribution: lenses are generally so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make its stray light obvious. In CCD image sensors, a laser can induce very large charge generation; charge transfer inefficiency and charge blooming then distort the charge distribution. Commonly, the largest signal output from the CCD sensor is restricted by the capacity of the CCD's collection well and cannot exceed the dynamic range within which the subsequent electronic circuits operate normally, so the signal is not further distorted in the post-processing circuits. However, some circuit techniques can make dazzling effects appear differently in the final image.
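    The charge-blooming distortion described can be sketched with a toy 1-D overflow model. This illustrates the mechanism only (equal spill to both column neighbors, edge charge lost); it is not a calibrated device model:

    ```python
    import numpy as np

    def apply_blooming(charge, full_well):
        """Toy 1-D blooming model: any pixel above the full-well capacity
        keeps full_well electrons and spills the excess equally to its two
        column neighbors; charge spilled past the array edge is lost."""
        q = np.asarray(charge, dtype=float).copy()
        while (q > full_well).any():
            i = int(np.argmax(q))
            excess, q[i] = q[i] - full_well, float(full_well)
            if i > 0:
                q[i - 1] += excess / 2.0   # spill up the column
            if i + 1 < q.size:
                q[i + 1] += excess / 2.0   # spill down the column
        return q
    ```

    A single heavily overexposed pixel turns into a saturated streak along the transfer column, which is the characteristic blooming signature in dazzled images.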

  13. Taking the Observatory to the Astronomer

    NASA Astrophysics Data System (ADS)

    Bisque, T. M.

    1997-05-01

    Since 1992, Software Bisque's Remote Astronomy Software has been used by the Mt. Wilson Institute to allow interactive control of a 24" telescope and digital camera via modem. Software Bisque now introduces a comparable, relatively low-cost observatory system that allows powerful, yet "user-friendly" telescope and CCD camera control via the Internet. Utilizing software developed for the Windows 95/NT operating systems, the system offers point-and-click access to comprehensive celestial databases, extremely accurate telescope pointing, rapid download of digital CCD images by one or many users and flexible image processing software for data reduction and analysis. Our presentation will describe how the power of the personal computer has been leveraged to provide professional-level tools to the amateur astronomer, and include a description of this system's software and hardware components. The system software includes TheSky Astronomy Software™, CCDSoft CCD Astronomy Software™, TPoint Telescope Pointing Analysis System™ software, Orchestrate™ and, optionally, the RealSky CDs. The system hardware includes the Paramount GT-1100™ Robotic Telescope Mount, as well as third party CCD cameras, focusers and optical tube assemblies.

  14. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid-state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element), and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of surface acoustic wave resonator filters for wireless communication. More of these are described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 µm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor.
The camera is designed to operate in continuous mode with an output data rate of 5MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and at 60 frames per second are double the normal video frame rate.

  15. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode™." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
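The quoted repeat period can be sanity-checked: a counter of whole seconds that is 32 bits wide wraps after slightly more than 136 years, which matches the abstract's figure. (The actual Geo-TimeCode layout is not described here, so the 32-bit seconds counter is only an assumption.)

```python
# Years until an unsigned seconds counter wraps - an illustration of why a
# time code built on a 32-bit seconds count repeats only after ~136 years.
# (Assumption: the abstract does not specify the Geo-TimeCode layout.)
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, in seconds

def rollover_years(bits: int) -> float:
    """Years until an unsigned counter of `bits` bits wraps around."""
    return (2 ** bits) / SECONDS_PER_YEAR

print(round(rollover_years(32), 1))  # ~136.1 years
```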

  16. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low-speed bidirectional command and control. Increasingly, RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board, plugged into the mainboard of the image acquisition computer, that accepts the fiber directly, or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer, and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0, the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system, which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC-style computers as well as an HP laptop.

  17. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the object's movement - result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows is read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. The paper gives a detailed description of the digital TDI algorithm implemented on the FPGA, discusses aspects relevant to practical application, and lists key features of the camera.
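The shift-and-accumulate principle behind digital TDI can be sketched in a few lines of NumPy. This is a simplified software model, not the authors' FPGA implementation; rows that wrap around in `np.roll` would be discarded at the image edges in practice.

```python
import numpy as np

def digital_tdi(frames: list[np.ndarray], shift_per_frame: int) -> np.ndarray:
    """Accumulate successive frames, each shifted to track object motion.

    frames: list of 2-D arrays (rows x cols), one per sensor readout.
    shift_per_frame: object displacement in pixel rows between readouts.
    """
    acc = np.zeros_like(frames[0], dtype=np.int64)
    for i, frame in enumerate(frames):
        # Shift each frame so the moving object stays registered, then add:
        # this lengthens the effective exposure without motion blur.
        acc += np.roll(frame, -i * shift_per_frame, axis=0)
    return acc
```

A feature at row `i` in frame `i` (object moving one row per readout) accumulates into row 0 of the result, mimicking the on-chip charge shifting of a CCD TDI sensor.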

  18. Multi-scale auroral observations in Apatity: winter 2010-2011

    NASA Astrophysics Data System (ADS)

    Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.

    2012-03-01

    Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera, Watec WAT-902K (1/2"CCD), with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras, Guppy F-044B NIR (1/2"CCD), with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and 558 nm glass filters; (iii) two color cameras, Guppy F-044C NIR (1/2"CCD), with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season, the equipment was upgraded with special blocks for GPS time triggering, temperature control, and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events, and the web site providing access to available data previews.

  19. Multi-scale auroral observations in Apatity: winter 2010-2011

    NASA Astrophysics Data System (ADS)

    Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.

    2011-12-01

    Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera, Watec WAT-902K (1/2"CCD), with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras, Guppy F-044B NIR (1/2"CCD), with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and 558 nm glass filters; (iii) two color cameras, Guppy F-044C NIR (1/2"CCD), with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season, the equipment was upgraded with special blocks for GPS time triggering, temperature control, and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events, and the web site providing access to available data previews.

  20. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used, or considered for use, in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Furthermore, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: a CCD still camera can capture three-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD, whose overall spatial structure is several times larger than its photosensitive sensor areas, the CCD sensor is shifted in two dimensions in order to fill in the spatial gaps between adjacent sensor sites.
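The reconstruction side of such a pixel-shift scheme can be illustrated with a toy interleaving step (the array layout is hypothetical; the abstract does not give the FOR-A hardware details): four exposures taken at half-pixel offsets are woven into an image with twice the linear pixel count.

```python
import numpy as np

def interleave_shifted(exposures: np.ndarray) -> np.ndarray:
    """Combine four half-pixel-shifted exposures into a doubled-resolution image.

    exposures: array of shape (2, 2, H, W); exposures[dy, dx] is assumed to
    have been captured with the sensor displaced by half a pixel pitch dy
    times in y and dx times in x (sign convention is illustrative).
    """
    _, _, h, w = exposures.shape
    out = np.empty((2 * h, 2 * w), dtype=exposures.dtype)
    for dy in range(2):
        for dx in range(2):
            # Each shifted exposure fills one of the four interleaved grids.
            out[dy::2, dx::2] = exposures[dy, dx]
    return out
```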

  1. Double Star Measurements at the Southern Sky with a 50 cm Reflector and a Fast CCD Camera in 2014

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2015-04-01

    A Ritchey-Chrétien reflector with 50 cm aperture was used in Namibia for recordings of double stars with a fast CCD camera and a notebook computer. From superposition of "lucky images", measurements of 91 pairings in 79 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Some images of noteworthy systems are also presented.

  2. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used, and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three-CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.

  3. Measuring high-resolution sky luminance distributions with a CCD camera.

    PubMed

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% under cloudless skies and less than 20% under cloudy skies for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial resolution (more than a million pixels) and a high temporal resolution (one image every 20 s).

  4. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the system's nuclear-equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a charge-coupled device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during test.

  5. The development of a multifunction lens test instrument by using computer aided variable test patterns

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng

    2009-08-01

    A multi-function lens test instrument is reported in this paper. The system can evaluate the image resolution, image quality, depth of field, image distortion, and light-intensity distribution of the tested lens by changing the test patterns. It consists of the tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook computer that provides the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook; each displayed pattern passes through the tested lens and forms an image on the CCD sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD image with specially designed software. The major advantage of this system is that it can complete the whole test quickly, without interruption for part replacement, because the test patterns are displayed on the monitor and controlled by the notebook.

  6. Realization of Vilnius UPXYZVS photometric system for AltaU42 CCD camera at the MAO NAS of Ukraine

    NASA Astrophysics Data System (ADS)

    Vid'Machenko, A. P.; Andruk, V. M.; Samoylov, V. S.; Delets, O. S.; Nevodovsky, P. V.; Ivashchenko, Yu. M.; Kovalchuk, G. U.

    2005-06-01

    The paper describes the two-inch glass filters of the Vilnius UPXYZVS photometric system, made at the Main Astronomical Observatory of NAS of Ukraine for an AltaU42 CCD camera with a 2048×2048 pixel format. Response curves of the instrumental system are shown. Estimates of the limiting stellar magnitudes for each filter band, in comparison with the visual V band, are obtained. New software for automated processing of CCD frames has been developed in the LINUX/MIDAS/ROMAFOT program shell. It is planned to carry out observations with the purpose of creating a catalogue of primary UPXYZVS CCD standards in selected fields of the sky for some radio sources, globular and open clusters, etc. Numerical estimates of the astrometric and photometric accuracy are obtained.

  7. The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard; Rowe, David; Genet, Russell

    2017-01-01

    Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach - using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging - has been used, it turns out, for several years by the U.S. Naval Observatory.

  8. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    The original software FISHMet has been developed and tested for improving the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and for chromosome mapping by the fluorescence in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95, and supports computer analysis and editing of the results of pseudocolor in situ hybridization, including successive superimposition of the initial black-and-white images acquired through fluorescence filters (blue, green, and red) and editing of each image individually, or of the combined pseudocolor image, in BMP, TIFF, and JPEG formats. Components of the image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescence microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) have been used with good results in the study.

  9. Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities.

    PubMed

    Dai, Meiling; Yang, Fujun; He, Xiaoyuan

    2012-04-20

    A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color fringe pattern encoding a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to reduce the effect of variable surface reflectivity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used to correct the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
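The normalization and arcsine-decoding steps can be sketched as follows. This is a simplified model assuming the normalized fringe spans its full modulation range; the paper's exact scaling and unwrapping procedure are not given in the abstract.

```python
import numpy as np

def arcsine_phase(red: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Recover wrapped phase from a red fringe channel and a blue uniform channel.

    Dividing red by blue cancels the spatially varying surface reflectivity;
    an arcsine then maps the normalized sinusoid back to phase.
    """
    ratio = red / np.clip(blue, 1e-6, None)  # reflectivity-normalized fringe
    # Rescale to [-1, 1] using the observed extremes (an assumption; the
    # paper's exact normalization is not stated in the abstract).
    s = 2 * (ratio - ratio.min()) / (ratio.max() - ratio.min()) - 1
    return np.arcsin(s)  # wrapped phase in [-pi/2, pi/2]
```

For a synthetic fringe `red = blue * (0.5 + 0.5*sin(phi))` with `phi` restricted to one arcsine branch, the function returns `phi` itself.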

  10. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
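A minimal sketch of such an exposure/gain policy, assuming the signal level scales with the shutter-gain product (all parameter names are hypothetical; the paper's actual control law is not given in the abstract): the shutter time is capped by the allowed motion blur, and gain makes up the remainder.

```python
def auto_exposure(mean_level: float, target: float, shutter_us: float,
                  gain: float, max_shutter_us: float,
                  max_gain: float = 16.0) -> tuple[float, float]:
    """Return an updated (shutter_us, gain) pulling mean_level toward target.

    max_shutter_us encodes the motion-blur limit set by aircraft speed:
    shutter time is preferred (less noise), gain covers the shortfall.
    """
    correction = target / max(mean_level, 1e-3)
    total = shutter_us * gain * correction      # required shutter*gain product
    new_shutter = min(total, max_shutter_us)    # blur-limited shutter first
    new_gain = min(max(total / new_shutter, 1.0), max_gain)
    return new_shutter, new_gain
```

For example, an image at half the target brightness with the shutter already near its blur limit would keep the shutter at the limit and raise the gain instead.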

  11. Automated Wing Twist And Bending Measurements Under Aerodynamic Load

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Martinson, S. D.

    1996-01-01

    An automated system to measure the change in wing twist and bending under aerodynamic load in a wind tunnel is described. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two dimensional coordinates of wing targets with a fixed (and known) third dimensional coordinate, namely the spanwise location. The measurement technique has been used successfully at the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The advantages and limitations (including targeting) of the technique are discussed. A major consideration in the development was that use of the technique must not appreciably reduce wind tunnel productivity.
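As a toy illustration of how targets at a known spanwise station yield local twist (the geometry here is hypothetical and the NASA photogrammetric reduction is more involved), the chord-line rotation follows from the vertical coordinates of a fore and an aft target:

```python
import math

def twist_deg(z_fore: float, z_aft: float, chordwise_gap: float) -> float:
    """Local wing twist angle (degrees) implied by the vertical coordinates
    of two targets separated by `chordwise_gap` along the chord line."""
    return math.degrees(math.atan2(z_aft - z_fore, chordwise_gap))
```

Differencing wind-on and wind-off twist angles at each spanwise station then gives the change in twist under aerodynamic load.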

  12. Videogrammetric Model Deformation Measurement System User's Manual

    NASA Technical Reports Server (NTRS)

    Dismond, Harriett R.

    2002-01-01

    The purpose of this manual is to provide the user of the NASA VMD system, running the MDef software, Version 1.10, all information required to operate the system. The NASA Videogrammetric Model Deformation system consists of an automated videogrammetric technique used to measure the change in wing twist and bending under aerodynamic load in a wind tunnel. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two-dimensional coordinates of wing targets with fixed (and known) third dimensional coordinate, namely the span-wise location. The major consideration in the development of the measurement system was that productivity must not be appreciably reduced.

  13. An imaging system for PLIF/Mie measurements for a combusting flow

    NASA Technical Reports Server (NTRS)

    Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.

    1990-01-01

    The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser, which can freeze motion in the flowfield, is used as the illumination source. A set of lenses forms the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid-state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to the RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.

  14. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers give only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI is, however, dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
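The averaging-with-rejection step can be sketched in NumPy (an illustrative model, not the OPSO C/OpenGL code; the MAD-based outlier threshold is an assumption):

```python
import numpy as np

def clean_average(frames: np.ndarray, bad_mask: np.ndarray,
                  k: float = 3.0) -> np.ndarray:
    """Average a stack of frames, ignoring bad pixels and rejecting outliers.

    frames: (N, H, W) stack of grabbed TV frames (float).
    bad_mask: (H, W) bool array, True marks a known bad pixel.
    k: rejection threshold in robust (MAD-based) sigma units.
    """
    med = np.median(frames, axis=0)
    # Robust per-pixel scatter via the median absolute deviation (MAD).
    sigma = 1.4826 * np.median(np.abs(frames - med), axis=0) + 1e-9
    keep = np.abs(frames - med) <= k * sigma   # reject outlier samples
    keep &= ~bad_mask                          # drop bad pixels entirely
    total = np.where(keep, frames, 0.0).sum(axis=0)
    count = keep.sum(axis=0)
    # Fall back to the per-pixel median where every sample was rejected.
    return np.where(count > 0, total / np.maximum(count, 1), med)
```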

  15. Optics design of laser spotter camera for ex-CCD sensor

    NASA Astrophysics Data System (ADS)

    Nautiyal, R. P.; Mishra, V. K.; Sharma, P. K.

    2015-06-01

    Development of laser-based instruments such as laser range finders and laser designators has gained prominence in modern military applications. Aiming the laser at the target is done with the help of a boresighted graticule, as the human eye cannot see the laser beam directly. To view the laser spot, two types of detectors are available, InGaAs detectors and Ex-CCD detectors, the latter being the cost-effective solution. In this paper, the optics design for an Ex-CCD-based camera is discussed. The designed system is lightweight and compact and can see a 1064 nm pulsed laser spot up to a range of 5 km.

  16. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation; a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  17. Environmental performance evaluation of an advanced-design solid-state television camera

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of an advanced-design black-and-white solid-state television camera which can survive exposure to space environmental conditions was undertaken. A 380 x 488 element buried-channel CCD is utilized as the image sensor to ensure compatibility with 525-line transmission and display equipment. Specific camera design approaches selected for study and analysis included: (1) component and circuit sensitivity to temperature; (2) circuit board thermal and mechanical design; and (3) CCD temperature control. Preferred approaches were determined and integrated into the final design for two deliverable solid-state TV cameras. One of these cameras was subjected to environmental tests to determine stress limits for exposure to vibration, shock, acceleration, and temperature-vacuum conditions. These tests indicate performance at the design goal limits can be achieved for most of the specified conditions.

  18. New method for obtaining position and time structure of source in HDR remote afterloading brachytherapy unit utilizing light emission from scintillator

    PubMed Central

    Hanada, Takashi; Katsuta, Shoichi; Yorozu, Atsunori; Maruyama, Koichi

    2009-01-01

    When using an HDR remote afterloading brachytherapy unit, treatment results can be greatly influenced by both the source position and the treatment time. The purpose of this study is to obtain information on the source of the HDR remote afterloading unit, such as its position and time structure, with a simple system consisting of a plastic scintillator block and a charge-coupled device (CCD) camera. The CCD camera recorded images of the scintillation luminescence at a fixed rate of 30 frames per second in real time. The source position and time structure were obtained by analyzing the recorded images. For a preset source-step interval of 5 mm, the measured value of the source position was 5.0 ± 1.0 mm, with a pixel resolution of 0.07 mm in the recorded images. For a preset transit time of 30 s, the measured value was 30.0 ± 0.6 s, with the CCD camera's time resolution of 1/30 s. This system enabled us to obtain the source dwell time and movement time. Therefore, parameters such as the 192Ir source position, transit time, dwell time, and movement time at each dwell position can be determined quantitatively using this plastic scintillator-CCD camera system. PACS number: 87.53.Jw
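As a sketch of how such frame-based source measurements work (not the authors' actual analysis code; the function names and the synthetic frame are illustrative): the spot position follows from an intensity-weighted centroid scaled by the reported 0.07 mm pixel resolution, and timing follows from counting frames at the fixed 30 fps rate.

```python
import numpy as np

FRAME_RATE = 30.0     # frames per second (the camera's fixed recording rate)
MM_PER_PIXEL = 0.07   # pixel resolution reported for the recorded images

def spot_centroid(frame):
    """Intensity-weighted centroid (row, col) of a scintillation image."""
    frame = np.asarray(frame, dtype=float)
    rows, cols = np.indices(frame.shape)
    total = frame.sum()
    return (rows * frame).sum() / total, (cols * frame).sum() / total

def dwell_time_s(n_frames):
    """Time the source spent at one position, from a frame count."""
    return n_frames / FRAME_RATE

# Synthetic frame with a single bright spot at row 40, column 100.
frame = np.zeros((80, 200))
frame[40, 100] = 1.0
row, col = spot_centroid(frame)

# At 0.07 mm/pixel, a spot displacement of ~71.4 pixels is a 5 mm source step.
step_mm = 71.4 * MM_PER_PIXEL
```

With this model, a 30 s transit corresponds to 900 recorded frames, matching the 1/30 s time resolution quoted in the abstract.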

  19. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address the problem of high-precision attitude measurement of a flying target over a large space and wide field of view, and after comparing existing measurement methods, we propose a multi-cooperative-target attitude measurement system in which two array CCDs assist in identification for three linear CCDs. This avoids the nonlinear system errors, the large number of calibration parameters, and the overly complicated constraints among camera positions of the existing nine-linear-CCD spectroscopic test system. Mathematical models of the binocular vision system and the three-linear-CCD test system are established. Three red LED light points form a cooperative triangle whose vertex coordinates are given in advance by a coordinate measuring machine; three blue LED light points are added on the sides of the triangle as auxiliaries, so that the array CCDs can identify the three red LED light points more easily, while a red filter installed on each linear CCD camera filters out the blue LED light points and reduces stray light. The array CCDs measure the spots, identify them, and compute the spatial coordinates of the red LED light points, while the linear CCDs measure the three red LED spots to solve the linear CCD test system, from which 27 solutions can be drawn. Coordinates measured by the array CCDs assist the linear CCDs in spot identification, solving the difficult problem of multi-target identification with linear CCDs. Exploiting the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed, so that the position of the spot's energy center changes only slightly over the depth-of-convergence range in the direction perpendicular to the optical axis, ensuring high-precision image quality. The overall test system improves the speed and precision of spatial object attitude measurement.

  20. Experimental setup for camera-based measurements of electrically and optically stimulated luminescence of silicon solar cells and wafers.

    PubMed

    Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf

    2011-03-01

    We report in detail on the luminescence imaging setup developed in our laboratory over the last several years. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence) using a power supply for carrier injection, or optically (photoluminescence) using a laser as the illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured spatially resolved using a camera. We give details of the various components, including cameras, optical filters for electro- and photoluminescence, the semiconductor laser, and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera with a back-illuminated silicon CCD camera incorporating an electron-multiplier gain, and with a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operating conditions.
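Camera comparisons of this kind typically rest on a quadrature noise model; a minimal sketch (the function and the numbers are illustrative, not taken from the paper):

```python
import math

def snr(signal_e, dark_e_per_s, t_exp_s, read_noise_e):
    """Signal-to-noise ratio with photon shot noise, dark-current shot noise,
    and read noise added in quadrature (all quantities in electrons)."""
    dark_e = dark_e_per_s * t_exp_s
    total_noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return signal_e / total_noise

# Shot-noise-limited regime: SNR approaches sqrt(signal).
shot_limited = snr(10_000, 0.0, 1.0, 0.0)      # 100.0
# Read-noise-limited regime: read noise dominates at low signal.
read_limited = snr(100, 0.0, 1.0, 10.0)
```

In the shot-noise-limited regime the SNR grows as the square root of the signal, which is why low-noise detectors matter most at the faint end.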

  1. A compact high-speed pnCCD camera for optical and x-ray applications

    NASA Astrophysics Data System (ADS)

    Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo

    2012-07-01

    We developed a camera with a 264 × 264 pixel pnCCD (48 μm pixel size, 450 μm thickness) for X-ray and optical applications. It has high quantum efficiency and can be operated at frame rates up to 400 Hz (readout noise ≈ 2.5 e⁻ ENC) or 1000 Hz (≈ 4.0 e⁻ ENC). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited for ground-based preparation measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.

  2. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision optical registration requirement of a spaceborne low-light remote sensing camera, dual-channel optical registration of a CCD and an EMCCD is achieved with a high-magnification optical registration system. A scheme covering system-level optical registration and registration accuracy is proposed for a spaceborne low-light remote sensing camera with short focal depth and wide field of view, including analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show clear imaging, with MTF and registration accuracy meeting requirements, providing an important guarantee for obtaining high-quality image data in orbit.

  3. VizieR Online Data Catalog: Observed light curve of (3200) Phaethon (Ansdell+, 2014)

    NASA Astrophysics Data System (ADS)

    Ansdell, M.; Meech, K. J.; Hainaut, O.; Buie, M. W.; Kaluna, H.; Bauer, J.; Dundon, L.

    2017-04-01

    We obtained time series photometry over 15 nights from 1994 to 2013. All but three nights used the Tektronix 2048x2048 pixel CCD camera on the University of Hawaii 2.2 m telescope on Mauna Kea. Two nights used the PRISM 2048x2048 pixel CCD camera on the Perkins 72 inch telescope at the Lowell Observatory in Flagstaff, Arizona, while one night used the Optic 2048x4096 CCD camera also on the University of Hawaii 2.2 m telescope. All observations used the standard Kron-Cousins R filter with the telescope guiding on (3200) Phaethon at non-sidereal rates. Raw images were processed with standard IRAF routines for bias subtraction, flat-fielding, and cosmic ray removal (Tody, 1986SPIE..627..733T). We constructed reference flat fields by median combining dithered images of either twilight or the object field (in both cases, flattening reduced gradients to <1% across the CCD). We performed photometry using the IRAF phot routine with circular apertures typically 5'' in radius, although aperture sizes changed depending on the night and/or exposure as they were chosen to consistently include 99.5% of the object's light. (1 data file).
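The circular-aperture photometry step can be sketched as follows (a minimal NumPy stand-in for IRAF's phot routine; the function name and synthetic image are illustrative, not the authors' pipeline):

```python
import numpy as np

def aperture_flux(image, x0, y0, radius, sky=0.0):
    """Sum background-subtracted counts inside a circular aperture centered
    at (x0, y0), using a simple whole-pixel inclusion test."""
    yy, xx = np.indices(image.shape)
    in_aperture = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return float((image[in_aperture] - sky).sum())

# On a flat synthetic image the flux is just the pixel count in the aperture.
flat = np.ones((21, 21))
flux = aperture_flux(flat, 10, 10, 3)
```

Real photometry codes additionally use fractional pixel weighting at the aperture edge and estimate the sky level from a surrounding annulus; growing the radius until ~99.5% of the object's light is enclosed mirrors the aperture-size choice described above.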

  4. Cryostat and CCD for MEGARA at GTC

    NASA Astrophysics Data System (ADS)

    Castillo-Domínguez, E.; Ferrusca, D.; Tulloch, S.; Velázquez, M.; Carrasco, E.; Gallego, J.; Gil de Paz, A.; Sánchez, F. M.; Vílchez Medina, J. M.

    2012-09-01

    MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is the new integral field unit (IFU) and multi-object spectrograph (MOS) instrument for the GTC. The spectrograph subsystems include the pseudo-slit, the shutter, the collimator with a focusing mechanism, pupil elements on a volume phase holographic grating (VPH) wheel and the camera joined to the cryostat through the last lens, with a CCD detector inside. In this paper we describe the full preliminary design of the cryostat which will harbor the CCD detector for the spectrograph. The selected cryogenic device is an LN2 open-cycle cryostat which has been designed by the "Astronomical Instrumentation Lab for Millimeter Wavelengths" at INAOE. A complete description of the cryostat main body and CCD head is presented as well as all the vacuum and temperature sub-systems to operate it. The CCD is surrounded by a radiation shield to improve its performance and is placed in a custom made mechanical mounting which will allow physical adjustments for alignment with the spectrograph camera. The 4k x 4k pixel CCD231 is our selection for the cryogenically cooled detector of MEGARA. The characteristics of this CCD, the internal cryostat cabling and CCD controller hardware are discussed. Finally, static structural finite element modeling and thermal analysis results are shown to validate the cryostat model.

  5. Platform for intraoperative analysis of video streams

    NASA Astrophysics Data System (ADS)

    Clements, Logan; Galloway, Robert L., Jr.

    2004-05-01

    Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 so that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near-RT rates, both independently of and while running the IIGS system.

  6. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    Classified and Sensitive Information ... Weapons Systems Video, Gun Camera Video, and Lipstick Cameras ... A SECDEF and CJCS message to commanders stated, "Put in place mechanisms and processes ... of public communication activities." The 10 February 2003 PAG stated, "Use of lipstick and helmet-mounted cameras on combat sorties is approved

  7. Design of an ROV-based lidar for seafloor monitoring

    NASA Astrophysics Data System (ADS)

    Harsdorf, Stefan; Janssen, Manfred; Reuter, Rainer; Wachowicz, Bernhard

    1997-05-01

    In recent years, accidents of ships with chemical cargo have led to strong impacts on the marine ecosystem and to risks for pollution-control and clean-up teams. In order to enable a fast, safe, and efficient reaction, a new optical instrument has been designed for the inspection of objects on the seafloor by range-gated scattered-light images, as well as for the detection of substances by measuring the laser-induced emission on the seafloor and within the water column. This new lidar is operated as a payload of a remotely operated vehicle (ROV). A Nd:YAG laser is employed as the light source of the lidar. In the video mode, the submarine lidar system uses the 2nd harmonic laser pulse to illuminate the seafloor. Elastically scattered and reflected light is collected with a gateable intensified CCD camera. The beam divergence of the laser matches the camera field of view. Synchronizing the laser emission and the camera gate time suppresses backscattered light from the water column, so that only the light backscattered by the object is recorded. This results in a contrast-enhanced video image, which increases the visibility range in turbid water by up to four times. Substances seeping out of a container are often invisible in video images because of their low contrast. Therefore, a fluorescence lidar mode is integrated into the submarine lidar: the 3rd harmonic Nd:YAG laser pulse is applied, and the emission response of the water body between the ROV and the seafloor, and of the seafloor itself, is recorded at variable wavelengths with maximum depth resolution. Target selection is realized by a 2D scanner, which allows targets within the range-gated image to be selected for a measurement of fluorescence. The analysis of the time- and spectrally-resolved signals permits the detection, exact location, and classification of fluorescent and/or absorbing substances.

  8. A goggle navigation system for cancer resection surgery

    NASA Astrophysics Data System (ADS)

    Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald

    2014-02-01

    We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.

  9. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks sufficient background light that the hydrogen flame is brighter than the background; the second CCD camera uses a 1100 nm long-pass filter, which blocks the solar background in full-sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into a visible image. The operator can select the appropriately filtered camera depending on the current light conditions. In addition, a narrow-band-pass-filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  10. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
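A minimal sketch of the kind of mapping described above, from a PTZ pointing direction to a pixel in an equirectangular (spherical) panorama; the angle ranges, panorama size, and function name are assumptions for illustration, not the authors' calibrated control model:

```python
def ptz_to_panorama(pan_deg, tilt_deg, pano_w, pano_h,
                    pan_range=(-180.0, 180.0), tilt_range=(-90.0, 90.0)):
    """Map a pan/tilt pointing direction to equirectangular panorama pixels.

    Pan maps linearly to the horizontal axis; tilt maps (inverted, so 'up'
    lands on the top row) to the vertical axis.
    """
    u = (pan_deg - pan_range[0]) / (pan_range[1] - pan_range[0]) * (pano_w - 1)
    v = (tilt_range[1] - tilt_deg) / (tilt_range[1] - tilt_range[0]) * (pano_h - 1)
    return u, v

# A camera pointing straight ahead lands at the panorama's center pixel.
center = ptz_to_panorama(0.0, 0.0, 361, 181)
```

Composing this camera-to-panorama mapping with a panorama-to-orthophoto registration is what yields the unified geo-referenced representation the abstract describes.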

  11. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area-array CCD is designed. The hardware and software designs of the color video capture system for the HD area-array CCD KAI-02150, produced by Truesense Imaging, are analyzed, and the structural parameters of the HD area-array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible designs, in both software and hardware, are put forward for two other image sensors of the same series, the KAI-04050 and KAI-08050, which have four million and eight million effective pixels, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, implementing the hardware design in software and improving development efficiency. Finally, the required timing is simulated accurately using the Quartus II 12.1 development platform together with VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting the current demands of miniaturization and high definition.
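Correlated double sampling itself reduces to a per-pixel subtraction of the reset-level sample from the signal-level sample; a small numerical illustration with synthetic data (not the paper's FPGA implementation):

```python
import numpy as np

def correlated_double_sample(reset_level, signal_level):
    """CDS output: the per-pixel difference between the signal-level and
    reset-level samples, cancelling the common random reset (kTC) offset."""
    return np.asarray(signal_level, dtype=float) - np.asarray(reset_level, dtype=float)

# Each pixel readout carries the same random kTC offset in both samples,
# so the difference recovers the true signal exactly.
rng = np.random.default_rng(0)
ktc_offsets = rng.normal(0.0, 5.0, size=4)     # random reset-level offsets
true_signal = np.array([10.0, 20.0, 30.0, 40.0])
cds_out = correlated_double_sample(ktc_offsets, ktc_offsets + true_signal)
```

Because the kTC offset is identical in both samples of a given pixel, the subtraction removes it completely; low-frequency (1/f) amplifier noise is likewise strongly suppressed when the two samples are taken close together in time.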

  12. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The two-camera configuration saw 5,470 meteors over two years of operation, and the network has detected 3,423 meteors in the first five months of operation with eight cameras (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis. Every morning the server automatically generates an e-mail and a web page containing an analysis of the previous night's events. The current status of the networks is described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, are discussed.

  13. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  14. Aluminum/ammonia heat pipe gas generation and long term system impact for the Space Telescope's Wide Field Planetary Camera

    NASA Technical Reports Server (NTRS)

    Jones, J. A.

    1983-01-01

    In the Space Telescope's Wide Field Planetary Camera (WFPC) project, eight heat pipes (HPs) are used to carry heat from the camera's inner electronic sensors to the spacecraft's outer, cold radiator surface. For proper device functioning and maximization of the signal-to-noise ratios, the Charge Coupled Devices (CCDs) must be maintained at -95 C or lower. Thermoelectric coolers (TECs) cool the CCDs, and heat pipes deliver each TEC's nominal six to eight watts of heat to the space radiator, which reaches an equilibrium temperature between -15 C and -70 C. An initial problem was the difficulty of producing gas-free aluminum/ammonia heat pipes. An investigation was therefore conducted to determine the cause of the gas generation and the impact of this gas on CCD cooling. In order to study the effect of gas slugs in the WFPC system, a separate HP was made. Attention is given to fabrication, testing, and heat pipe gas generation chemistry studies.

  15. Wide field NEO survey 1.0-m telescope with 10 2k×4k mosaic CCD camera

    NASA Astrophysics Data System (ADS)

    Isobe, Syuzo; Asami, Atsuo; Asher, David J.; Hashimoto, Toshiyasu; Nakano, Shi-ichi; Nishiyama, Kota; Ohshima, Yoshiaki; Terazono, Junya; Umehara, Hiroaki; Yoshikawa, Makoto

    2002-12-01

    We developed a new 1.0 m telescope with a 3 degree flat focal plane, on which a mosaic CCD camera with ten 2k×4k chips is mounted. The system was set up in February 2002 and is now undergoing final fine adjustments. Since the telescope has a focal length of 3 m, a field of 7.5 square degrees is covered in one image. In good seeing conditions of 1.5 arcseconds at the site, located in Bisei town, Okayama prefecture, Japan, we can expect to detect stars down to 20th magnitude with an exposure time of 60 seconds. Given the CCD camera's read-out time of 46 seconds, one image is taken every two minutes, and about 2,100 square degrees of sky can be covered in one clear night. This system is very effective for survey work, especially for Near-Earth-Asteroid detection.
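The quoted survey cadence follows directly from the exposure and read-out times; a small worked check using the values in the abstract:

```python
FIELD_DEG2 = 7.5      # sky area covered per image (square degrees)
EXPOSURE_S = 60.0     # exposure time per image
READOUT_S = 46.0      # CCD camera read-out time

cycle_s = EXPOSURE_S + READOUT_S              # 106 s: about two minutes per image
n_images = 2100.0 / FIELD_DEG2                # images needed for 2,100 sq. degrees
hours_per_night = n_images * cycle_s / 3600.0 # observing time required (~8.2 h)
```

So covering 2,100 square degrees takes 280 images and roughly 8.2 hours, consistent with a single clear night.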

  16. ACS Data Handbook v.6.0

    NASA Astrophysics Data System (ADS)

    Gonzaga, S.; et al.

    2011-03-01

    ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.

  17. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full-color mode under sunlit and moonlit conditions, and in monochrome from quarter-moonlight down to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the detection results are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
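The sub-electron effective noise follows from dividing the output amplifier's read noise by the on-chip electron-multiplying gain; a one-line sketch (it ignores the EM excess-noise factor on the signal itself, and the specific numbers are illustrative, not from the paper):

```python
def effective_read_noise(read_noise_e, em_gain):
    """Input-referred read noise of an EMCCD: the output amplifier's read
    noise (in electrons) divided by the on-chip multiplication gain."""
    return read_noise_e / em_gain

# E.g. a 50 e- amplifier read noise with the maximum 1000:1 gain is
# referred back to 0.05 e- at the input -- well below 1 electron.
noise_e = effective_read_noise(50.0, 1000.0)
```

This is why a large programmable gain lets the same sensor span full daylight (gain near 1) down to overcast starlight (gain near 1000).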

  18. Foam Experiment Hardware are Flown on Microgravity Rocket MAXUS 4

    NASA Astrophysics Data System (ADS)

    Lockowandt, C.; Löth, K.; Jansson, O.; Holm, P.; Lundin, M.; Schneider, H.; Larsson, B.

    2002-01-01

    The Foam module was developed by the Swedish Space Corporation and was used for performing foam experiments on the sounding rocket MAXUS 4, launched from Esrange on 29 April 2001. The development and launch of the module were financed by ESA. Four different foam experiments were performed: two on aqueous foams by Dr. Michele Adler from LPMDI, University of Marne la Vallée, Paris, and two on non-aqueous foams by Dr. Bengt Kronberg from YKI, Institute for Surface Chemistry, Stockholm. The foam was generated in four separate foam systems and monitored in microgravity with CCD cameras. The purpose of the experiment was to generate and study foam in microgravity: in the absence of gravity there is no drainage in the foam, so the reactions in the foam can be studied without drainage. Four solutions with various stabilities were investigated. The aqueous solutions contained water, SDS (sodium dodecyl sulphate), and dodecanol. The organic solutions contained ethylene glycol, a cationic surfactant (cetyl trimethyl ammonium bromide, CTAB), and decanol. Carbon dioxide was used to generate the aqueous foam and nitrogen was used to generate the organic foam. The experiment system comprised four completely independent systems, each with an injection unit, experiment chamber, and gas system. The main part of each system is the experiment chamber, where the foam is generated and monitored. The chamber's inner dimensions are 50x50x50 mm, and its front and back walls are made of glass. The front window is used for monitoring the foam and the back window for back illumination. The front glass has etched crosses on the inside as reference points. In the bottom of the cell is a glass frit, and at the top is a gas inlet/outlet. The foam was generated by injecting the experiment liquid into the glass frit at the bottom of the experiment chamber while gas was simultaneously blown through the frit, generating a small amount of foam. This procedure was performed at 10 bar. The pressure in the experiment chamber was then lowered to approximately 0.1 bar to expand the foam into a dry foam that filled the chamber. The foam was regenerated during flight by pressurizing the cell and repeating the foam generation procedure. The module had four individual experiment chambers for the four different solutions, each controlled individually with its own experiment parameters and procedures. The gas system comprises on/off valves and adjustable valves to control the pressure and the gas and liquid flows during foam generation. It can be divided into four sections, each serving one experiment chamber. The sections are partly connected in two pairs with common inlets and outlets. Each pair is supplied by a 1 l gas bottle filled to a pressure of 40 bar, with a pressure regulator lowering the pressure from 40 bar to 10 bar. The gas outlets from the experiment chambers are connected to two symmetrically placed outlets on the outer structure, fitted with diffusers so as not to disturb the g-levels. The foam in each experiment chamber was monitored with one tomography camera and one overview camera (8 CCD cameras in total). The tomography camera is placed on a translation table, which makes it possible to move it in the depth direction of the experiment chamber. The video signals from the 8 CCD cameras were stored onboard with two DV recorders. Two video signals were also transmitted to ground for real-time evaluation and operation of the experiment; which camera signal was transmitted could be selected by telecommand. With the help of the tomography system it was possible to take sequences of images of the foam at different depths; these sequences are used for constructing a 3-D model of the foam after flight. The overview camera has a fixed position and a field of view that covers the entire experiment chamber, and is used for monitoring the generation of the foam and its overall behaviour. The experiment was performed successfully, with foam generation in all four experiment chambers. Foam was also regenerated during flight by telecommand. The experiment data are under evaluation.

  19. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    cameras were installed around the test pan and an underwater GoPro ® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images...Distribution A: Approved for public release; distribution unlimited. 3.2.3. Video Images A GoPro video camera with a wide angle lens recorded the tests...camera and the GoPro ® video camera were not used for fire suppression experiments. 3.3.2. Test Pans Two ¼-in thick stainless steel test pans were

  20. The Development of the Spanish Fireball Network Using a New All-Sky CCD System

    NASA Astrophysics Data System (ADS)

    Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.; Llorca, J.; Fabregat, J.; Martínez, V. J.; Reglero, V.; Jelínek, M.; Kubánek, P.; Mateo, T.; Postigo, A. De Ugarte

    2004-12-01

We have developed an all-sky charge-coupled device (CCD) automatic system for detecting meteors and fireballs that will be operative in four stations in Spain during 2005. The cameras were developed following the BOOTES-1 prototype installed at the El Arenosillo Observatory in 2002, which is based on a CCD detector of 4096 × 4096 pixels with a fish-eye lens that provides an all-sky image with enough resolution to make accurate astrometric measurements. Since late 2004, a couple of cameras at two of the four stations operate for 30 s in alternate exposures, allowing 100% time coverage. The stellar limiting magnitude of the images is +10 at the zenith, and +8 below ~65° of zenithal angle. As a result, the images provide enough comparison stars to make astrometric measurements of faint meteors and fireballs with an accuracy of ~2 arcminutes. Using this prototype, four automatic all-sky CCD stations have been developed, two in Andalusia and two in the Valencian Community, to start full operation of the Spanish Fireball Network. In addition to all-sky coverage, we are developing a fireball spectroscopy program using medium-field lenses with additional CCD cameras. Here we present the first images obtained from the El Arenosillo and La Mayora stations in Andalusia during their first months of activity. The detection of the Jan 27, 2003 superbolide of −17 ± 1 absolute magnitude that overflew Algeria and Morocco is an example of the detection capability of our prototype.

  1. Analysis of crystalline lens coloration using a black and white charge-coupled device camera.

    PubMed

    Sakamoto, Y; Sasaki, K; Kojima, M

    1994-01-01

To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black-and-white charge-coupled device (CCD) camera, and propose a new methodology. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
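
The final step in this record, expressing a corrected (R, G, B) intensity as a point on the CIE chromaticity diagram, can be sketched as follows. The RGB-to-XYZ matrix below is the standard sRGB/D65 one, used here only as a stand-in for the paper's filter- and sensor-specific calibration.

```python
import numpy as np

# sRGB (D65) linear RGB -> CIE XYZ matrix; illustrative, not the
# calibration derived from the tricolor filters in the paper.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def chromaticity(rgb):
    """Map a corrected linear (R, G, B) triple to CIE 1931 (x, y)."""
    X, Y, Z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

x, y = chromaticity([0.8, 0.7, 0.4])   # a slightly yellowish sample
```

A yellowed lens shifts (x, y) away from the white point toward the yellow region of the diagram, which is how coloration can be compared across eyes.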

  2. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996, using the same sensor as the previous camera. The frame rate is 50 million fps for triple framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  3. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes.
The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
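
The subsample-and-interleave step described above can be sketched in a few lines. This is a hedged illustration only: the record formats, rates, and function names are invented, not the PALS software's actual data layout.

```python
def subsample(stream, keep_every):
    """Keep every Nth record of a high-rate stream (e.g. 2 kHz laser data)."""
    return stream[::keep_every]

def interleave(laser, gps):
    """Merge two lists of (timestamp, payload) records into time order."""
    return sorted(laser + gps, key=lambda rec: rec[0])

# Synthetic streams: 2 kHz laser ticks and 0.5 Hz dGPS fixes
laser = [(t / 2000.0, ("rng", t)) for t in range(10)]
gps = [(0.0, ("gps", "fix0")), (2.0, ("gps", "fix1"))]
merged = interleave(subsample(laser, 5), gps)
```

Sorting by timestamp keeps each dGPS fix adjacent to the laser returns it geolocates, which is the property the post-processing software needs.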

  4. Analytical Tools for Cloudscope Ice Measurement

    NASA Technical Reports Server (NTRS)

    Arnott, W. Patrick

    1998-01-01

The cloudscope is a ground or aircraft instrument for viewing ice crystals impacted on a sapphire window. It is essentially a simple optical microscope with an attached compact CCD video camera whose output is recorded on a Hi-8 mm video cassette recorder equipped with digital time and date recording capability. In aircraft operation the window is at a stagnation point of the flow, so adiabatic compression heats the window to sublimate the ice crystals so that later impacting crystals can be imaged as well. A film heater provides sublimation for ground-based operation, and it can also be used to provide extra heat for aircraft operation. The compact video camera can be focused manually by the operator, and a beam splitter-miniature bulb combination provides illumination for night operation. Several shutter speeds are available to accommodate daytime illumination conditions in direct sunlight. The video images can be used directly to qualitatively assess the crystal content of cirrus clouds and contrails. Quantitative size spectra are obtained with the tools described in this report. Selected portions of the video images are digitized using a PCI-bus frame grabber to form a short movie segment or stack using NIH (National Institutes of Health) Image software with custom macros developed at DRI. The stack can be Fourier-transform filtered with custom, easy-to-design filters to reduce the most objectionable video artifacts. Particle quantification of each slice of the stack is performed using digital image analysis. Data recorded for each particle include particle number and centroid, frame number in the stack, particle area, perimeter, equivalent-ellipse maximum and minimum radii, ellipse angle, and pixel number. Each valid particle in the stack is stamped with a unique number. This output can be used to obtain a semiquantitative appreciation of the crystal content.
The particle information becomes the raw input for a subsequent program (FORTRAN) that synthesizes each slice and separates the new from the sublimating particles. The new particle information is used to generate quantitative particle concentration, area, and mass size spectra along with total concentration, solar extinction coefficient, and ice water content. This program directly creates output in html format for viewing with a web browser.
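
The per-slice particle quantification described above (threshold, label connected regions, record area and centroid per particle) can be sketched as follows. This is a minimal stand-in for the NIH Image macros, with an invented threshold and a toy image; real slices would come from the digitized video.

```python
import numpy as np
from collections import deque

def label_particles(img, thresh):
    """Threshold an image, label 4-connected particles, return per-particle stats."""
    mask = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1                    # start a new particle
        labels[seed] = current
        queue = deque([seed])
        while queue:                    # flood-fill its connected pixels
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    stats = []
    for lab in range(1, current + 1):
        rows, cols = np.nonzero(labels == lab)
        stats.append({"label": lab, "area": rows.size,
                      "centroid": (rows.mean(), cols.mean())})
    return stats

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0          # particle 1: 4 pixels
img[5:6, 4:7] = 1.0          # particle 2: 3 pixels
stats = label_particles(img, 0.5)
```

Perimeter, ellipse fits, and the other quantities in the record would be computed from the same labeled regions.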

  5. Digital holographic interferometry applied to the investigation of ignition process.

    PubMed

    Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B

    2017-06-12

We use the digital holographic interferometry (DHI) technique to visualize the early ignition process of a butane-air mixture flame. Because such an event occurs in a short time (a few milliseconds), a fast CCD camera is used to study it. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient, underexposed image. Direct observation of the combustion process by the CCD is therefore limited (down to 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high-power laser in order to supply enough light to increase the capture speed, thus improving the visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a long sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion, which emits little visible light, and a second stage induced by variations in temperature as the flame emerges. While the second stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly reveals it. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
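
The phase maps central to this record come from comparing the optical phase of a reference state and a perturbed state. A minimal sketch of that step, assuming the two complex object fields have already been reconstructed from the recorded holograms (the fields below are synthetic):

```python
import numpy as np

def phase_map(field_ref, field_obj):
    """Wrapped phase difference between two reconstructed complex fields."""
    return np.angle(field_obj * np.conj(field_ref))   # wrapped to (-pi, pi]

y, x = np.mgrid[0:64, 0:64]
true_phase = 0.05 * x                   # a gentle tilt, e.g. a density gradient
field_ref = np.exp(1j * np.zeros((64, 64)))
field_obj = np.exp(1j * true_phase)
dphi = phase_map(field_ref, field_obj)
```

Sequences of such maps, one per camera frame, are what reveal the weakly luminous first explosion that direct imaging misses.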

  6. LSST camera readout chip ASPIC: test tools

    NASA Astrophysics Data System (ADS)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The correlated double sampling technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called dual-slope integrator method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization stage. Other requirements are that each chip should process the output of half a CCD, that is, 8 channels, and should operate at 173 K. A specific back-end board has been designed especially for lab test purposes. It manages the clock signals, digitizes the analog differential outputs of the ASPIC, and stores the data in memory. It contains 8 ADCs (18 bits), a 512 kword memory and a USB interface. An FPGA manages all signals from/to all components on the board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers define the various test parameters of the ASPIC. A LabVIEW GUI is used to load or update these registers and to check proper operation. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperatures. At present, the ASPIC, back-end board and CCD detectors are being integrated to perform a characterization of the whole readout chain.
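
The correlated double sampling idea behind the dual-slope integrator can be illustrated numerically: the CCD video waveform is averaged (integrated) over the reset level and over the signal level, and the difference cancels the offset common to both. The waveform and window sizes below are synthetic, not ASPIC timing.

```python
import numpy as np

def dual_slope_cds(waveform, reset_slice, signal_slice):
    """Signal estimate as (mean over signal window) - (mean over reset window)."""
    return waveform[signal_slice].mean() - waveform[reset_slice].mean()

offset = 3.7                               # correlated baseline (cancels out)
pixel_signal = 0.25                        # the quantity we want
waveform = np.concatenate([
    np.full(50, offset),                   # reset (baseline) level
    np.full(50, offset + pixel_signal),    # signal level
])
est = dual_slope_cds(waveform, slice(0, 50), slice(50, 100))
```

Because the baseline appears in both windows, slow drifts and reset noise drop out of the difference, which is why the technique is standard for CCD readout.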

  7. Upwelling Radiance at 976 nm Measured from Space Using a CCD Camera

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Kovalik, Joseph M.; Oaida, Bogdan V.; Abrahamson, Matthew J.; Wright, Malcolm W.

    2015-01-01

The Optical Payload for Lasercomm Science (OPALS) Flight System on board the International Space Station uses a charge-coupled device (CCD) camera for receiving a beacon laser from Earth. Relative measurements of the background contributed by upwelling radiance under diverse illumination conditions and varying terrain are presented. In some cases, clouds in the field of view allowed a comparison of terrestrial and cloud-top upwelling radiance. In this paper we report these measurements and examine the extent of agreement with atmospheric model predictions.

  8. STK: A new CCD camera at the University Observatory Jena

    NASA Astrophysics Data System (ADS)

    Mugrauer, M.; Berthold, T.

    2010-04-01

The Schmidt-Teleskop-Kamera (STK) is a new CCD imager that has been in operation since the beginning of 2009 at the University Observatory Jena. This article describes the main characteristics of the new camera. The properties of the STK detector, the astrometry and image quality of the STK, as well as its detection limits at the 0.9 m telescope of the University Observatory Jena, are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.

  9. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image-capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  10. Computerized lateral-shear interferometer

    NASA Astrophysics Data System (ADS)

    Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.

    1998-07-01

    A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes phase maps by Zernike polynomials yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as light source, a CCD camera and a rechargeable battery supply, which allows measurements in-situ, if necessary.
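
The final step this record describes, fitting the reconstructed phase map with Zernike polynomials to get an analytical description of the wavefront aberration, is a linear least-squares problem. A hedged sketch, using only the four lowest-order terms (piston, tilts, defocus) and a synthetic wavefront; the paper's program would use many more terms:

```python
import numpy as np

def zernike_basis(rho, theta):
    """First four Zernike terms evaluated on a polar grid."""
    return np.stack([
        np.ones_like(rho),          # Z0: piston
        rho * np.cos(theta),        # Z1: x tilt
        rho * np.sin(theta),        # Z2: y tilt
        2.0 * rho**2 - 1.0,         # Z3: defocus
    ], axis=-1)

def fit_zernike(phase, rho, theta):
    """Least-squares Zernike coefficients of a measured phase map."""
    A = zernike_basis(rho, theta).reshape(-1, 4)
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return coeffs

# Synthetic wavefront: 0.1 waves of x tilt plus 0.3 waves of defocus
y, x = np.mgrid[-1:1:64j, -1:1:64j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
phase = 0.1 * rho * np.cos(theta) + 0.3 * (2 * rho**2 - 1)
coeffs = fit_zernike(phase, rho, theta)
```

The recovered coefficients are exactly the aberration amplitudes, which is what makes the Zernike expansion a compact analytical summary of the wavefront.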

  11. Periodicity analysis on cat-eye reflected beam profiles of optical detectors

    NASA Astrophysics Data System (ADS)

    Gong, Mali; He, Sifeng

    2017-05-01

The cat-eye-effect reflected beam profiles of most optical detectors have a certain periodicity, which is caused by the array arrangement of sensors at their optical focal planes. We find and prove, for the first time, that the reflected beam profile becomes several periodic spots at the reflected propagation distance corresponding to half the imaging distance of a CCD camera. Furthermore, the spatial cycle of these spots is approximately constant, independent of the CCD camera's imaging distance, and is related only to the focal length and pixel size of the CCD sensor. Thus, we can obtain the imaging distance and intrinsic parameters of the optical detector by analyzing its cat-eye reflected beam profiles. This conclusion can be applied in the field of non-cooperative cat-eye target recognition.
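
Extracting the spatial cycle from a measured beam profile, the quantity this record ties to the CCD's focal length and pixel size, amounts to finding the dominant period of a 1-D intensity trace. A minimal sketch on a synthetic profile (real data would be a cut through the recorded spot pattern):

```python
import numpy as np

def spatial_period(profile):
    """Dominant spatial period of a 1-D intensity profile, in samples."""
    p = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(p))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return p.size / k

x = np.arange(400)
profile = 1.0 + np.cos(2 * np.pi * x / 25.0)   # synthetic 25-sample cycle
period = spatial_period(profile)
```

Converting the period from samples to physical units (via the measurement plane's pixel pitch) is what would let the detector's parameters be inferred.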

  12. Using multi-disciplinary optimization and numerical simulation on the transiting exoplanet survey satellite

    NASA Astrophysics Data System (ADS)

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2017-08-01

The Transiting Exoplanet Survey Satellite (TESS) is an instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars and to understanding the diversity of planets and planetary systems in our galaxy. Each camera utilizes a seven-element lens assembly with low-power and low-noise CCD electronics. Advanced multivariable optimization and numerical simulation capabilities accommodating arbitrarily complex objective functions have been added to the internally developed Lincoln Laboratory Integrated Modeling and Analysis Software (LLIMAS) and used to assess system performance. Various optical phenomena are accounted for in these analyses, including full dn/dT spatial distributions in the lenses and charge diffusion in the CCD electronics. These capabilities are utilized to design CCD shims for thermal vacuum chamber testing and flight, and to verify comparable performance in both environments across a range of wavelengths, field points and temperature distributions. Additionally, optimizations and simulations are used for model correlation and robustness optimization.

  13. Beyond detection: nuclear physics with a webcam in an educational setting

    NASA Astrophysics Data System (ADS)

    Pallone, Arthur

    2015-03-01

Nuclear physics affects our daily lives in fields as diverse as medicine and art. I believe three obstacles - limited time, lack of subject familiarity and thus comfort on the part of educators, and equipment expense - must be overcome to produce a nuclear-educated populace. Educators regularly use webcams to actively engage students in scientific discovery, as evidenced by a literature search for the term webcam paired with topics such as astronomy, biology, and physics. Inspired by YouTube videos that demonstrate alpha particle detection by modified webcams, I searched for examples that go beyond simple detection, with only one education-oriented result - the determination of the in-air range of alphas using a modified CCD camera. Custom-built, radiation-hardened CMOS detectors exist in high energy physics and for soft x-ray detection. Commercial CMOS cameras are used for direct imaging in electron microscopy. I demonstrate charged-particle spectrometry with a slightly modified CMOS-based webcam. When used with inexpensive sources of radiation and free software, the webcam charged-particle spectrometer presents educators with a simple, low-cost technique to include nuclear physics in science education.

  14. SWUIS-A: A Versatile, Low-Cost UV/VIS/IR Imaging System for Airborne Astronomy and Aeronomy Research

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Stern, S. Alan; Tomlinson, William; Slater, David C.; Vilas, Faith

    2001-01-01

    We have developed and successfully flight-tested on 14 different airborne missions the hardware and techniques for routinely conducting valuable astronomical and aeronomical observations from high-performance, two-seater military-type aircraft. The SWUIS-A (Southwest Universal Imaging System - Airborne) system consists of an image-intensified CCD camera with broad band response from the near-UV to the near IR, high-quality foreoptics, a miniaturized video recorder, an aircraft-to-camera power and telemetry interface with associated camera controls, and associated cables, filters, and other minor equipment. SWUIS-A's suite of high-quality foreoptics gives it selectable, variable focal length/variable field-of-view capabilities. The SWUIS-A camera frames at 60 Hz video rates, which is a key requirement for both jitter compensation and high time resolution (useful for occultation, lightning, and auroral studies). Broadband SWUIS-A image coadds can exceed a limiting magnitude of V = 10.5 in <1 sec with dark sky conditions. A valuable attribute of SWUIS-A airborne observations is the fact that the astronomer flies with the instrument, thereby providing Space Shuttle-like "payload specialist" capability to "close-the-loop" in real-time on the research done on each research mission. Key advantages of the small, high-performance aircraft on which we can fly SWUIS-A include significant cost savings over larger, more conventional airborne platforms, worldwide basing obviating the need for expensive, campaign-style movement of specialized large aircraft and their logistics support teams, and ultimately faster reaction times to transient events. Compared to ground-based instruments, airborne research platforms offer superior atmospheric transmission, the mobility to reach remote and often-times otherwise unreachable locations over the Earth, and virtually-guaranteed good weather for observing the sky. 
Compared to space-based instruments, airborne platforms typically offer substantial cost advantages and the freedom to fly along nearly any groundtrack route for transient event tracking such as occultations and eclipses.

  15. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.

  16. Development, characterization, and modeling of a tunable filter camera

    NASA Astrophysics Data System (ADS)

    Sartor, Mark Alan

    1999-10-01

This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation of the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance. It includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, which included quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response and the noise of the camera system can now be predicted. The innovation that sets this development apart is the fact that this instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection/identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission.
In order to provide background for the design requirements for the TFC development, the mission and principles of operation behind the multi-channel system will be reviewed. Given the combination of the flexibility, simplicity, and sensitivity, the TFC and its multiple-channel extension can play a significant role in the next generation of remote-sensing instruments.

  17. A compact multichannel spectrometer for Thomson scattering

    NASA Astrophysics Data System (ADS)

    Schoenbeck, N. L.; Schlossberg, D. J.; Dowd, A. S.; Fonck, R. J.; Winz, G. R.

    2012-10-01

The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved with a 2971 l/mm VPH grating and measurements of Te > 100 eV with a 2072 l/mm VPH grating. The spectrometer uses a fast-gated (˜2 ns) ICCD camera for detection. A Gen III image intensifier provides ˜45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
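
The on-chip binning described here, summing pixel blocks so that read noise is paid once per bin rather than once per pixel, has a simple software analogue. The 512 × 100 frame size below is illustrative, not the instrument's actual CCD format:

```python
import numpy as np

def bin_frame(frame, out_rows, out_cols):
    """Sum a 2-D frame into out_rows x out_cols bins (sizes must divide evenly)."""
    r, c = frame.shape
    return frame.reshape(out_rows, r // out_rows,
                         out_cols, c // out_cols).sum(axis=(1, 3))

frame = np.ones((512, 100))
binned = bin_frame(frame, 8, 10)   # 8 spatial channels x 10 spectral bins
```

On the real detector the summation happens in the serial register before readout, so the amplifier's read noise is added once per bin; this sketch only mirrors the geometry.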

  18. A compact multichannel spectrometer for Thomson scattering.

    PubMed

    Schoenbeck, N L; Schlossberg, D J; Dowd, A S; Fonck, R J; Winz, G R

    2012-10-01

The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of T(e) < 100 eV are achieved with a 2971 l∕mm VPH grating and measurements of T(e) > 100 eV with a 2072 l∕mm VPH grating. The spectrometer uses a fast-gated (~2 ns) ICCD camera for detection. A Gen III image intensifier provides ~45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.

  19. Biotube

    NASA Technical Reports Server (NTRS)

    Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel

    2016-01-01

Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, each capable of containing up to 10 seeds, and inserted between two magnets located within one of three Magnetic Field Chambers (MFCs). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include monitoring of temperature, fixative/preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) the Water Delivery System, which automatically activates and controls the delivery of water (to initiate seed germination); (2) the Fixative Storage and Delivery System, which stores and delivers chemical fixative or RNAlater to each seed cassette; (3) the Digital Imaging System, consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while the photos are being captured; and (4) the Command and Data Management System, which provides overall control of the integrated subsystems, graphical user interface, system status and error message display, image display, and other functions.

  20. CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)

  1. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. 
A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 μm and 3.0 μm at 530 eV and 680 eV, well below the resolution limit of 5 μm required to improve the spectral resolution by a factor of 2.
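As a rough illustration of the centroiding idea described above, the sub-pixel interaction point of a split event can be estimated from the first moment of the charge distribution across the pixels. This is a minimal sketch only (the study's actual centroiding techniques are more sophisticated); the 16 µm pitch is taken from the abstract:

```python
import numpy as np

def centroid_event(patch, pixel_pitch_um=16.0):
    """Estimate the sub-pixel X-ray interaction point from a small
    pixel patch containing a split event (charge spread over pixels).

    patch: 2-D array of pixel signals (e.g. 3x3) around the event peak.
    Returns (x, y) in micrometres relative to the patch corner.
    """
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    # Intensity-weighted mean position (first-moment centroid);
    # +0.5 places integer coordinates at pixel centres.
    x = (xs * patch).sum() / total + 0.5
    y = (ys * patch).sum() / total + 0.5
    return x * pixel_pitch_um, y * pixel_pitch_um
```

For a symmetric split event the centroid falls at the centre of the central pixel, as expected.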

  2. Head-mounted display for use in functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to focus on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these head-mounted display (HMD) devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods; the contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation while permitting simultaneous viewing of both the patient and the intranasal surgical field.

  3. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  4. A Flight Photon Counting Camera for the WFIRST Coronagraph

    NASA Astrophysics Data System (ADS)

    Morrissey, Patrick

    2018-01-01

    A photon counting camera based on the Teledyne-e2v CCD201-20 electron multiplying CCD (EMCCD) is being developed for the NASA WFIRST coronagraph, an exoplanet imaging technology development of the Jet Propulsion Laboratory (Pasadena, CA) that is scheduled to launch in 2026. The coronagraph is designed to directly image planets around nearby stars, and to characterize their spectra. The planets are exceedingly faint, providing signals similar to the detector dark current, and require the use of photon counting detectors. Red sensitivity (600-980nm) is preferred to capture spectral features of interest. Since radiation in space affects the ability of the EMCCD to transfer the required single electron signals, care has been taken to develop appropriate shielding that will protect the cameras during a five year mission. In this poster, consideration of the effects of space radiation on photon counting observations will be described with the mitigating features of the camera design. An overview of the current camera flight system electronics requirements and design will also be described.
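The photon-counting mode described above is commonly implemented by thresholding each EM-amplified frame: with high EM gain, a single photo-electron is amplified far above the effective read noise, so real events separate cleanly from noise. A minimal sketch, illustrative only and not the WFIRST flight pipeline (the threshold level and units are assumptions):

```python
import numpy as np

def photon_count(frame, bias, read_noise_e, threshold_sigma=5.0):
    """Threshold a bias-subtracted EMCCD frame into a binary photon map.

    frame: raw pixel values, already normalised to electrons by the EM gain.
    bias: constant offset level of the camera.
    read_noise_e: effective (gain-reduced) read noise in electrons.
    Pixels exceeding threshold_sigma * read_noise_e count as one photon.
    """
    signal = np.asarray(frame, dtype=float) - bias
    return (signal > threshold_sigma * read_noise_e).astype(np.uint8)
```

Summing such binary maps over many short frames builds up the photon-counted image while discarding the excess-noise penalty of analogue EM readout.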

  5. Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.

    1981-01-01

    The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

  6. CCD Astrometry with Robotic Telescopes

    NASA Astrophysics Data System (ADS)

    AlZaben, Faisal; Li, Dewei; Li, Yongyao; Dennis, Aren; Fene, Michael; Boyce, Grady; Boyce, Pat

    2016-01-01

    CCD images were acquired of three binary star systems: WDS06145+1148, WDS06206+1803, and WDS06224+2640. The astrometric solution, position angle, and separation of each system were calculated with MaximDL v6 and Mira Pro x64 software suites. The results were consistent with historical measurements in the Washington Double Star Catalog. Our analysis found some differences in measurements between single-shot color CCD cameras and traditional monochrome CCDs using a filter wheel.
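The position angle and separation reported for each pair follow from the coordinates of the two components via a standard small-angle tangent-plane relation. This is a sketch of the textbook formula only; the study itself used MaximDL and Mira Pro:

```python
import math

def pa_sep(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Position angle (degrees east of north) and separation (arcsec) of a
    companion (star 2) relative to the primary (star 1), using the
    small-angle tangent-plane approximation valid for close pairs."""
    dec1 = math.radians(dec1_deg)
    # RA offset is foreshortened by cos(dec); convert degrees to arcsec.
    dra = (ra2_deg - ra1_deg) * math.cos(dec1) * 3600.0
    ddec = (dec2_deg - dec1_deg) * 3600.0
    sep = math.hypot(dra, ddec)
    pa = math.degrees(math.atan2(dra, ddec)) % 360.0
    return pa, sep
```

For example, a companion offset purely in RA from a primary on the celestial equator lies at position angle 90°.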

  7. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
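A minimal version of the spectral-estimation idea: train a linear least-squares matrix that maps the three sensor responses to measured training spectra, then apply it to each pixel's RGB values. This is a sketch only; the paper's multispectral recovery techniques (e.g. Wiener-type estimators) are more elaborate, and the matrix shapes here are assumptions:

```python
import numpy as np

def train_spectral_estimator(responses, spectra):
    """Least-squares linear estimator mapping camera responses to spectra:
    find W minimising ||W R - S||, trained on known response/spectrum pairs.

    responses: (3, N) matrix of RGB responses for N training skylights.
    spectra:   (M, N) matrix of corresponding measured spectra
               (M wavelength samples).
    Returns W with shape (M, 3) so that spectrum ≈ W @ rgb.
    """
    R = np.asarray(responses, float)
    S = np.asarray(spectra, float)
    # Solve R.T @ X = S.T in the least-squares sense; W = X.T.
    X, *_ = np.linalg.lstsq(R.T, S.T, rcond=None)
    return X.T

def estimate_spectrum(W, rgb):
    """Recover an estimated spectrum from one pixel's RGB response."""
    return W @ np.asarray(rgb, float)
```

Applied per pixel of the fish-eye image, this yields an estimated spectrum for every point of the skydome.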

  8. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control.

    PubMed

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-07-08

This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps. In addition, rear lamps are made of LEDs and halogen lamps. This paper refers to recent research in IHC. Some problems exist in the detection of headlights, such as erroneous detection of street lights, sign lights, and reflections from the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights use the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method on three types of lights (LED, HID, and halogen).
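The Spectral Angle Mapper mentioned above measures the angle between two spectra treated as vectors, which makes the comparison insensitive to overall brightness scaling. A minimal sketch of the standard formula:

```python
import math

def spectral_angle(x, y):
    """Spectral Angle Mapper: angle (radians) between two spectra,
    cos(theta) = <x, y> / (|x| |y|). Identical spectral shapes give 0
    regardless of intensity; orthogonal spectra give pi/2."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp to guard against tiny floating-point overshoot of +/-1.
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))
```

Classifying a pixel then amounts to picking the reference spectrum (LED, HID, or halogen) with the smallest angle.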

  9. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
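Once a homography between two camera views has been estimated from the SURF matches, it maps pixel coordinates from one view into the other to locate the overlap region. A minimal sketch of that mapping step, assuming a known 3×3 matrix H (estimating H itself, e.g. with RANSAC over the matches, is the harder part):

```python
import numpy as np

def apply_homography(H, points):
    """Map (x, y) pixel coordinates from one camera into another's frame.

    H: 3x3 homography matrix.
    points: (N, 2) array of pixel coordinates.
    Works in homogeneous coordinates, then dehomogenises by the w term.
    """
    pts = np.asarray(points, float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ np.asarray(H, float).T
    return homog[:, :2] / homog[:, 2:3]
```

Pixels that land inside the other image's bounds after this mapping form the overlap to be blended.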

  11. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length, and camera height and is important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras and is important for the spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  12. The Multi-site All-Sky CAmeRA (MASCARA). Finding transiting exoplanets around bright (mV < 8) stars

    NASA Astrophysics Data System (ADS)

    Talens, G. J. J.; Spronck, J. F. P.; Lesage, A.-L.; Otten, G. P. P. L.; Stuik, R.; Pollacco, D.; Snellen, I. A. G.

    2017-05-01

    This paper describes the design, operations, and performance of the Multi-site All-Sky CAmeRA (MASCARA). Its primary goal is to find new exoplanets transiting bright stars, 4 < mV < 8, by monitoring the full sky. MASCARA consists of one northern station on La Palma, Canary Islands (fully operational since February 2015), one southern station at La Silla Observatory, Chile (operational from early 2017), and a data centre at Leiden Observatory in the Netherlands. Both MASCARA stations are equipped with five interline CCD cameras using wide field lenses (24 mm focal length) with fixed pointings, which together provide coverage down to airmass 3 of the local sky. The interline CCD cameras allow for back-to-back exposures, taken at fixed sidereal times with exposure times of 6.4 sidereal seconds. The exposures are short enough that the motion of stars across the CCD does not exceed one pixel during an integration. Astrometry and photometry are performed on-site, after which the resulting light curves are transferred to Leiden for further analysis. The final MASCARA archive will contain light curves for 70 000 stars down to mV = 8.4, with a precision of 1.5% per 5 minutes at mV = 8.
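The claim that stars move less than one pixel per exposure can be checked from the sidereal rate, the focal length, and the pixel pitch. A sketch using the 24 mm focal length and 6.4 s exposure from the abstract; the 12 µm pixel pitch is a placeholder assumption, not a MASCARA specification, and solar versus sidereal seconds are ignored:

```python
import math

# Apparent sidereal rate: 360 degrees per sidereal day (~86164.1 s).
SIDEREAL_RATE_ARCSEC_S = 360.0 * 3600.0 / 86164.0905  # ~15.04 "/s

def drift_pixels(focal_mm, pixel_um, exposure_s, dec_deg=0.0):
    """Worst-case star drift (in pixels) across a fixed-pointing camera
    during one exposure, for a star at declination dec_deg."""
    plate_scale = 206265.0 * (pixel_um * 1e-3) / focal_mm  # arcsec / pixel
    drift = SIDEREAL_RATE_ARCSEC_S * exposure_s * math.cos(math.radians(dec_deg))
    return drift / plate_scale
```

With these assumed numbers the equatorial drift comes out just under one pixel per 6.4 s exposure, consistent with the abstract's statement.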

  13. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of a X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  14. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  15. Soft X-ray and XUV imaging with a charge-coupled device /CCD/-based detector

    NASA Technical Reports Server (NTRS)

    Loter, N. G.; Burstein, P.; Krieger, A.; Ross, D.; Harrison, D.; Michels, D. J.

    1981-01-01

    A soft X-ray/XUV imaging camera which uses a thinned, back-illuminated, all-buried channel RCA CCD for radiation sensing has been built and tested. The camera is a slow-scan device which makes possible frame integration if necessary. The detection characteristics of the device have been tested over the 15-1500 eV range. The response was linear with exposure up to 0.2-0.4 erg/sq cm; saturation occurred at greater exposures. Attention is given to attempts to resolve single photons with energies of 1.5 keV.

  16. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction on the object in the image is performed, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.

  17. An Investigation into the Spectral Imaging of Hall Thruster Plumes

    DTIC Science & Technology

    2015-07-01

imaging experiment. It employs a Kodak KAF-3200E 3 megapixel CCD (2184×1472 with 6.8 µm pixels). The camera was designed for astronomical imaging and thus long exposure… [Figure: Kodak KAF-3200E CCD, 2184 × 1472 px, 14.9 × 10.0 mm, 6.8 × 6.8 µm pixel size; SBIG ST…]

  18. Optical readout of a two phase liquid argon TPC using CCD camera and THGEMs

    NASA Astrophysics Data System (ADS)

    Mavrokoridis, K.; Ball, F.; Carroll, J.; Lazos, M.; McCormick, K. J.; Smith, N. A.; Touramanis, C.; Walker, J.

    2014-02-01

This paper presents a preliminary study into the use of CCDs to image secondary scintillation light generated by THick Gas Electron Multipliers (THGEMs) in a two-phase LAr TPC. A Sony ICX285AL CCD chip was mounted above a double THGEM in the gas phase of a 40 litre two-phase LAr TPC, with the majority of the camera electronics positioned externally via a feedthrough. An Am-241 source was mounted on a rotatable motion feedthrough, allowing the alpha source to be positioned either inside or outside of the field cage. A novel high-voltage feedthrough featuring LAr insulation was developed for and incorporated into the TPC design. Furthermore, a range of webcams was tested for operation in cryogenics as an internal detector monitoring tool; of these, the Microsoft HD-3000 (model no. 1456) webcam was found to be superior in terms of noise and lowest operating temperature. In 1 ppm purity argon gas at ambient temperature and atmospheric pressure, the THGEM gain was ≈ 1000, and using a 1 ms exposure the CCD captured single alpha tracks. Successful operation of the CCD camera in two-phase cryogenic mode was also achieved. Using a 10 s exposure, a photograph of secondary scintillation light induced by the Am-241 source in LAr has been captured for the first time.

  19. Noninvasive imaging of protein-protein interactions from live cells and living subjects using bioluminescence resonance energy transfer.

    PubMed

    De, Abhijit; Gambhir, Sanjiv Sam

    2005-12-01

    This study demonstrates a significant advancement of imaging of a distance-dependent physical process, known as the bioluminescent resonance energy transfer (BRET2) signal in living subjects, by using a cooled charge-coupled device (CCD) camera. A CCD camera-based spectral imaging strategy enables simultaneous visualization and quantitation of BRET signal from live cells and cells implanted in living mice. We used the BRET2 system, which utilizes Renilla luciferase (hRluc) protein and its substrate DeepBlueC (DBC) as an energy donor and a mutant green fluorescent protein (GFP2) as the acceptor. To accomplish this objective in this proof-of-principle study, the donor and acceptor proteins were fused to FKBP12 and FRB, respectively, which are known to interact only in the presence of the small molecule mediator rapamycin. Mammalian cells expressing these fusion constructs were imaged using a cooled-CCD camera either directly from culture dishes or by implanting them into mice. By comparing the emission photon yields in the presence and absence of rapamycin, the specific BRET signal was determined. The CCD imaging approach of BRET signal is particularly appealing due to its capacity to seamlessly bridge the gap between in vitro and in vivo studies. This work validates BRET as a powerful tool for interrogating and observing protein-protein interactions directly at limited depths in living mice.
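The quantitation step above, comparing emission photon yields in the presence and absence of rapamycin, reduces to a background-subtracted acceptor/donor ratio. A minimal sketch (the channel wavelengths in the comment are generic BRET2 values, not taken from the paper):

```python
def bret_ratio(acceptor_counts, donor_counts, background=0.0):
    """BRET ratio: acceptor (GFP2, ~515 nm) emission divided by donor
    (hRluc/DeepBlueC, ~400 nm) emission, after background subtraction
    of both channels. An interaction (e.g. rapamycin-induced FKBP12-FRB
    binding) raises this ratio relative to the no-interaction control."""
    return (acceptor_counts - background) / (donor_counts - background)
```

The specific BRET signal is then the difference (or fold change) between the ratio with rapamycin and the ratio without it.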

  20. Leveraging traffic and surveillance video cameras for urban traffic.

    DOT National Transportation Integrated Search

    2014-12-01

    The objective of this project was to investigate the use of existing video resources, such as traffic : cameras, police cameras, red light cameras, and security cameras for the long-term, real-time : collection of traffic statistics. An additional ob...

  1. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  2. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  3. A conceptual design study for a two-dimensional, electronically scanned thinned array radiometer

    NASA Technical Reports Server (NTRS)

    Mutton, Philip; Chromik, Christopher C.; Dixon, Iain; Statham, Richard B.; Stillwagen, Frederic H.; Vontheumer, Alfred E.; Sasamoto, Washito A.; Garn, Paul A.; Cosgrove, Patrick A.; Ganoe, George G.

    1993-01-01

A conceptual design for the Two-Dimensional, Electronically Steered Thinned Array Radiometer (ESTAR) is described. This instrument is a synthetic aperture microwave radiometer that operates in the L-band frequency range for the measurement of soil moisture and ocean salinity. Two auxiliary instruments, an 8-12 micron scanning infrared radiometer and a 0.4-1.0 micron charge-coupled device (CCD) video camera, are included to provide data for sea surface temperature measurements and spatial registration of targets, respectively. The science requirements were defined by Goddard Space Flight Center. The instrument and spacecraft configurations are described for missions using the Pegasus and Taurus launch vehicles. The analyses and design trades described include: estimations of size, mass, and power; instrument viewing coverage; mechanical design trades; structural and thermal analyses; data and communications performance assessments; and cost estimation.

  4. Space telescope optical telescope assembly/scientific instruments. Phase B: -Preliminary design and program definition study; Volume 2A: Planetary camera report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.

  5. Instant Video Revisiting: The Video Camera as a "Tool of the Mind" for Young Children.

    ERIC Educational Resources Information Center

    Forman, George

    1999-01-01

    Once used only to record special events in the classroom, video cameras are now small enough and affordable enough to be used to document everyday events. Video cameras, with foldout screens, allow children to watch their activities immediately after they happen and to discuss them with a teacher. This article coins the term instant video…

  6. TLE Balloon experiment campaign carried out on 25 August 2006 in Japan

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Chikada, S.; Yoshida, A.; Adachi, T.; Sakanoi, T.

    2006-12-01

The balloon observation campaign for TLE and lightning study was carried out on 25 August 2006 in Japan by Tohoku University, supported by JAXA. The balloon was successfully launched at 18:33 LT at the Sanriku Balloon Center of JAXA, located on the east coast of the northern part of Japan (Iwate prefecture). Three types of scientific payloads were installed on the 1 m cubic gondola: a 3-axis VLF electric field antenna and receiver (VLFR), 4 video frame CCD cameras (CCDI), and a 2-color photometer (PM). The video images were stored in 4 HD video recorders, each with 20 GB of memory, at 30 frames/s, and the VLFR and PM data were put into a digital data recorder with 30 GB of memory at a sampling rate of 100 kHz. The balloon floated at an altitude of 13 km until about 20:30 LT, going eastward, and then went up to 26 km at a distance of 130 km from the coast. It then went back westward at an altitude of 26 km until midnight. The total observation period was about 5 hours. Most of the equipment worked properly except for one video recorder. Some thunderstorms existed within the direct FOV from the balloon in the range of 400-600 km, and more than about 400 lightning flashes were recorded as video images. We confirmed that at least one sprite halo, which occurred in an oceanic thunderstorm at a distance of about 500 km from the balloon, was captured by the CCDI. This is the first TLE image obtained by a balloon-borne camera. Simultaneous measurements of VLF sferics and lightning/TLE images will clarify the role of intracloud (IC) currents in producing and/or modulating TLEs, as well as that of cloud-to-ground discharges (CG). In particular, the effect of horizontal current components, which cannot be detected on the ground, will be investigated in detail to explain unsolved properties of TLEs, such as the long time delay of a TLE from the timing of the stroke and the large horizontal displacement between the CG and the TLE.

  7. Medición de coeficientes de extinción en CASLEO y características del CCD ROPER-2048B del telescopio JS

    NASA Astrophysics Data System (ADS)

    Fernández-Lajús, E.; Gamen, R.; Sánchez, M.; Scalia, M. C.; Baume, G. L.

    2016-08-01

From observations made with the "Jorge Sahade" telescope of the Complejo Astronómico El Leoncito, the UBVRI-band extinction coefficients were measured, and some parameters and characteristics of the ROPER 2048B direct-image CCD camera were determined.

  8. VizieR Online Data Catalog: Photometry of multiple stars at NAOR&ASV in 2015 (Cvetkovic+, 2017)

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Z.; Pavlovic, R.; Boeva, S.

    2018-05-01

    This is the ninth series of CCD observations of double and multiple stars, obtained at the Bulgarian National Astronomical Observatory at Rozhen (NAOR) over five nights. As previously, the CCD camera VersArray 1300B was used, which was attached to the 2 m telescope. For each double or multiple star, five CCD frames in the Johnson B filter and five frames in the Johnson V filter were taken, which enabled us to determine the magnitude difference for these filters. In 2015 at the Astronomical Station at Vidojevica (ASV), over a total of 23 nights, observations were carried out by using the 60 cm telescope with a Cassegrain optical system. This is the fourth observational series at ASV since the work started there in 2011. In the observations we used the Apogee Alta U42 CCD camera whose characteristics can be found in the paper by Cvetkovic et al. (2016, J/AJ/151/58). Every pair was observed five times in the Cousins/Bessel B filter and five times in the Cousins/Bessel V one. (3 data files).
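The magnitude differences determined from these B- and V-filter frames follow from the measured flux ratio of the two components via the Pogson relation. A minimal sketch:

```python
import math

def delta_magnitude(flux_primary, flux_secondary):
    """Magnitude difference of the secondary relative to the primary,
    from their measured CCD fluxes (e.g. aperture sums in the same
    filter): dm = -2.5 * log10(F2 / F1). A fainter secondary gives a
    positive dm."""
    return -2.5 * math.log10(flux_secondary / flux_primary)
```

A secondary delivering 1% of the primary's flux is 5 magnitudes fainter.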

  9. The Soft X-ray Imager (SXI) for the ASTRO-H Mission

    NASA Astrophysics Data System (ADS)

    Tanaka, Takaaki; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.; Dotani, Tadayasu; Nakajima, Hiroshi; Anabuki, Naohisa; Nagino, Ryo; Uchida, Hiroyuki; Nobukawa, Masayoshi; Ozaki, Masanobu; Natsukari, Chikara; Tomida, Hiroshi; Ueda, Shutaro; Kimura, Masashi; Hiraga, Junko S.; Kohmura, Takayoshi; Murakami, Hiroshi; Mori, Koji; Yamauchi, Makoto; Hatsukade, Isamu; Nishioka, Yusuke; Bamba, Aya; Doty, John P.

    2015-09-01

    The Soft X-ray Imager (SXI) is an X-ray CCD camera onboard the ASTRO-H X-ray observatory. The CCD chip used is a P-channel back-illuminated type, and has a 200-µm thick depletion layer, with which the SXI covers the energy range between 0.4 keV and 12 keV. Its imaging area has a size of 31 mm x 31 mm. We arrange four of the CCD chips in a 2 by 2 grid so that we can cover a large field-of-view of 38' x 38'. We cool the CCDs to -120 °C with a single-stage Stirling cooler. As was done for the CCD camera of the Suzaku satellite, XIS, artificial charges are injected to selected rows in order to recover charge transfer inefficiency due to radiation damage caused by in-orbit cosmic rays. We completed fabrication of flight models of the SXI and installed them into the satellite. We verified the performance of the SXI in a series of satellite tests. On-ground calibrations were also carried out and detailed studies are ongoing.

  10. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

The amount of captured video is growing with the increased number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day". This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames); (ii) a response phase, generating the video synopsis as a response to the user's query.

  11. Design of a CCD Camera for Space Surveillance

    DTIC Science & Technology

    2016-03-05

Laboratory fabricated the CCID-51M, a 2048×1024 pixel Charge-Coupled Device (CCD) imager. [1] The mission objective is to observe and detect satellites in… phased to transfer the charge to the outputs. An electronic shutter is created by having an equal area of pixels covered by an opaque metal mask. The… [Figure 4: CDS Timing Diagram] By design the CCD readout rate is 400 kHz. This rate was chosen so that reading the 2×10⁶ pixels from one output is less than

  12. CCD Photometer Installed on the Telescope - 600 OF the Shamakhy Astrophysical Observatory: I. Adjustment of CCD Photometer with Optics - 600

    NASA Astrophysics Data System (ADS)

    Lyuty, V. M.; Abdullayev, B. I.; Alekberov, I. A.; Gulmaliyev, N. I.; Mikayilov, Kh. M.; Rustamov, B. N.

    2009-12-01

A short description of the optical and electrical scheme of the CCD photometer with the U-47 camera, installed at the Cassegrain focus of the ZEISS-600 telescope of the ShAO, NAS Azerbaijan, is provided. A focal reducer with a reduction factor of 1.7 is applied. The equivalent focal distances of the telescope with the focal reducer are calculated. General calculations of the optimum distance from the focal plane and of the sizes of the photometer's optical filters are presented.
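The equivalent focal distance with the reducer follows directly from the reduction factor, and the plate scale from the focal length. A sketch; the 7500 mm native focal length used in the example (the ZEISS-600 as an f/12.5 Cassegrain) is an assumption, not a figure from the abstract:

```python
def equivalent_focal_length(focal_mm, reduction_factor):
    """Equivalent focal length with a focal reducer of the given factor."""
    return focal_mm / reduction_factor

def plate_scale_arcsec_per_mm(focal_mm):
    """Plate scale in arcsec per mm in the focal plane: 206265 / f."""
    return 206265.0 / focal_mm
```

With the assumed 7500 mm focal length, the 1.7x reducer gives an equivalent focal length of about 4412 mm, widening the field and brightening the image on the CCD.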

  13. Acquiring neural signals for developing a perception and cognition model

    NASA Astrophysics Data System (ADS)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for humanoid robot walking; 6 DOFs on the shoulders and arms for arm motion; and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.

  14. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, corneal reflection image, and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image including the regularly reflected component, by placing a polarizing filter in front of CCD-1, and another image not including that component, by placing no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Through the experiment, it is shown that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity position calculation of the feature points is possible.
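A minimal NumPy sketch of the subtraction-and-threshold idea follows. Which difference image isolates which feature point is an assumption here (the abstract only states that two subtractions accentuate the pupil, corneal reflection, and dot-marks), and all names are illustrative:

```python
import numpy as np

def extract_features(ccd1, ccd2, ccd3, thresh=50):
    """Accentuate feature points by inter-image subtraction.

    Assumed roles: ccd1 = polarized image (keeps the regularly
    reflected, i.e. specular, component), ccd2 = same wavelength
    without the polarizing filter, ccd3 = second wavelength
    (off-axis illumination, dark pupil).
    """
    a = ccd1.astype(np.int32)
    b = ccd2.astype(np.int32)
    c = ccd3.astype(np.int32)
    diff_pupil = np.clip(b - c, 0, None)    # bright-pupil minus dark-pupil
    diff_corneal = np.clip(a - b, 0, None)  # specular component only
    return diff_pupil > thresh, diff_corneal > thresh

def centroid(mask):
    """Gravity-position (centroid) of a thresholded feature mask."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```

The gravity position of each thresholded mask then gives the feature-point coordinates, matching the real-time subtraction/threshold/gravity pipeline the abstract describes.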

  15. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
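The three-step structure can be illustrated on 1-D motion. This sketch substitutes a plain moving average for the paper's l1-optimized (total-variation-minimizing) camera path, so it shows only the accumulate/smooth/correct skeleton, not the actual optimization:

```python
import numpy as np

def smooth_camera_path(dx, dy, window=15):
    """Moving-average smoothing of an accumulated camera path.

    dx, dy: estimated per-frame translations. The per-frame motion is
    accumulated into a path, the path is smoothed, and each frame is
    warped by (smoothed - original) to stabilize the video.
    """
    path_x = np.cumsum(dx)            # accumulated inter-frame motion
    path_y = np.cumsum(dy)
    kernel = np.ones(window) / window
    # mode="same" keeps one correction value per frame; edges are damped.
    sx = np.convolve(path_x, kernel, mode="same")
    sy = np.convolve(path_y, kernel, mode="same")
    return sx - path_x, sy - path_y   # per-frame stabilizing correction
```

For a constant-velocity pan, the interior corrections are near zero, which is the desired behavior: intentional motion is kept while high-frequency jitter would be cancelled.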

  16. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  17. Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karger, A.E.; Weiss, R.; Gesteland, R.F.

    A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with an oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) on one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 × 10⁻¹⁸ mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images.
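The conditional median filter mentioned at the end can be sketched as follows: a pixel is replaced by its 3x3 neighborhood median only when it deviates from that median by more than a threshold, so cosmic-ray spikes are removed while ordinary pixels are left untouched. The threshold value and names here are illustrative, not from the paper:

```python
import numpy as np

def conditional_median(img, threshold=100):
    """Despike an image with a conditional 3x3 median filter."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    # Stack the 9 shifted neighborhoods and take a per-pixel median.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    med = np.median(stack, axis=0)
    out = img.astype(np.float64).copy()
    spikes = np.abs(out - med) > threshold  # only strong outliers qualify
    out[spikes] = med[spikes]               # ordinary pixels pass unchanged
    return out
```

Unlike an unconditional median filter, this preserves real image detail everywhere except at isolated outliers, which matters for quantitative low-light imaging.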

  18. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    Study of the feasibility of commercially available action cameras in recording video of spine surgery. Recent innovation in wearable action cameras with high-definition video recording enables surgeons to use a camera during an operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device for wearing and holding throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, GoPro uniquely offers 2.7K or 4K resolution, and video resolution was best in GoPro. Regarding field of view (FOV), GoPro can adjust the point of interest and field of view according to the surgery, and its narrow FOV option was the best for recording video clips to share. Google Glass has potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast a surgery, with development of the devices and applied programs in the future.

  19. Automated Meteor Fluxes with a Wide-Field Meteor Camera Network

    NASA Technical Reports Server (NTRS)

    Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.

    2013-01-01

    Within NASA, the Meteoroid Environment Office (MEO) is charged to monitor the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of #/sq km/hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux. As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First, a weather algorithm indicates whether sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes vs. camera intensities. The stellar limiting magnitude is derived from this fit and easily converted to a limiting meteor magnitude for the active shower or sporadic source.
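The limiting-magnitude fit described above can be sketched as a straight-line fit of catalog stellar magnitude against the logarithm of instrumental intensity, extrapolated to an assumed detection-floor intensity. The floor value and function names are illustrative, not from ASGARD:

```python
import numpy as np

def limiting_magnitude(mags, intensities, intensity_floor=5.0):
    """Fit stellar magnitude vs. log10(camera intensity), then
    extrapolate to the faintest usable camera response.

    The magnitude scale is logarithmic in flux, so magnitude vs.
    log10(intensity) is well modeled by a straight line.
    """
    slope, intercept = np.polyfit(np.log10(intensities), mags, 1)
    return slope * np.log10(intensity_floor) + intercept
```

Feeding in synthetic stars that follow the pogson relation m = -2.5 log10(I) + C recovers the expected limit exactly, which is a quick sanity check on the fit direction and coefficient order.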

  20. A new video laryngo-pharyngoscope with shape-holding coiled tube and surgical forceps: a preliminary study.

    PubMed

    Tamura, Koichi; Kim, Masanobu; Abe, Koji; Toda, Naoki; Jinouchi, Osamu; Kalubi, Bukasa; Takeda, Noriaki

    2009-12-01

    We developed a new video laryngo-pharyngoscope with a shape-holding coiled tube and examined its effectiveness in some patients. The video laryngo-pharyngoscope is designed to inspect the pharynx and larynx transorally and to perform surgical manipulations. The scope consists of a coiled tube, a grip with a trigger connected to the forceps, and a CCD camera with a battery. The stainless-steel coiled tube of the scope is flexible but shape-holding, so that its shape can be changed by hand and the new orientation remains invariable during both inspection and operation in the pharynx and larynx. After local anesthesia, the operator holds the scope in one hand and pulls the patient's tongue with the other hand. The operator then inserts the scope transorally while monitoring video images that are wirelessly transferred to the display, ensuring that the forceps have reached the area of interest, and treats the lesions. Using the scope, we successfully examined upper airway lesions, removed foreign bodies from the pharynx, and performed both resection of a benign tumor and biopsy of a malignant tumor in the pharynx and larynx. However, we could hardly remove vocal fold polyps because of the structural limitation of the scope. We demonstrated that the new video laryngo-pharyngoscope can be used safely and successfully for the inspection and removal of lesions in the oropharynx and supraglottic area of the larynx and will be a useful tool for minimally invasive office-based surgery.

  1. Control and protection of outdoor embedded camera for astronomy

    NASA Astrophysics Data System (ADS)

    Rigaud, F.; Jegouzo, I.; Gaudemard, J.; Vaubaillon, J.

    2012-09-01

    The purpose of the CABERNET-PODET-MET (CAmera BEtter Resolution NETwork, Pole sur la Dynamique de l'Environnement Terrestre - Meteor) project is the automated observation, by triangulation with three cameras, of meteor showers in order to calculate meteoroid trajectories and velocities. The scientific goal is to search for the parent body, comet or asteroid, of each observed meteor. Installing outdoor cameras to perform astronomical measurements for several years with high reliability requires a very specific design for the camera box. This contribution shows how we fulfilled the various functions of these boxes, such as cooling of the CCD, heating to melt snow and ice, and protection against moisture, lightning, and sunlight. We present the principal and secondary functions, the product breakdown structure, the grid of criteria for evaluating technical solutions, the adopted technology products, and their implementation in multifunction subsets for miniaturization purposes. To manage this project, we aimed for the lowest manpower and development time for every part. In the appendix, we present measurements of the image-quality evolution during CCD cooling, and some pictures of the prototype.

  2. The 2011 October Draconids outburst - I. Orbital elements, meteoroid fluxes and 21P/Giacobini-Zinner delivered mass to Earth

    NASA Astrophysics Data System (ADS)

    Trigo-Rodríguez, Josep M.; Madiedo, José M.; Williams, I. P.; Dergham, Joan; Cortés, Jordi; Castro-Tirado, Alberto J.; Ortiz, José L.; Zamorano, Jaime; Ocaña, Francisco; Izquierdo, Jaime; Sánchez de Miguel, Alejandro; Alonso-Azcárate, Jacinto; Rodríguez, Diego; Tapia, Mar; Pujols, Pep; Lacruz, Juan; Pruneda, Francesc; Oliva, Armand; Pastor Erades, Juan; Francisco Marín, Antonio

    2013-07-01

    On 2011 October 8, the Earth crossed the dust trails left by comet 21P/Giacobini-Zinner during its 19th and 20th century perihelion approaches, with the comet being close to perihelion. The geometric circumstances of that encounter were thus favourable to produce a meteor storm, but the trails were much older than in the 1933 and 1946 historical encounters. As a consequence the 2011 October Draconid display exhibited several activity peaks with Zenithal Hourly Rates of about 400 meteors h-1. In fact, if the display had not been forecasted, it could have passed almost unnoticed, as it was strongly attenuated for visual observers due to the Moon. This suggests that most meteor storms of a similar nature could have passed historically unnoticed under unfavourable weather and Moon observing conditions. The possibility of obtaining information on the physical properties of cometary meteoroids penetrating the atmosphere under low geocentric velocity encounter circumstances motivated us to set up a special observing campaign. In addition to the Spanish Fireball Network wide-field all-sky and CCD video monitoring, other high-sensitivity 1/2-inch black-and-white CCD video cameras were attached to modified medium-field lenses for obtaining high-resolution orbital information. The trajectory, radiant, and orbital data of 16 October Draconid meteors observed at multiple stations are presented. The results show that the meteors appeared from a geocentric radiant located at α = 263.0 ± 0.4° and δ = +55.3 ± 0.3°, in close agreement with the radiant predicted for the 1873-1894 and the 1900 dust trails. The estimated mass of material from 21P/Giacobini-Zinner delivered to Earth during the 6 h outburst was around 950 ± 150 kg.

  3. Improving Radar Snowfall Measurements Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2005-05-01

    A video disdrometer has been recently developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. The recent upgrade of the UND C-band weather radar to dual-polarimetric capabilities, along with the development of the UND Glacial Ridge intensive atmospheric observation site, has presented a valuable opportunity to attempt to improve radar estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), has been deployed at the Glacial Ridge site for most of the 2004-2005 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS has proven its ability to operate continuously in the adverse conditions often observed in the Northern Plains. The RIS is able to provide crystal habit information, variability of particle size distributions over the lifecycle of the storm, snowfall rates, and estimates of snow density. This information, in conjunction with hand measurements of density and crystal habit, will be used to build a database for comparisons with polarimetric data from the UND radar. This database will serve as the basis for improving snowfall estimates using polarimetric radar observations. Preliminary results from several case studies will be presented.

  4. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
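The quantity underlying such measurements is the pixel-pixel intensity covariance estimated over many frames. A generic NumPy estimator (not the paper's full sensor model; names are illustrative) might look like:

```python
import numpy as np

def intensity_correlations(frames):
    """Estimate G(i, j) = <I_i I_j> - <I_i><I_j> from a frame stack.

    frames: array of shape (n_frames, n_pixels). Under photon-pair
    illumination, correlated pixel pairs stand out in G above the
    uncorrelated background.
    """
    f = frames.astype(np.float64)
    mean = f.mean(axis=0)
    # <I_i I_j> averaged over frames, then subtract <I_i><I_j>.
    second = f.T @ f / f.shape[0]
    return second - np.outer(mean, mean)
```

On a toy stack where two pixels always fire together and a third is independent, the covariance is positive for the correlated pair and zero for the independent one.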

  5. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  6. CCD detector development projects by the Beamline Technical Support Group at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Lee, John H.; Fernandez, Patricia; Madden, Tim; Molitsky, Michael; Weizeorick, John

    2007-11-01

    This paper will describe two ongoing detector projects being developed by the Beamline Technical Support Group at the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). The first project is the design and construction of two detectors: a single-CCD system and a two-by-two Mosaic CCD camera for Small-Angle X-ray Scattering (SAXS). Both of these systems utilize the Kodak KAF-4320E CCD coupled to fiber optic tapers, custom mechanical hardware, electronics, and software developed at ANL. The second project is a Fast-CCD (FCCD) detector being developed in a collaboration between ANL and Lawrence Berkeley National Laboratory (LBNL). This detector will use ANL-designed readout electronics and a custom LBNL-designed CCD, with 480×480 pixels and 96 outputs, giving very fast readout.

  7. Transient full-field vibration measurement using spectroscopical stereo photogrammetry.

    PubMed

    Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan

    2010-12-20

    Contrasted with other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and is more accurate than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.

  8. First Results of Digital Topography Applied to Macromolecular Crystals

    NASA Technical Reports Server (NTRS)

    Lovelace, J.; Soares, A. S.; Bellamy, H.; Sweet, R. M.; Snell, E. H.; Borgstahl, G.

    2004-01-01

    An inexpensive digital CCD camera was used to record X-ray topographs directly from large imperfect crystals of cubic insulin. The topographs recorded were not as detailed as those which can be measured with film or emulsion plates, but do show great promise. Six reflections were recorded using a set of finely spaced stills encompassing the rocking curve of each reflection. A complete topographic reflection profile could be digitally imaged in minutes. Interesting and complex internal structure was observed by this technique. The CCD chip used in the camera has anti-blooming circuitry and produced good data quality even when pixels became overloaded.

  9. Cat-eye effect reflected beam profiles of an optical system with sensor array.

    PubMed

    Gong, Mali; He, Sifeng; Guo, Rui; Wang, Wei

    2016-06-01

    In this paper, we propose an applicable propagation model for Gaussian beams passing through any cat-eye target, instead of the traditional simplification consisting of only a mirror placed at the focal plane of a lens. According to the model, the cat-eye effect of CCD cameras affected by defocus is numerically simulated. Excellent agreement between experimental results and the theoretical analysis is obtained. It is found that the reflectivity distribution at the focal plane of the cat-eye optical lens has great influence on the results, while the cat-eye effect reflected beam profiles of CCD cameras show obvious periodicity.

  10. Upgrading the Arecibo Potassium Lidar Receiver for Meridional Wind Measurements

    NASA Astrophysics Data System (ADS)

    Piccone, A. N.; Lautenbach, J.

    2017-12-01

    Lidar can be used to measure a plethora of variables: temperature, density of metals, and wind. This REU project is focused on the setup of a semi-steerable telescope that will allow the measurement of meridional wind in the mesosphere (80-105 km) with Arecibo Observatory's potassium resonance lidar. This includes the basic design concept of a steering system that is able to turn the telescope by a maximum of 40°, alignment of the mirror with the telescope frame to find the correct focusing, and the triggering and programming of a CCD camera. The CCD camera's purpose is twofold: looking through the telescope and matching the stars in the field of view with a star map to accurately calibrate the steering system, and determining the laser beam properties and position. Using LabVIEW, the frames from the CCD camera can be analyzed to identify the most intense pixel in the image (and therefore the brightest point in the laser beam or stars) by plotting average pixel values per row and column and locating the peaks of these plots. The location of this pixel can then be plotted, determining the jitter in the laser and its position within the field of view of the telescope.
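The row/column peak-finding described for locating the brightest point can be sketched directly (function name illustrative):

```python
import numpy as np

def brightest_spot(frame):
    """Locate the most intense region by peak-finding on the average
    pixel value per row and per column, as described for the beam /
    star position estimate.
    """
    row_profile = frame.mean(axis=1)  # average over columns -> one value per row
    col_profile = frame.mean(axis=0)  # average over rows -> one value per column
    return int(np.argmax(row_profile)), int(np.argmax(col_profile))
```

Tracking this (row, column) estimate frame by frame gives the laser-jitter time series mentioned in the abstract. Averaging whole rows and columns also suppresses single-pixel noise compared with a raw 2-D argmax.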

  11. Time-resolved imaging of the plasma development in a triggered vacuum switch

    NASA Astrophysics Data System (ADS)

    Park, Wung-Hoa; Kim, Moo-Sang; Son, Yoon-Kyoo; Frank, Klaus; Lee, Byung-Joon; Ackerman, Thilo; Iberler, Marcus

    2017-12-01

    Triggered vacuum switches (TVS) are used in pulsed power technology, particularly as closing switches for high voltages and high charge transfer. A non-sealed-off prototype was designed with a side-on quartz window to investigate the evolution of the trigger discharge into the main discharge. Image acquisition was done with a fast CCD camera, a PI-MAX2 from Princeton Instruments, with a minimum exposure time of 2 ns. The electrode configuration of the prototype is a conventional six-rod gap type; the capacitor bank has C = 16.63 μF, which at 20 kV charging voltage corresponds to a total stored charge of 0.3 C or a total energy of 3.3 kJ. The peak current is 88 kA. Owing to the vastly different light intensities during the trigger and main discharge, the complete discharge is split into three phases: a trigger breakdown phase, an intermediate phase, and a main discharge phase. The CCD camera images of the first phase show instabilities of the trigger breakdown; in phase 2, three different discharge modes are observed. After the first current maximum the discharge behavior is reproducible.

  12. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control

    PubMed Central

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-01-01

    This paper proposes a vehicle light detection method using a hyperspectral camera, instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera, for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; in addition, rear lamps are made of LEDs and halogen lamps. This paper builds on recent research in IHC. Some problems exist in the detection of headlights, such as erroneous detection of street lights, sign lights, and reflections from the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method on three types of lights (LED, HID, and halogen). PMID:27399720

  13. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  15. Three-dimensional shape measurement and calibration for fringe projection by considering unequal height of the projector and the camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu Feipeng; Shi Hongjian; Bai Pengxiang

    In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal-height arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the effectiveness and reliability of the calibration method.
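    The linear least-squares calibration step described above can be sketched in a few lines; the numbers below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

# Hypothetical sketch of the calibration fit: the abstract reports a linear
# relationship between the out-of-plane calibration coefficient k and the
# image y coordinate, recovered by linear least squares. Values are synthetic.
y = np.array([100.0, 200.0, 300.0, 400.0, 500.0])   # pixel rows (synthetic)
k = np.array([0.51, 0.62, 0.73, 0.84, 0.95])        # coefficients (synthetic)

slope, intercept = np.polyfit(y, k, 1)  # degree-1 fit = linear least squares

def coeff_at(y_pix):
    """Calibrated out-of-plane coefficient at an arbitrary image row."""
    return slope * y_pix + intercept
```

    With a per-row calibrated coefficient, each point of the fringe phase map can then be converted to height regardless of where it falls in the image.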

  16. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
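    As a quick plausibility check on the figures quoted above, the raw pixel rate of a 3,840 × 2,160 sensor scanned progressively at 30 frames per second works out as follows (the 10-bit depth is an assumption for illustration, not stated in the abstract):

```python
# Back-of-envelope data rate for the 3840 x 2160 @ 30 fps format.
width, height, fps = 3840, 2160, 30
pixels_per_second = width * height * fps          # raw pixel rate
# At an assumed 10 bits per pixel:
bits_per_second = pixels_per_second * 10

print(pixels_per_second)   # 248832000, i.e. ~249 Mpixel/s
```

    A quarter of a gigapixel per second explains why the abstract emphasizes unusually high image-sensor and FPGA clock frequencies.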

  17. Low-cost laser speckle contrast imaging of blood flow using a webcam.

    PubMed

    Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
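    The speckle contrast computation underlying this technique is straightforward: the contrast K is the ratio of standard deviation to mean intensity in a small sliding window. A minimal sketch (the 7x7 window is a common convention, not a detail taken from the paper):

```python
import numpy as np

# Spatial laser speckle contrast: K = sigma / mean over a sliding window.
# Lower K indicates more blurring of the speckle pattern, i.e. faster flow.
def speckle_contrast(frame, win=7):
    frame = frame.astype(np.float64)
    h, w = frame.shape
    K = np.zeros((h - win + 1, w - win + 1))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            patch = frame[i:i + win, j:j + win]
            m = patch.mean()
            K[i, j] = patch.std() / m if m > 0 else 0.0
    return K
```

    A perfectly uniform frame gives K = 0 everywhere; a fully developed static speckle pattern approaches K = 1.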

  18. Low-cost laser speckle contrast imaging of blood flow using a webcam

    PubMed Central

    Richards, Lisa M.; Kazmi, S. M. Shams; Davis, Janel L.; Olin, Katherine E.; Dunn, Andrew K.

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion. PMID:24156082

  19. [Virtual reality in ophthalmological education].

    PubMed

    Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J

    2001-04-01

    We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC, which renders the scenery using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for position tracking, a stereo display, and a computer. The three cameras are mounted under the operation table, from where they can observe the interior of the mechanical eye. Using small markers, the cameras recognize the instruments and the eye. Their positions and orientations in space are determined by stereoscopic back projection. The simulation runs at more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source, which can be moved inside the eye, and the shadow of the instruments on the retina, which is important for navigational purposes.
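    The stereoscopic back projection step can be sketched as a linear (DLT-style) triangulation: given each camera's 3x4 projection matrix and a marker's pixel coordinates in two views, the 3D position is the least-squares solution of four linear constraints. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

# Linear triangulation: each view contributes two rows of the homogeneous
# system A X = 0; the 3D point is the null-space direction (smallest
# singular vector), de-homogenized at the end.
def triangulate(P1, P2, uv1, uv2):
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize
```

    With three cameras, as in the simulator, two more rows are appended and the same least-squares solution becomes over-determined and more robust.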

  20. Plane development of lateral surfaces for inspection systems

    NASA Astrophysics Data System (ADS)

    Francini, F.; Fontani, D.; Jafrancesco, D.; Mercatelli, L.; Sansoni, P.

    2006-08-01

    The problem of developing the lateral surfaces of a 3D object can arise in item inspection using automated imaging systems. In an industrial environment, these control systems typically work at a high rate and have to assure reliable inspection of each single item. For compactness requirements it is not convenient to utilise three or four CCD cameras to control all the lateral surfaces of an object. Moreover, it is impossible to mount optical components near the object if it is placed on a conveyor belt. The paper presents a system that integrates on a single CCD picture the images of both the frontal surface and the lateral surface of an object. It consists of a freeform lens mounted in front of a CCD camera with a commercial lens. The aim is to have a good magnification of the lateral surface while maintaining a low aberration level, so that the pictures can be exploited by image processing software. The freeform lens, made of plastic, redirects the light coming from the object to the camera lens. The final result is to obtain on the CCD: - the frontal and lateral surface images, with a selected magnification (even with two different values for the two images); - a gap between these two images, so that an automatic method to analyse the images can be easily applied. A simple method to design the freeform lens is illustrated. The procedure also allows the imaging system to be obtained by modifying a current inspection system, reducing the cost.

  1. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray-scale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.

  2. Method for 3D noncontact measurements of cut trees package area

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

    Progress in imaging sensors and computers creates the background for numerous 3D imaging applications across a wide variety of manufacturing activity. There are many demands for automated precise measurements in the wood industry. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements now in practice, an experimental system for automated non-contact wood measurement was developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers, and original software for image processing and 3D measurements. The proposed method of measurement is based on capturing a stereo pair of the front of the tree package and performing an image orthotransformation into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and calculation of the circle areas. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurements, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
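    Once the circle shapes are recognized in the orthotransformed image, the front area of the package is simply the sum of the individual circle areas. A trivial sketch of that last step (radii are synthetic examples):

```python
import math

# Front area of the cut-tree package as the sum of recognized log
# cross-sections, each modeled as a circle of measured radius (meters).
def package_front_area(radii_m):
    return sum(math.pi * r ** 2 for r in radii_m)
```

    Multiplying this area by the measured package length then yields the volume estimate the system is ultimately after.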

  3. The closing behavior of mechanical aortic heart valve prostheses.

    PubMed

    Lu, Po-Chien; Liu, Jia-Shing; Huang, Ren-Hong; Lo, Chi-Wen; Lai, Ho-Cheng; Hwang, Ned H C

    2004-01-01

    Mechanical artificial heart valves rely on reverse flow to close their leaflets. This mechanism creates regurgitation and water hammer effects that may cause cavitation, damage blood cells, and cause thromboembolism. This study analyzes the closing mechanisms of monoleaflet (Medtronic Hall 27), bileaflet (Carbo-Medics 27; St. Jude Medical 27; Duromedics 29), and trileaflet valves in a circulatory mock loop that includes an aortic root with three sinuses. The downstream flow field velocity was measured via digital particle image velocimetry (DPIV). A high speed camera (PIVCAM 10-30 CCD video camera) tracked leaflet movement at 1000 frames/s. All valves open in 40-50 msec, but monoleaflet and bileaflet valves close in much less time (< 35 msec) than the trileaflet valve (> 75 msec). During the acceleration phase of systole, the monoleaflet valve forms a major and a minor flow, the bileaflet valve has three jet flows, and the trileaflet valve produces a single central flow like physiologic valves. In the deceleration phase, the aortic sinus vortices hinder monoleaflet and bileaflet valve closure until reverse flows and high negative transvalvular pressure push the leaflets rapidly to a hard closure. Conversely, the vortices help close the trileaflet valve more softly, probably causing less damage, lessening back flow, and providing a washing effect that may prevent thrombus formation.

  4. Evaluation of the ImmerVision IMV1-1/3NI Panomorph Lens on a Small Unmanned Ground Vehicle (SUGV)

    DTIC Science & Technology

    2013-07-01

    360°. For the above reason, a 1.3-MP Chameleon color universal serial bus (USB) camera with a 1/3-in CCD from PGR was selected instead of...recommended qualified cameras to host the panomorph lens. Having the advantage of a small footprint, the Chameleon camera with the IMV1 lens can be easily

  5. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  6. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of the heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
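    The three pixel-level fusion rules compared in the study are easy to state in software form. A hedged numpy sketch (the actual implementation runs as RTL on the FPGA, not on arrays like this):

```python
import numpy as np

# Pixel-level fusion of registered visible (vis) and infrared (ir) frames
# using the three rules named in the abstract.
def fuse(vis, ir, mode="avg", w=0.5):
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    if mode == "avg":     # gray-scale weighted averaging
        return w * vis + (1 - w) * ir
    if mode == "max":     # maximum selection
        return np.maximum(vis, ir)
    if mode == "min":     # minimum selection
        return np.minimum(vis, ir)
    raise ValueError(mode)
```

    All three rules are purely element-wise, which is exactly what makes them cheap to pipeline in FPGA logic at video rates.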

  7. Thrust Measurements in Ballistic Pendulum Ablative Laser Propulsion Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazolin, H.; Rodrigues, N. A. S.; Minucci, M. A. S.

    This paper describes a setup for thrust measurement in ablative laser propulsion experiments, based on a simple ballistic pendulum associated with an imaging system, which is being assembled at IEAv. A light aluminium pendulum holding samples is placed inside a 100-liter vacuum chamber with two optical windows: the first (in ZnSe) for the laser beam and the second (in fused quartz) for pendulum visualization. A TEA-CO{sub 2} laser beam is focused onto the samples, producing ablation and transferring linear momentum to the pendulum as a whole. A CCD video camera captures the oscillatory movement of the pendulum, and its trajectory is obtained by image processing. By fitting the trajectory of the pendulum to a damped sinusoidal curve it is possible to obtain the amplitude of the movement, which is directly related to the momentum transferred to the sample.
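    The trajectory fit described above becomes a linear least-squares problem once the damping rate and oscillation frequency are treated as known, since a damped sinusoid exp(-g t)(a sin wt + b cos wt) is linear in a and b. A sketch under that assumption (not the authors' actual fitting code):

```python
import numpy as np

# Fit the oscillation amplitude of a damped sinusoid x(t) given damping g
# and angular frequency w: the model is linear in (a, b), so ordinary
# least squares recovers them, and the amplitude is A = sqrt(a^2 + b^2).
def fit_amplitude(t, x, g, w):
    M = np.column_stack([np.exp(-g * t) * np.sin(w * t),
                         np.exp(-g * t) * np.cos(w * t)])
    (a, b), *_ = np.linalg.lstsq(M, x, rcond=None)
    return np.hypot(a, b)
```

    The recovered amplitude, together with the pendulum mass and geometry, gives the momentum transferred by the ablation pulse.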

  8. In-situ measurement of concentrated solar flux and distribution at the aperture of a central solar receiver

    NASA Astrophysics Data System (ADS)

    Ferriere, Alain; Volut, Mikael; Perez, Antoine; Volut, Yann

    2016-05-01

    A flux mapping system has been designed, implemented and tested at the top of the Themis solar tower in France. This system features a moving bar associated with a CCD video camera and a flux gauge mounted on the bar, used as the reference measurement for calibration purposes. Images and the flux signal are acquired separately. The paper describes the equipment and focuses on the data processing used to produce the distribution of flux density and concentration at the aperture of the solar receiver. Finally, the solar power entering the receiver is estimated by integration of the flux density. The processing is largely automated in the form of dedicated software with fast execution. Special attention is paid to the accuracy of the results, to the robustness of the algorithm and to the speed of the processing.
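    The final integration step is a direct discrete sum: the power through the aperture is the calibrated flux-density map summed over pixels, times the aperture area each pixel represents. A minimal sketch (calibration against the flux gauge is assumed already applied):

```python
import numpy as np

# Total solar power through the aperture, approximating the integral of
# flux density (W/m^2) by a pixel-wise sum times the pixel footprint (m^2).
def aperture_power(flux_map_w_per_m2, pixel_area_m2):
    return float(np.sum(flux_map_w_per_m2) * pixel_area_m2)
```

    For example, a uniform 1000 W/m^2 map over a 1 m^2 aperture must return 1 kW, which is a convenient sanity check for the calibration chain.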

  9. Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.

    2016-05-01

    Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor captures each row of the hologram progressively, interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at different instants of these disturbances. As a final effect, phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified by holograms of microscopic biological static objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy due to their better resolution and lower cost.

  10. [Observation of oral actions using digital image processing system].

    PubMed

    Ichikawa, T; Komoda, J; Horiuchi, M; Ichiba, H; Hada, M; Matsumoto, N

    1990-04-01

    A new digital image processing system to observe oral actions is proposed. The system provides analyses of motion pictures along with other physiological signals. The major components are a video tape recorder, a digital image processor, a percept scope, a CCD camera, an A/D converter and a personal computer. Five reference points were marked on the lip and eyeglasses of 9 adult subjects. Lip movements were recorded and analyzed using the system when uttering five vowels and [ka, sa, ta, ha, ra, ma, pa, ba]. 1. Positions of the lip when uttering five vowels were clearly classified. 2. Active articulatory movements of the lip were not recognized when uttering consonants [k, s, t, h, r]. It seemed lip movements were dependent on tongue and mandibular movements. Downward and rearward movements of the upper lip, and upward and forward movements of the lower lip were observed when uttering consonants [m, p, b].

  11. Analysis on laser plasma emission for characterization of colloids by video-based computer program

    NASA Astrophysics Data System (ADS)

    Putri, Kirana Yuniati; Lumbantoruan, Hendra Damos; Isnaeni

    2016-02-01

    Laser-induced breakdown detection (LIBD) is a sensitive technique for characterization of colloids with small size and low concentration. There are two types of detection, optical and acoustic. Optical LIBD employs a CCD camera to capture the plasma emission and uses this information to quantify the colloids. This technique requires sophisticated technology which is often pricey. In order to build a simple, home-made LIBD system, a dedicated computer program based on MATLAB™ for analyzing laser plasma emission was developed. The analysis was conducted by counting the number of plasma emissions (breakdowns) during a certain period of time. The breakdown probability provides information on colloid size and concentration. A validation experiment showed that the computer program performed well in analyzing the plasma emissions. A graphical user interface (GUI) was also developed to make the program more user-friendly.
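    The counting logic at the heart of such a program can be sketched simply: a frame counts as a breakdown event when its peak intensity exceeds a threshold, and the breakdown probability is events per laser pulse. The thresholding criterion here is an illustrative assumption, not the paper's exact detection rule:

```python
import numpy as np

# Breakdown probability from a sequence of camera frames, one per laser
# pulse: count frames whose peak intensity exceeds a detection threshold.
def breakdown_probability(frames, threshold):
    events = sum(1 for f in frames if np.max(f) > threshold)
    return events / len(frames)
```

    The measured probability is then compared against calibration curves to infer colloid size and concentration.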

  12. Excimer-laser-induced shock wave and its dependence on atmospheric environment

    NASA Astrophysics Data System (ADS)

    Krueger, Ronald R.; Krasinski, Jerzy S.; Radzewicz, Czeslaw

    1993-06-01

    High speed shadow photography is performed on excimer-laser-ablated porcine corneas and rubber stoppers to capture the excimer-laser-induced shock waves at various time delays between 40 and 320 nanoseconds. The shock waves in air, nitrogen, and helium are recorded by tangentially illuminating the ablated surface with a tunable dye laser pumped by the XeCl excimer laser pulse. The excimer laser ablates the specimen and excites the dye laser, which is then passed through an optical delay line before illuminating the specimen. The shadow of the shock wave produced during ablation is then cast on a screen and photographed with a CCD video camera. The system is pulsed 30 times per second to allow a video recording of the shock wave at a fixed time delay. We conclude that high energy acoustic waves and gaseous particles are liberated during excimer laser corneal ablation, and dissipate on a submicrosecond time scale. The velocity of their dissipation is dependent on the atmospheric environment and can be increased two-fold when the ablation is performed in a helium atmosphere. Therefore, local temperature increases due to the liberation of high energy gases may be reduced by using helium during corneal photoablation.

  13. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

    In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named Thunder, is a high-deflection piezo device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button. This included determination of tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.

  14. Telescopes and recording systems used by amateurs for studying planets in our solar system - an overview

    NASA Astrophysics Data System (ADS)

    Kowollik, S.; Gaehrken, B.; Fiedler, M.; Gerstheimer, R.; Sohl, F.; Koschny, D.

    2008-09-01

    During the last couple of years, engaged amateur astronomers have benefited from the rapid development of commercial CCD cameras and video techniques, and from the availability of high-quality mirror telescopes. Until recently, such technical equipment and the related handling experience had been reserved to research institutes. This contribution presents the potential capabilities of amateur astronomers and describes their approach to the production of data. The quality of the telescopes used is described with respect to aperture and resolving power, as is the quantum efficiency of the sensitive b/w CCD cameras with respect to the detectable wavelength. Beyond these facts, the necessary exposure times for CCD images using special filters are discussed. Today's amateur astronomers are able to image the bodies of the solar system in the wavelength range between 340 and 1050 nm [1], [2], [3], [4]. This covers a wide range of the spectrum that is investigated with cameras on board space telescopes or planetary probes. While space probes usually obtain high-resolution images of individual surface or atmospheric features of the planets, the images of amateur astronomers show the entire surface of the observed planet. Both datasets together permit a more comprehensive analysis of the data acquired in each case. The "Venus Amateur Observing Project" of the European Space Agency [5] is a first step toward a successful co-operation between amateur astronomers and planetary scientists. Individual CCD images captured through the turbulent atmosphere of the Earth usually show characteristic distortions of the arriving wave fronts. If one captures hundreds or thousands of images on a video stream in a very short time, there will always be some undistorted images within the data. Computer programmes are available to identify and retrieve these undistorted images and store them for further processing [7].
This method is called "Lucky Imaging" and it allows one to achieve nearly the theoretical limit of telescopic resolution. By stacking the undistorted images, the signal-to-noise ratio of the data can be increased significantly. "Lucky Imaging" has been a standard in the amateur community for several years. Contrary to space-based observations, the data rate is not limited by the capacity of any radio transmission, but only by the scanning rate and capacity of a modern computer hard disk. An individual video with the uncompressed raw data can be as large as 4 to 5 GB. (EPSC Abstracts, Vol. 3, EPSC2008-A-00191, European Planetary Science Congress, 2008.) In addition to the video data, so-called meta data such as the observing location, the recording time, the filter used, and environmental conditions (air temperature, wind velocity, air humidity and seeing) are also documented. From these meta data, the central meridian (CM) of the observed planet during the time of image acquisition can be determined. After data reduction, the resulting images can be used to produce map projections or position measurements of albedo structures on the planetary surface or of details within atmospheric features. Amateur astronomers can observe objects in the solar system for large continuous time periods due to the large number of existing observers, e.g. the members of the Association of Lunar & Planetary Observers [6], and their telescopes. They can also react very quickly to special events, since they do not have to submit requests for telescope time to a national or international organization. References: [1] Venus images in uv-light: B. Gährken: http://www.astrode.de/venus07.htm R. Gerstheimer: http://www.astromanie.de/astromania/galerie/venus/venus.html S. Kowollik: http://www.sternwarte-zollern-alb.de/mitarbeiterseiten/kowollik/venus M. Weigand: http://www.skytrip.de/venus2007.htm [2] Images of planets in visible light: M.
Fiedler: http://bilder.astroclub-radebeul.de/kategorien.php?action=showukats&kat=0 R. Gerstheimer: http://www.astromanie.de/ S. Kowollik: http://www.sternwarte-zollern-alb.de/mitarbeiterseiten/kowollik [3] Images of planets in methane band light: S. Kowollik: http://www.sternwarte-zollern-alb.de/beobachtungen/methanband/index-gb.htm [4] Images of planets in ir-light: S. Kowollik: http://www.sternwarte-zollern-alb.de/beobachtungen/ir/index-gb.htm [5] ESA amateur astronomer observing campaign: http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=38833 http://www.rssd.esa.int/index.php?project=VENUS [6] Association of Lunar & Planetary Observation (ALPO): http://alpo-astronomy.org/ [7] Software: Cor Berrevoets (Registax): http://www.astronomie.be/registax/ Christian Buil (IRIS): http://www.astrosurf.com/buil/us/iris/iris.htm Georg Dittié (Giotto): http://www.videoastronomy.org/giotto.htm Grischa Hahn (WinJupos): http://www.grischa-hahn.homepage.t-online.de/astro/winjupos/index.htm
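    The frame-selection-and-stacking core of Lucky Imaging can be sketched in a few lines; the gradient-energy sharpness metric used here is one common choice, not necessarily what the cited programs (Registax, IRIS, Giotto) use:

```python
import numpy as np

# "Lucky Imaging" sketch: score every frame by a simple sharpness metric,
# keep the best fraction, and average them to raise the SNR while
# rejecting frames distorted by atmospheric seeing.
def sharpness(frame):
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(gx ** 2 + gy ** 2))   # gradient energy

def lucky_stack(frames, keep=0.1):
    scores = [sharpness(f) for f in frames]
    n = max(1, int(len(frames) * keep))
    best = np.argsort(scores)[-n:]             # indices of sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

    Real pipelines also align the selected frames before averaging; that registration step is omitted here for brevity.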

  15. Evaluation of large format electron bombarded virtual phase CCDs as ultraviolet imaging detectors

    NASA Technical Reports Server (NTRS)

    Opal, Chet B.; Carruthers, George R.

    1989-01-01

    In conjunction with an external UV-sensitive cathode, an electron-bombarded CCD may be used as a high quantum efficiency/wide dynamic range photon-counting UV detector. Results are presented for the case of a 1024 x 1024, 18-micron square pixel virtual phase CCD used with an electromagnetically focused f/2 Schmidt camera, which yields excellent single-photoevent discrimination and counting efficiency. Attention is given to the vacuum-chamber arrangement used to conduct system tests and the CCD electronics and data-acquisition systems employed.

  16. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  17. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185 ● OCT 2017 ● US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  18. [Objective evaluation of driving fatigue by using variability of pupil diameter under spontaneous pupillary fluctuation conditions].

    PubMed

    Xiong, Xingliang; Zhang, Yan; Chen, Mengmeng; Chen, Longcong

    2013-04-01

    Objective evaluation of driver drowsiness is necessary for the suppression of fatigued driving and the prevention of traffic accidents. We have developed a new method that utilizes pupillary diameter variability (PDV) under spontaneous pupillary fluctuation conditions. The method consists of three main steps. First, we record a 90-s-long infrared video of the pupil with an infrared-sensitive CCD camera. Second, we employ an edge detection algorithm based on the curvature characteristics of the pupil boundary to extract a set of points on the visible pupil boundary, and then fit a circle to these points to obtain the diameter of the pupil in the current frame of the video. Finally, the value of PDV over the 90-s video is calculated. In an experimental pilot study, the values of PDV of two groups were measured. One group rated themselves as alert (12 men), the other group as sleepy (13 men). The results showed significant differences between the two groups, with values of 0.06 +/- 0.005 and 0.141 +/- 0.042, respectively. Taking into account that spontaneous pupillary fluctuation is innervated by the autonomic nervous system, whose activity is known to change in parallel with drowsiness and cannot be influenced by a person's subjective motives, we concluded from these experiments that PDV could be used to evaluate driver fatigue objectively.
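    The circle-fitting and variability steps can be sketched as follows: an algebraic (Kasa) least-squares circle fit to the visible boundary points, then PDV taken here as the coefficient of variation of the per-frame diameters. Both choices are illustrative assumptions, not details confirmed by the paper:

```python
import numpy as np

# Kasa circle fit: a circle x^2 + y^2 = 2*cx*x + 2*cy*y + c (with
# c = r^2 - cx^2 - cy^2) is linear in (2cx, 2cy, c), so least squares
# recovers center and radius even from a partial boundary arc.
def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2, cy2 / 2
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, 2 * r          # center and diameter

def pdv(diameters):
    d = np.asarray(diameters, dtype=np.float64)
    return d.std() / d.mean()     # normalized diameter variability
```

    Fitting a full circle to a partial arc is what lets the method tolerate eyelid occlusion of part of the pupil boundary.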

  19. Single-Pulse Dual-Energy Mammography Using a Binary Screen Coupled to Dual CCD Cameras

    DTIC Science & Technology

    1999-08-01

    Fossum, "Active pixel sensors—Are CCD’s Dinosaurs ?," Proc. SPIE 1900, 2-14 (1993). "S. Mendis, S. E. Kemeny, R. Gee, B. Pain, and E. R. Fossum, "Progress...Clin Oncol 13:1470-1477, 1995 12. Wahl RL, Zasadny K, Helvie M, et al: Metabolic monitoring of breast cancer chemohormonotherapy using posi- tron

  20. Ground-based observations of 951 Gaspra: CCD lightcurves and spectrophotometry with the Galileo filters

    NASA Technical Reports Server (NTRS)

    Mottola, Stefano; Dimartino, M.; Gonano-Beurer, M.; Hoffmann, H.; Neukum, G.

    1992-01-01

    This paper reports the observations of 951 Gaspra carried out at the European Southern Observatory (La Silla, Chile) during the 1991 apparition, using the DLR CCD Camera equipped with a spare set of the Galileo SSI filters. Time-resolved spectrophotometric measurements are presented. The occurrence of spectral variations with rotation suggests the presence of surface variegation.

  1. Ultrafast Imaging using Spectral Resonance Modulation

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2016-04-01

    CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
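    The single-pixel compressive-sensing measurement model the abstract relies on can be sketched numerically. This is a toy example with hypothetical sizes; the random modulation patterns play the role of the etalon-array SLM states, and ISTA stands in for whatever reconstruction algorithm the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                       # scene size, number of measurements (m < n)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]   # a sparse scene

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random modulation patterns
y = Phi @ x_true                                 # single-pixel measurements

def ista(Phi, y, lam=0.01, iters=500):
    # Iterative soft-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1.
    L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
    xk = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = xk + Phi.T @ (y - Phi @ xk) / L
        xk = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return xk

x_hat = ista(Phi, y)   # recovers the sparse scene from half as many measurements
```

    The point of the abstract is that the modulation patterns, not the reconstruction, are the speed bottleneck; here each row of `Phi` corresponds to one SLM state.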

  2. Optical Transient Monitor (OTM) for BOOTES Project

    NASA Astrophysics Data System (ADS)

    Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.

    2003-04-01

    The Optical Transient Monitor (OTM) is software for controlling the three wide- and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC based and is a powerful tool for taking images from two SBIG CCD cameras simultaneously, or from one camera only. The control program for the BOOTES cameras runs under Windows 98 or MS-DOS; a version for Windows 2000 is now in preparation. Five main modes of operation are supported. The OTM program can control the cameras and evaluate image data without human interaction.

  3. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  4. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  5. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  6. Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype

    NASA Astrophysics Data System (ADS)

    Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille

    2012-06-01

    The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electron-multiplication technologies such as the emCCD (electron-multiplying CCD) and the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking algorithm designed and implemented on a rugged workstation is described. The results and the performance of the system on identification and tracking are presented and discussed.
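    A minimal stand-in for the frame-to-frame association step of such a multiple-target tracker is greedy nearest-neighbour matching of spot centroids within a distance gate. The actual LUSIPHER algorithm is not described in this abstract, so the gating value and the matching strategy below are illustrative assumptions.

```python
import numpy as np

def associate(prev_pts, new_pts, max_dist=3.0):
    # Greedy nearest-neighbour matching: each previous spot claims the
    # closest unclaimed spot in the new frame, within a gate of max_dist.
    pairs = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(new_pts - p, axis=1)
        for j in np.argsort(d):
            j = int(j)
            if d[j] > max_dist:
                break
            if j not in used:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs

prev_f = np.array([[0.0, 0.0], [10.0, 10.0]])
new_f = np.array([[10.5, 10.2], [0.3, -0.1]])   # spots reordered between frames
tracks = associate(prev_f, new_f)               # [(0, 1), (1, 0)]
```

    With ~2000 Shack-Hartmann spots on a near-regular grid, the gate keeps the association unambiguous even at a 2 ms frame interval.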

  7. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
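    The range calculation from known spacing and spot position reduces to similar triangles. The geometry below is a sketch with hypothetical numbers (none of the values come from the patent): a single laser beam parallel to the optical axis at a known baseline from the lens.

```python
# Hypothetical geometry: 10 cm camera-to-laser baseline, 8 mm focal
# length, 10 um pixel pitch.
f = 8e-3          # focal length, m
baseline = 0.10   # camera-to-laser spacing, m
pitch = 10e-6     # pixel pitch, m

def range_from_spot(pixel_offset):
    # Similar triangles for a laser beam parallel to the optical axis:
    # spot offset on the sensor u = f * baseline / Z  =>  Z = f * baseline / u.
    u = pixel_offset * pitch
    return f * baseline / u

z = range_from_spot(80)   # a spot 80 px off the principal point -> 1.0 m
```

    With the diffractive element producing multiple spots, each spot gives an independent range sample, so several surface points can be ranged from one video frame.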

  8. White light phase shifting interferometry and color fringe analysis for the detection of contaminants in water

    NASA Astrophysics Data System (ADS)

    Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh

    2016-03-01

    We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology; the 3-chip color camera has less color cross talk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituent colors, thus yielding three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time consuming and less sensitive to the surrounding environment. Initially, 3D phase maps of the bacteria are reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications for the detection of contaminants in water without using any chemical processing or fluorescent dyes.
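    The per-channel five-frame phase arithmetic can be sketched as follows. This assumes pi/2 phase steps and the standard Hariharan five-step estimator; the paper's exact algorithm and step size may differ.

```python
import numpy as np

def phase_from_five_frames(I1, I2, I3, I4, I5):
    # Hariharan five-step estimator for frames phase-shifted by
    # -pi, -pi/2, 0, +pi/2, +pi relative to the test phase.
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic single-pixel check: I_k = A + B*cos(phi + k*pi/2), k = -2..2
phi = 0.7
A, B = 1.0, 0.5
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in (-2, -1, 0, 1, 2)]
phi_hat = phase_from_five_frames(*frames)   # recovers phi = 0.7
```

    Applied to the R, G and B planes separately, the same five exposures yield three wavelength-resolved phase maps, which is what enables the multi-wavelength RI determination.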

  9. One-Meter Telescope in Kolonica Saddle - 4 Years of Operation

    NASA Astrophysics Data System (ADS)

    Kudzej, I.; Dubovsky, P. A.

    2010-12-01

    The actual technical status of the 1 m Vihorlat National Telescope (VNT) at the Astronomical Observatory at Kolonica Saddle is presented: the Cassegrain and Nasmyth foci, the autoguiding system, computer-controlled focusing and fine movements, and other improvements achieved recently. For the two-channel photoelectric photometer, a system of channel calibration based on an artificial light source is described. For the CCD camera FLI PL1001E currently installed in the Cassegrain focus, we present transformation coefficients from our instrumental to the international photometric BVRI system. The measurements were made during regular observations when good photometry of constant field stars was available. Before the FLI camera acquisition we used an SBIG ST9 camera; transformation coefficients for this instrument are presented as well. In the second part of the paper we present the results of variable star observations with the 1 m telescope over the recent four years. The first experimental electronic measurements were made in 2006, both with CCD cameras and with the two-channel photoelectric photometer. The regular observing program has been in operation since 2007. Only a few stars are suitable for observation with the two-channel photoelectric photometer; generally, the photometer is preferable when fast brightness changes (on a time scale of seconds) must be recorded, so the majority of observations are done with CCD detectors. We present a brief overview of the most important observing programs: long-term monitoring of selected intermediate polars and eclipse observations of SW Sex stars. Occasional observing campaigns were performed on several interesting objects: OT J071126.0+440405, V603 Aql, V471 Tau eclipse timings, and Z And in outburst.

  10. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  11. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only record gray-scale images, whereas multicolor imaging obtains the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency-division multiplexing (FDM), which modulates the excitation light and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained, and a multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color dynamic fluorescence video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color-separation elements. In addition, the method has a good filtering effect on ambient light and other light signals that are not affected by the modulation process.
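    The per-pixel demodulation step can be sketched for a single pixel. The frame rate, modulation frequencies and fluorophore strengths below are hypothetical; each excitation is modulated as a raised cosine at its own frequency, and the amplitude spectrum is read at each modulation frequency to separate the channels.

```python
import numpy as np

fs = 1000.0                  # hypothetical camera frame rate, Hz
n = 1000                     # frames analysed
t = np.arange(n) / fs
f1, f2 = 50.0, 120.0         # modulation frequencies of the two lasers

# One pixel seeing both fluorophores (strengths 0.8 and 0.3):
signal = 0.8 * (1 + np.cos(2 * np.pi * f1 * t)) / 2 \
       + 0.3 * (1 + np.cos(2 * np.pi * f2 * t)) / 2

# Demodulation: single-sided amplitude spectrum, sampled at f1 and f2.
spec = np.abs(np.fft.rfft(signal)) / n * 2
freqs = np.fft.rfftfreq(n, 1 / fs)
ch1 = spec[np.argmin(np.abs(freqs - f1))]   # ~0.4 (half of 0.8)
ch2 = spec[np.argmin(np.abs(freqs - f2))]   # ~0.15
```

    Unmodulated light (ambient background) lands in the DC bin and other bins, which is exactly the filtering effect the abstract describes.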

  12. Intense acoustic bursts as a signal-enhancement mechanism in ultrasound-modulated optical tomography.

    PubMed

    Kim, Chulhong; Zemp, Roger J; Wang, Lihong V

    2006-08-15

    Biophotonic imaging with ultrasound-modulated optical tomography (UOT) promises ultrasonically resolved imaging in biological tissues. A key challenge in this imaging technique is a low signal-to-noise ratio (SNR). We show significant UOT signal enhancement by using intense time-gated acoustic bursts. A CCD camera captured the speckle pattern from a laser-illuminated tissue phantom. Differences in speckle contrast were observed when ultrasonic bursts were applied, compared with when no ultrasound was applied. When CCD triggering was synchronized with burst initiation, acoustic-radiation-force-induced displacements were detected. To avoid mechanical contrast in UOT images, the CCD camera acquisition was delayed several milliseconds until transient effects of acoustic radiation force attenuated to a satisfactory level. The SNR of our system was sufficiently high to provide an image pixel per acoustic burst without signal averaging. Because of the substantially improved SNR, the use of intense acoustic bursts is a promising signal enhancement strategy for UOT.
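    The speckle-contrast quantity behind the observed differences can be sketched as follows. This is the standard local contrast statistic, not the authors' processing chain; the window size is an assumption.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    # Local speckle contrast K = sigma / mean over win x win windows.
    # Ultrasound-induced modulation blurs the speckles and lowers K,
    # which is the contrast difference the CCD measurement detects.
    w = sliding_window_view(img, (win, win))
    return w.std(axis=(-2, -1)) / w.mean(axis=(-2, -1))

rng = np.random.default_rng(1)
K_speckle = speckle_contrast(rng.exponential(1.0, (64, 64)))  # fully developed speckle: K ~ 1
K_flat = speckle_contrast(np.ones((20, 20)))                  # uniform field: K = 0
```

    Delaying the CCD trigger relative to the burst, as in the paper, changes which of these contrast regimes the exposure samples.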

  13. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    PubMed Central

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2014-01-01

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging in order to evaluate and compare the commonly used charge coupled device (CCD) camera systems employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios in comparison to unintensified CCD systems operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies. PMID:24506637

  14. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M., E-mail: Eva.Sevick@uth.tmc.edu

    2014-02-15

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging in order to evaluate and compare the commonly used charge coupled device (CCD) camera systems employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios in comparison to unintensified CCD systems operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies.

  15. A Three-Line Stereo Camera Concept for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon

    1997-01-01

    This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Some of the main features of the camera include: focal length 90 mm, FOV 18.5 deg, IFOV 78 (mu)rad, convergence angles (+/-)10 deg, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 Watts. From an orbit altitude of 250 km the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data are buffered in the camera's internal mass memory of 1 Gbit. After radiometric correction and application-dependent preprocessing, the data are compressed and ready for downlink. Due to the aggressive application of advanced technologies in the area of microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 Watts are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general-purpose enough to be applicable to other planetary missions such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
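    The quoted ground pixel size and swath follow directly from the stated optics, which can be checked with two lines of arithmetic (a flat-ground, nadir-pointing approximation):

```python
import math

# Quoted optics: IFOV 78 urad, orbit altitude 250 km, total FOV 18.5 deg.
ifov = 78e-6                        # rad per pixel
h = 250e3                           # m
gsd = ifov * h                      # ground pixel size: 19.5 m (~"20 m")

fov = math.radians(18.5)
swath = 2 * h * math.tan(fov / 2)   # ~81.4 km, matching the quoted 82 km
```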

  16. Design of a Day/Night Star Camera System

    NASA Technical Reports Server (NTRS)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2(exp -05) W/sq cm/sr/um in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340 x 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 x 6.8 micron pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degrees and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.

  17. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
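    One of the sensor figures of merit mentioned above, dynamic range, is a simple ratio of full-well capacity to the read-noise floor. A sketch of the calculation, borrowing the 45,000 e- full well quoted for the star-camera sensor earlier in this listing and an assumed 10 e- read noise:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Ratio of the largest storable signal to the read-noise floor, in dB.
    return 20 * math.log10(full_well_e / read_noise_e)

def dynamic_range_stops(full_well_e, read_noise_e):
    # The same ratio expressed in photographic stops (factors of two).
    return math.log2(full_well_e / read_noise_e)

# 45,000 e- full well, hypothetical 10 e- read noise: ~73 dB, ~12.1 stops
dr_db = dynamic_range_db(45000, 10)
dr_stops = dynamic_range_stops(45000, 10)
```

    Note this is the sensor-level ceiling; the paper's point is that optics, MTF and processing determine how much of it survives in the final image.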

  18. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  19. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity post-transform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a pair of post-transform bases is applied to the DWT coefficients. The pair of bases consists of a DCT basis and a Hadamard basis, which are used at high and low bit rates, respectively. The best post-transform is selected by an l_p-norm-based approach, and the post-transform is considered the sparse representation stage of CS. The post-transform coefficients are then resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rate without the excessive implementation complexity of JPEG2000. PMID:25114977
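    The l_p-norm selection rule between the two bases can be sketched on a toy coefficient block. This is a simplified version under stated assumptions (orthonormal DCT-II and Sylvester-Hadamard bases, p = 0.5); the paper's actual bases and rate-dependent switching are richer.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def hadamard_matrix(n):
    # Orthonormal Sylvester-Hadamard matrix (n a power of two).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def best_posttransform(block, p=0.5):
    # Apply both bases and keep the sparser coefficient set, measured by
    # the l_p quasi-norm (p < 1 rewards energy compaction).
    n = block.shape[0]
    C, H = dct_matrix(n), hadamard_matrix(n)
    c_dct = C @ block @ C.T
    c_had = H @ block @ H.T
    lp = lambda c: np.sum(np.abs(c) ** p)
    return ('dct', c_dct) if lp(c_dct) < lp(c_had) else ('hadamard', c_had)

# A smooth oscillatory block (a pure 2-D DCT mode) compacts under the DCT:
v = dct_matrix(8)[1]
name, coeffs = best_posttransform(np.outer(v, v))   # selects 'dct'
```

    Because both bases are orthonormal, the selection changes only the sparsity of the representation, never the energy, which is what makes the winner cheaper to code.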

  20. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  1. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
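    The least-squares fit of camera motion parameters to compressed-domain motion vectors can be sketched as follows. The three-parameter model below (pure pan, tilt and isotropic zoom) is a simplified assumption; the paper's parameterization may include more terms.

```python
import numpy as np

def fit_pan_tilt_zoom(xs, ys, us, vs):
    # Least-squares fit of a simplified global-motion model to macroblock
    # motion vectors: u = pan + zoom * x,  v = tilt + zoom * y,
    # with (x, y) the macroblock centre relative to the image centre.
    n = len(xs)
    A = np.zeros((2 * n, 3))
    b = np.empty(2 * n)
    A[0::2, 0] = 1.0; A[0::2, 2] = xs; b[0::2] = us
    A[1::2, 1] = 1.0; A[1::2, 2] = ys; b[1::2] = vs
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # (pan, tilt, zoom)

# Synthetic MPEG-like field: pan 2 px, tilt -1 px, zoom 0.05 per frame
gx, gy = np.meshgrid(np.arange(-4, 5) * 16.0, np.arange(-3, 4) * 16.0)
xs, ys = gx.ravel(), gy.ravel()
pan, tilt, zoom = fit_pan_tilt_zoom(xs, ys, 2 + 0.05 * xs, -1 + 0.05 * ys)
```

    Because the motion vectors come straight from the MPEG-1 bitstream (one per 16 x 16 macroblock), the fit runs without decoding any pixels, which is the source of the claimed speed.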

  2. Airborne infrared video radiometry as a low-cost tool for remote sensing of the environment, two mapping examples from Israel of urban heat islands and mineralogical site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Dor, E.; Saaroni, H.; Ochana, D.

    1996-10-01

    In this study we examined the capability of a laboratory infrared video camera for use in remote sensing of the environment. The instrument used, an INFRAMETRICS 760, was mounted onboard a Bell 206 helicopter. Under the flight conditions examined, the radiometer proved to be very stable and produced high-quality thermal images in real time. We studied two different environmental aspects: (1) the urban heat island of the most densely populated city in Israel, Tel-Aviv; and (2) the lithological distribution of a well-known mineralogical site in Israel, Makhtesh Ramon. In both studies the radiometer was able to produce a temperature presentation, rather than a gray scale, from altitudes of 7,000 and 10,000 feet at 70 knots air speed. The instrument produced a high-quality set of data in terms of signal-to-noise, stability, temperature accuracy and spatial resolution. In the Tel-Aviv case, the results showed that the urban heat island of the city can be depicted at very high spatial and thermal resolution and that a significant correlation exists between ground objects and the surrounding air temperature values. Based on the flight results, we generated an isotherm map of the city that, for the first time, located the urban heat island both on the meso- and microscale. In the case of Makhtesh Ramon, we found that under field conditions the radiometer, coupled with a VIS-CCD camera, can provide significant ATI parameters of the typical rocks that characterize the study area. Although further study is planned and suggested based on the current data, we conclude that airborne thermal video radiometry is a promising, inexpensive tool for monitoring the environment on a real-time basis. 10 refs., 5 figs., 1 tab.

  3. Measurements of 42 Wide CPM Pairs with a CCD

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2015-11-01

    This paper addresses the use of a Skyris 618C color CCD camera as a means of obtaining data for analysis in the measurement of wide common proper motion stars. The equipment setup is described and data collection procedure outlined. Results of the measures of 42 CPM stars are presented, showing the Skyris is a reliable device for the measurement of double stars.

  4. Video-rate optical dosimetry and dynamic visualization of IMRT and VMAT treatment plans in water using Cherenkov radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glaser, Adam K., E-mail: Adam.K.Glaser@dartmouth.edu, E-mail: Brian.W.Pogue@dartmouth.edu; Andreozzi, Jacqueline M.; Davis, Scott C.

    Purpose: A novel technique for optical dosimetry of dynamic intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) plans was investigated for the first time by capturing images of the induced Cherenkov radiation in water. Methods: A high-sensitivity, intensified CCD camera (ICCD) was configured to acquire a two-dimensional (2D) projection image of the Cherenkov radiation induced by IMRT and VMAT plans, based on the Task Group 119 (TG-119) C-Shape geometry. Plans were generated using the Varian Eclipse treatment planning system (TPS) and delivered using 6 MV x-rays from a Varian TrueBeam Linear Accelerator (Linac) incident on a water tank doped with the fluorophore quinine sulfate. The ICCD acquisition was gated to the Linac target trigger pulse to reduce background light artifacts, read out for a single radiation pulse, and binned to a resolution of 512 × 512 pixels. The resulting videos were analyzed temporally for various regions of interest (ROI) covering the planning target volume (PTV) and organ at risk (OAR), and summed to obtain an overall light intensity distribution, which was compared to the expected dose distribution from the TPS using a gamma-index analysis. Results: The chosen camera settings resulted in 23.5 frames per second dosimetry videos. Temporal intensity plots of the PTV and OAR ROIs confirmed the preferential delivery of dose to the PTV versus the OAR, and the gamma analysis yielded 95.9% and 96.2% agreement between the experimentally captured Cherenkov light distribution and the expected TPS dose distribution, based on a 3%/3 mm dose-difference and distance-to-agreement criterion, for the IMRT and VMAT plans, respectively. Conclusions: The results from this initial study demonstrate the first documented use of Cherenkov radiation for video-rate optical dosimetry of dynamic IMRT and VMAT treatment plans. The proposed modality has several potential advantages over alternative methods, including the real-time nature of the acquisition, and upon future refinement may prove to be a robust and novel dosimetry method with both research and clinical applications.
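    The gamma-index comparison used above can be sketched in one dimension. This is the standard Low-style global gamma formulation (3%/3 mm by default) as a brute-force search, not the authors' implementation.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    # Brute-force 1-D gamma evaluation: for each reference point, search
    # all evaluated points for the minimum combined dose-difference /
    # distance-to-agreement metric; a point passes when gamma <= 1.
    x = np.arange(len(dose_ref)) * spacing_mm
    dmax = dose_ref.max()
    gamma = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / (dd * dmax)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + diff2))
    return gamma

profile = np.linspace(0.0, 1.0, 50)
g_same = gamma_index_1d(profile, profile, 1.0)         # identical profiles: gamma = 0
g_off = gamma_index_1d(profile, 1.5 * profile, 1.0)    # gross error: gamma > 1 somewhere
```

    The quoted 95.9%/96.2% agreement figures correspond to the fraction of points with gamma <= 1 in the full 2-D version of this calculation.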

  5. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). (ESO PR Photo 22a/09: the CCD220 detector; ESO PR Photo 22b/09: the OCam camera; ESO PR Video 22a/09: OCam images.) "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second.
The new generation instruments require these corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence; Observatoire Astronomique de Marseille Provence), the Laboratoire d'Astrophysique de Grenoble (LAOG/INSU/CNRS, Université Joseph Fourier; Observatoire des Sciences de l'Univers de Grenoble), and the Observatoire de Haute Provence (OHP/INSU/CNRS; Observatoire Astronomique de Marseille Provence). OCam and the CCD220 are the result of five years' work, financed by the European Commission, ESO and CNRS-INSU, within the OPTICON project of the 6th Research and Development Framework Programme of the European Union. The development of the CCD220, supervised by ESO, was undertaken by the British company e2v technologies, one of the world leaders in the manufacture of scientific detectors. The corresponding OPTICON activity was led by the Laboratoire d'Astrophysique de Grenoble, France. The OCam camera was built by a team of French engineers from the Laboratoire d'Astrophysique de Marseille, the Laboratoire d'Astrophysique de Grenoble and the Observatoire de Haute Provence. In order to secure the continuation of this successful project, a new OPTICON project started in June 2009 as part of the 7th Research and Development Framework Programme of the European Union with the same partners, with the aim of developing a detector and camera with even more powerful functionality for use with an artificial laser star. This development is necessary to ensure the image quality of the future 42-metre European Extremely Large Telescope. ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom.
ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".

  6. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered increasing interest from many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz with a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
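    The 70 Hz flash is what makes the noise rejection possible: at a 4 kHz frame rate, each pixel's time series can be correlated against the flash frequency, and static infrared sources (sunlight, room lighting) contribute nothing at 70 Hz. A minimal sketch of this synchronous-detection idea, with simulated data and made-up array dimensions (not the authors' code):

```python
import numpy as np

FS = 4000.0       # photodiode array frame rate, Hz (from the abstract)
F_LED = 70.0      # LED flash frequency, Hz
N_PIXELS = 128    # assumed array length
N_SAMPLES = 400   # 0.1 s window = exactly 7 flash cycles

rng = np.random.default_rng(0)
t = np.arange(N_SAMPLES) / FS

# Simulated data: DC background plus noise everywhere, 50%-duty LED flash
# added on pixel 37 (the necklace location we want to recover).
frames = 5.0 + 0.1 * rng.standard_normal((N_SAMPLES, N_PIXELS))
frames[:, 37] += 2.0 * (np.sin(2 * np.pi * F_LED * t) > 0)

# Synchronous detection: per-pixel signal power at the flash frequency.
# The quadrature pair makes the result insensitive to the flash phase.
ref_i = np.sin(2 * np.pi * F_LED * t)
ref_q = np.cos(2 * np.pi * F_LED * t)
power = (frames.T @ ref_i) ** 2 + (frames.T @ ref_q) ** 2

print(int(np.argmax(power)))  # locates the necklace pixel -> 37
```

    A real implementation would also have to handle the square wave's harmonics and pixel-to-pixel gain differences, but the principle is the same: constant illumination integrates to zero against the 70 Hz references.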

  7. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  8. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is very high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  9. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key-frame sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
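    As a concrete illustration of the regularizer mentioned above: the capped l21-norm sums the l2 norms of a matrix's rows, with each row's contribution capped at a threshold, so that no single (outlier) data point can dominate the objective. A small sketch in our own notation, not the paper's code:

```python
import numpy as np

def capped_l21(Z, theta):
    # Capped l2,1-norm: sum over rows of min(||z_i||_2, theta).
    # Unlike the plain l2,1-norm, outlier rows saturate at theta.
    row_norms = np.linalg.norm(Z, axis=1)
    return np.minimum(row_norms, theta).sum()

Z = np.array([[3.0, 4.0],    # norm 5 -> capped at theta = 2
              [0.0, 1.0],    # norm 1
              [0.0, 0.0]])   # norm 0
print(capped_l21(Z, theta=2.0))  # -> 3.0
```

    With the plain l2,1-norm the first row would contribute 5 and swamp the other terms; capping is what gives the selection its robustness to outliers.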

  10. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. 4 OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each...Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital’s Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital

  11. Coaxial fundus camera for ophthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device that must combine low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult requirements to satisfy are reflection-free illumination and the final alignment, owing to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, degrade image quality, and poor alignment renders the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, combined with a 175 mm focal-length doublet corrected for infinity, making the system easy to operate and very compact.

  12. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
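    The correction described above is linear and pixel-wise. A minimal sketch of the idea (our variable names; the reference level is taken as the mean of the dark-subtracted base image, per the abstract's recommendation):

```python
import numpy as np

def correct_nonuniformity(raw, dark, base):
    # Linear spatial-nonuniformity (flat-field) correction:
    # the dark-subtracted base image, captured under a uniform radiance
    # field, serves as a per-pixel gain map; each raw image is rescaled
    # so that every pixel responds as if it had the reference gain.
    flat = base.astype(float) - dark       # per-pixel gain map
    reference = flat.mean()                # reference digital level
    return (raw.astype(float) - dark) * reference / flat

# Toy 2x2 example: pixel (0, 1) is 10% less sensitive than the others.
dark = np.full((2, 2), 5.0)
base = dark + np.array([[100.0, 90.0], [100.0, 100.0]])
raw  = dark + np.array([[ 50.0, 45.0], [ 50.0,  50.0]])  # uniform scene
corrected = correct_nonuniformity(raw, dark, base)
print(np.allclose(corrected, corrected[0, 0]))  # uniform after correction
```

    Pixels with low sensitivity in the base image receive a proportionally larger gain, so a uniform scene comes out uniform after correction, which is exactly the behavior the authors verify with the integrator cube.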

  13. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  14. Snowfall Retrievals Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2004-12-01

    A video disdrometer has recently been developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in Eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, the variability of particle size distributions over the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship for the storm. These results, along with several other cases, will be shown during the presentation.
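    Snowfall-reflectivity relations of the kind mentioned above conventionally take the power-law form Z = a·S^b, which becomes linear in log-log space and can be fitted by ordinary regression. A toy fit with synthetic numbers (illustrative only, not the RIS measurements):

```python
import numpy as np

# Fit Z = a * S**b by linear regression in log-log space, with Z the
# radar reflectivity factor (mm^6 m^-3) and S the disdrometer-derived
# snowfall rate (mm/h). The data below are synthetic.
S = np.array([0.2, 0.5, 1.0, 2.0, 4.0])   # snowfall rate, mm/h
Z = 100.0 * S ** 2.0                      # synthetic "observations"

# log Z = b * log S + log a  ->  slope b, intercept log a
b, log_a = np.polyfit(np.log(S), np.log(Z), 1)
a = np.exp(log_a)
print(round(a, 1), round(b, 2))  # recovers a = 100.0, b = 2.0
```

    With real disdrometer and WSR-88D data the points scatter about the line, and the fitted (a, b) pair characterizes the storm, which is what a Z-S relationship derived "from the storm" amounts to.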

  15. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We will show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs as well as smartphones and, more and more, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via software was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  16. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  17. Rugged Video System For Inspecting Animal Burrows

    NASA Technical Reports Server (NTRS)

    Triandafils, Dick; Maples, Art; Breininger, Dave

    1992-01-01

    Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.

  18. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
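    Although the abstract is truncated here, the underlying geometry is simple: after sunset at the building's base, the Sun's depression angle grows roughly as δ = ωt, and a point at height h on the wall stays sunlit until h = R(sec δ − 1) ≈ Rδ²/2, so R ≈ 2h/(ωt)². A sketch with invented measurements (idealised equator-at-equinox geometry, not the paper's data):

```python
import math

# Earth's radius from the timed rise of the sunset shadow up a wall.
# Idealisation: the Sun's depression angle grows as delta = OMEGA * t,
# and height h stays lit until h ~ R * delta**2 / 2, so R ~ 2h/(OMEGA*t)**2.
OMEGA = 2 * math.pi / 86164.0   # Earth's sidereal rotation rate, rad/s

h = 45.0    # shadow rise measured on the building, m (made-up value)
t = 51.6    # time taken for the shadow to rise that far, s (made-up value)

R = 2 * h / (OMEGA * t) ** 2
print(f"{R / 1e6:.2f} Mm")  # ~6.4 Mm; Earth's mean radius is 6.37 Mm
```

    The quadratic dependence on t is why the timing must be accurate: a few percent of timing error doubles to a few percent more in the radius estimate.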

  19. CCD BVI c observations of Cepheids

    NASA Astrophysics Data System (ADS)

    Berdnikov, L. N.; Kniazev, A. Yu.; Sefako, R.; Kravtsov, V. V.; Zhujko, S. V.

    2014-02-01

    In 2008-2013, we obtained 11333 CCD BVI c frames for 57 Cepheids from the General Catalogue of Variable Stars. We performed our observations with the 76-cm telescope of the South African Astronomical Observatory (SAAO, South Africa) and the 40-cm telescope of the Cerro Armazones Astronomical Observatory of the Universidad Católica del Norte (OCA, Chile) using the SBIG ST-10XME CCD camera. The tables of observations, the plots of light curves, and the current light elements are presented. Comparison of our light curves with those constructed from photoelectric observations shows that the differences between their mean magnitudes exceed 0.05 mag in 20% of the cases. This suggests the necessity of performing CCD observations for all Cepheids.

  20. Instrumentation for Infrared Airglow Clutter.

    DTIC Science & Technology

    1987-03-10

    gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low...Specifications/Parameters VIDEO SENSOR: Camera ...... . LENZAR Intensicon-8 LLLTV using 2nd gen * micro-channel intensifier and proprietary camera tube

  1. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.

  2. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    NASA Astrophysics Data System (ADS)

    Nara, Shunsuke; Takahashi, Satoru

    In this paper, what we want to do is to develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder and two AC servo motors. First, in order to measure the working radius, we need to consider algorithm of a crane hook recognition. Then, we attach the cross mark on the crane hook. Namely, instead of the crane hook, we try to recognize the cross mark. Further, for the observation device, we construct PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device including new control system of mark tracking.
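    The tracking loop described above pairs a PI controller with an extended Kalman filter. As a hedged illustration only, the sketch below shows a linear constant-velocity Kalman filter core (the paper's actual filter is nonlinear, and the frame rate and noise levels here are invented):

```python
import numpy as np

dt = 1.0 / 30.0                       # assumed camera frame interval, s
F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity state transition
H = np.array([[1.0, 0.0]])            # we measure only the mark position
Q = 1e-4 * np.eye(2)                  # process noise covariance
R = np.array([[1e-2]])                # measurement noise covariance

x = np.array([0.0, 0.0])              # state: [position, velocity]
P = np.eye(2)                         # state covariance

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the measurement z.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed noiseless measurements of a mark drifting at 3.0 units/s.
for k in range(60):
    z = np.array([3.0 * k * dt])
    x, P = kf_step(x, P, z)
print(round(float(x[1]), 1))  # estimated velocity converges to ~3.0
```

    In the device, the filter's predicted mark position would feed the PI controller driving the two AC servo motors, keeping the cross mark centered between frames.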

  3. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  4. First Light for USNO 1.3-meter Telescope

    NASA Astrophysics Data System (ADS)

    Monet, A. K. B.; Harris, F. H.; Harris, H. C.; Monet, D. G.; Stone, R. C.

    2001-11-01

    The US Naval Observatory Flagstaff Station has recently achieved first light with its newest telescope -- a 1.3-meter, f/4 modified Ritchey-Chretien, located on the grounds of the station. The instrument was designed to produce a well-corrected field 1.7 degrees in diameter, and is expected to provide wide-field imaging with excellent astrometric properties. A number of test images have been obtained, using a temporary CCD camera in both drift and stare mode, and the results have been quite encouraging. Several astrometric projects are planned for this instrument, which will be operated in fully automated fashion. This paper will describe the telescope and its planned large-format mosaic CCD camera, and will preview some of the research for which it will be employed.

  5. Microwave transient analyzer

    DOEpatents

    Gallegos, C.H.; Ogle, J.W.; Stokes, J.L.

    1992-11-24

    A method and apparatus for capturing and recording indications of frequency content of electromagnetic signals and radiation is disclosed including a laser light source and a Bragg cell for deflecting a light beam at a plurality of deflection angles dependent upon frequency content of the signal. A streak camera and a microchannel plate intensifier are used to project Bragg cell output onto either a photographic film or a charge coupled device (CCD) imager. Timing markers are provided by a comb generator and a one shot generator, the outputs of which are also routed through the streak camera onto the film or the CCD imager. Using the inventive method, the full range of the output of the Bragg cell can be recorded as a function of time. 5 figs.

  6. Large Meteoroid Impact on the Moon on 17 March 2013

    NASA Technical Reports Server (NTRS)

    Moser, Danielle E.; Suggs, Robert M.; Suggs, Ronnie J.

    2014-01-01

    Since early 2006, NASA's Marshall Space Flight Center has observed over 300 impact flashes on the Moon, produced by meteoroids striking the lunar surface. On 17 March 2013 at 03:50:54.312 UTC, the brightest flash of an 8-year routine observing campaign was observed in two 0.35 m telescopes outfitted with Watec 902H2 Ultimate monochrome CCD cameras recording interleaved 30 fps video. Standard CCD photometric techniques, described in [1], were applied to the video after saturation correction, yielding a peak R magnitude of 3.0 ± 0.4 in a 1/30 second video exposure. This corresponds to a luminous energy of 7.1 × 10^6 J. Geographic Information System (GIS) tools were used to georeference the lunar impact imagery and yielded a crater location at 20.60 ± 0.17° N, 23.92 ± 0.30° W. The camera onboard the Lunar Reconnaissance Orbiter (LRO), a NASA spacecraft mapping the Moon from lunar orbit, discovered the fresh crater associated with this impact by comparing post-impact images from 28 July 2013 to pre-impact images from 12 Feb 2012. The images show fresh, bright ejecta around an 18 m diameter circular crater, with a 15 m inner diameter measured from the level of pre-existing terrain, at 20.7135° N, 24.3302° W. An asymmetrical ray pattern with both high and low reflectance ejecta zones extends 1-2 km beyond the crater, and a series of mostly low reflectance splotches can be seen within 30 km of the crater - likely due to secondary impacts [2]. The meteoroid impactor responsible for this event may have been part of a stream of large particles encountered by the Earth/Moon associated with the Virginid Meteor Complex, as evidenced by a cluster of 5 fireballs seen in Earth's atmosphere on the same night by the NASA All Sky Fireball Network [3] and the Southern Ontario Meteor Network [4].
Assuming a velocity-dependent luminous efficiency (ratio of luminous energy to kinetic energy) from [5] and an impact velocity of 25.6 km/s derived from fireball measurements, the impactor kinetic energy was 5.4 × 10^9 J and the impactor mass was 16 kg. Assuming an impact angle of 56° from horizontal (based on fireball orbit measurements), a regolith density of 1500 kg/m^3, and an impactor density between 1800 and 3000 kg/m^3, the impact crater diameter was estimated to be 8-18 m at the pre-impact surface and 10-23 m rim-to-rim using the Holsapple [6] and Gault [7] models, a result consistent with the observed crater.
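    The mass estimate above follows from two short steps: the luminous efficiency η converts observed luminous energy to kinetic energy, and KE = mv²/2 then gives the mass. A quick arithmetic check of the quoted numbers:

```python
# Back-of-envelope check of the figures reported in this record.
E_lum = 7.1e6   # observed luminous energy, J
KE = 5.4e9      # kinetic energy quoted above, J
v = 25.6e3      # impact velocity from fireball measurements, m/s

eta = E_lum / KE      # implied luminous efficiency, ~1.3e-3
m = 2 * KE / v ** 2   # impactor mass from KE = m * v**2 / 2

print(f"eta = {eta:.1e}, m = {m:.0f} kg")  # m ~ 16 kg, as reported
```

    The recovered mass of about 16 kg matches the value in the abstract, and the implied luminous efficiency of roughly 1.3 × 10^-3 is consistent with the velocity-dependent efficiencies used in lunar impact-flash photometry.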

  7. SU-F-J-190: Time Resolved Range Measurement System Using Scintillator and CCD Camera for the Slow Beam Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, N; Furukawa, T; Mizushima, K

    2016-06-15

    Purpose: To investigate the time structure of the range, we have verified the range shift due to the betatron tune shift for several synchrotron parameters. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in a black box. Using image processing, the range was determined as the 80 percent distal dose point of the depth-light distribution. The root mean square error of the range measurement using the scintillator and CCD system is about 0.2 mm. Range measurement was performed at intervals of 170 msec. The chromaticity of the synchrotron was changed in the range of plus or minus 1% from the reference chromaticity in this study. All of the particles inside the synchrotron ring were extracted with output beam intensities of 1.8×10^8 and 5.0×10^7 particles per second. Results: The time structures of the range were changed by changing the chromaticity. The reproducibility of the measurement was sufficient to observe the time structures of the range. The range shift depended on the number of residual particles inside the synchrotron ring. Conclusion: In slow beam extraction for scanned carbon-ion therapy, range shift is undesirable because it causes dose uncertainty in the target. We introduced a time-resolved range measurement using a scintillator and CCD system, which enabled us to verify the range shift with sufficient spatial resolution and reproducibility.

  8. Nonlinear feedback model attitude control using CCD in magnetic suspension system

    NASA Technical Reports Server (NTRS)

    Lin, Chin-E.; Hou, Ann-San

    1994-01-01

    A model attitude control system for a CCD camera magnetic suspension system is studied in this paper. In a recent work, a position and attitude sensing method was proposed. From this result, the model position and attitude of a magnetic suspension system can be detected by generating digital outputs. Based on this achievement, a control system design using nonlinear feedback techniques for magnetically suspended model attitude control is proposed.

  9. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes left and right video cameras mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video cameras and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
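
    The two processing steps described above, background-difference spot isolation and stereometric ranging, can be sketched as follows. The focal length and baseline values are illustrative assumptions, not figures from the patent.

```python
import numpy as np

FOCAL_PX = 800.0   # focal length in pixels (assumed calibration value)
BASELINE_M = 0.12  # separation between the two cameras (assumed)

def laser_spot(before, after, thresh=50):
    """Isolate the laser spot by differencing frames captured before
    and after laser illumination; return the spot's centroid column."""
    diff = np.abs(after.astype(int) - before.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    return float(xs.mean())

def stereo_range(col_left, col_right):
    """Classic stereometric ranging: range = f * B / disparity."""
    disparity = abs(col_left - col_right)
    return FOCAL_PX * BASELINE_M / disparity
```

    With these assumed parameters, a disparity of 8 pixels between the left and right spot centroids corresponds to a range of 12 m; a larger disparity means a closer target.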

  10. The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity

    NASA Astrophysics Data System (ADS)

    Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.

    2009-08-01

    The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. 
DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
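
    The focus-stack merging (z-stacking) option mentioned above can be illustrated with a toy per-pixel sharpest-frame merge. The flight algorithm is not described in this abstract, so the following is only a minimal sketch of the general technique: for each pixel, keep the value from the frame whose local Laplacian response (a simple sharpness proxy) is largest.

```python
import numpy as np

def zstack_merge(frames):
    """Toy focal-stack merge: per pixel, select the frame with the
    strongest local Laplacian response. Illustrative only; not the
    MAHLI onboard implementation."""
    stack = np.stack([f.astype(float) for f in frames])
    # sharpness proxy: magnitude of the discrete 4-neighbor Laplacian
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)          # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

    Real implementations typically smooth the sharpness map and blend across frame boundaries; this version trades that robustness for brevity.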

  11. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  12. Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation

    NASA Astrophysics Data System (ADS)

    Bourree, Loig E.

    2014-05-01

    Identification of potential threats in low-light conditions through imaging is commonly achieved through closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for purposes of long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensors may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.

  13. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  14. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  15. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.

  16. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    NASA Astrophysics Data System (ADS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

    2005-04-01

    When time-dependent processes within metallic structures should be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10^7 cm^-2 s^-1, which enables observation of sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the usable range must be less due to the inherent noise level from the intensifier. The results obtained should be seen as a starting point toward meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of piston, valves and camshaft with a micro-channel plate intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

  17. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  18. Experimental research on femto-second laser damaging array CCD cameras

    NASA Astrophysics Data System (ADS)

    Shao, Junfeng; Guo, Jin; Wang, Ting-feng; Wang, Ming

    2013-05-01

    Charge-Coupled Devices (CCDs) are widely used in military and security applications, such as airborne and ship-based surveillance, satellite reconnaissance, and so on. Homeland security requires effective means to negate these advanced overseeing systems. Research shows that CCD-based EO systems can be significantly dazzled or even damaged by high-repetition-rate pulsed lasers. Here we report on femtosecond laser interaction with a CCD camera, which is probably of great importance in the future. Femtosecond lasers are relatively new lasers with unique characteristics, such as extremely short pulse width (1 fs = 10^-15 s), extremely high peak power (1 TW = 10^12 W), and, especially, unique features when interacting with matter. Research on femtosecond laser interaction with materials (metals, dielectrics) clearly indicates that non-thermal effects dominate the process, in vast difference from long-pulse interaction with matter. First, damage threshold tests were performed with the femtosecond laser acting on the CCD camera. An 800 nm, 500 μJ, 100 fs laser pulse was used to irradiate an interline CCD solid-state image sensor in the experiment. In order to focus the laser energy onto the tiny CCD active cells, an optical system of F/5.6 was used. Sony production CCDs were chosen as typical targets. The damage threshold was evaluated with multiple test data. Point damage, line damage, and full-array damage were observed as the irradiated pulse energy was continuously increased during the experiment. The point damage threshold was found to be 151.2 mJ/cm^2, the line damage threshold 508.2 mJ/cm^2, and the full-array damage threshold 5.91 J/cm^2. Although the phenomena are almost the same as those of nanosecond laser interaction with CCDs, these damage thresholds are substantially lower than the data obtained from nanosecond laser interaction with CCDs. 
At the same time, the electrical features after different degrees of damage were tested with a multimeter. The resistance values between clock signal lines were measured. Comparing the resistance values of the CCD before and after damage, it was found that the resistance between the vertical transfer clock signal lines decreases significantly. The same result was found between the vertical transfer clock signal line and the earth electrode (ground). Finally, the damage position and the damage mechanism were analyzed from the above results and SEM morphological experiments. Point damage results from the laser destroying material and shows no macroscopic electrical influence. Line damage is quite different from point damage, showing a deeper material-corroding effect; more importantly, short circuits are found between vertical clock lines. Full-array damage appears even more severe than line damage under SEM, while no electrical features obviously different from those of line damage are found. Further research is anticipated on the femtosecond-laser-induced CCD damage mechanism with more advanced tools. This research is valuable in EO countermeasure and/or laser shielding applications.
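
    The damage thresholds above are quoted as fluences (energy per unit area). A small helper shows the conversion from pulse energy and focal-spot diameter to fluence in those units, assuming a flat-top spot; the actual focused-spot size and beam profile are not given in the abstract, so the example values are purely illustrative.

```python
import math

def fluence_mj_per_cm2(pulse_energy_uj, spot_diameter_um):
    """Convert pulse energy (microjoules) and focal-spot diameter
    (micrometres) to fluence in mJ/cm^2, assuming a flat-top spot."""
    area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2) ** 2  # um -> cm
    return (pulse_energy_uj * 1e-3) / area_cm2               # uJ -> mJ
```

    For instance, the full 500 μJ pulse focused to a hypothetical 100 μm spot would already exceed all three quoted thresholds by a wide margin, which is consistent with damage being controlled by attenuating the delivered energy.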

  19. Hyper Suprime-Cam: Camera dewar design

    NASA Astrophysics Data System (ADS)

    Komiyama, Yutaka; Obuchi, Yoshiyuki; Nakaya, Hidehiko; Kamata, Yukiko; Kawanomoto, Satoshi; Utsumi, Yousuke; Miyazaki, Satoshi; Uraguchi, Fumihiro; Furusawa, Hisanori; Morokuma, Tomoki; Uchida, Tomohisa; Miyatake, Hironao; Mineo, Sogo; Fujimori, Hiroki; Aihara, Hiroaki; Karoji, Hiroshi; Gunn, James E.; Wang, Shiang-Yu

    2018-01-01

    This paper describes the detailed design of the CCD dewar and the camera system which is a part of the wide-field imager Hyper Suprime-Cam (HSC) on the 8.2 m Subaru Telescope. On the 1.5° diameter focal plane (497 mm in physical size), 116 four-side buttable 2 k × 4 k fully depleted CCDs are tiled with 0.3 mm gaps between adjacent chips, which are cooled down to -100°C by two pulse tube coolers with a capability to exhaust 100 W heat at -100°C. The design of the dewar is basically a natural extension of Suprime-Cam, incorporating some improvements such as (1) a detailed CCD positioning strategy to avoid any collision between CCDs while maximizing the filling factor of the focal plane, (2) a spherical-washer mechanism adopted for the interface points to avoid any deformation caused by the tilt of the interface surface to be transferred to the focal plane, (3) the employment of a truncated-cone-shaped window, made of synthetic silica, to save the back focal space, and (4) a passive heat transfer mechanism to exhaust efficiently the heat generated from the CCD readout electronics which are accommodated inside the dewar. Extensive simulations using a finite-element analysis (FEA) method are carried out to verify that the design of the dewar is sufficient to satisfy the assigned errors. We also perform verification tests using the actually assembled CCD dewar to supplement the FEA and demonstrate that the design is adequate to ensure an excellent image quality which is key to the HSC. The details of the camera system, including the control computer system, are described as well as the assembling process of the dewar and the process of installation on the telescope.

  20. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  1. Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights

    DTIC Science & Technology

    2007-11-26

    sources include: Cameras - Digital cameras (still and video) have been improving in capability while simultaneously dropping in cost at a rate... citizen is caught on camera 300 times each day.5 The power of extensive video coverage is magnified greatly by the nascent capability for voice and... software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who

  2. Forensic applications of infrared imaging for the detection and recording of latent evidence.

    PubMed

    Lin, Apollo Chun-Yen; Hsieh, Hsing-Mei; Tsai, Li-Chin; Linacre, Adrian; Lee, James Chun-I

    2007-09-01

    We report on a simple method to record infrared (IR) reflected images in a forensic science context. Light sources using ultraviolet light have been used previously in the detection of latent prints, but the use of infrared light has been subject to less investigation. IR light sources were used to search for latent evidence, and the images were captured either by video or using a digital camera with a CCD array sensitive to IR wavelengths. Bloodstains invisible to the eye, inks, tire prints, gunshot residue, and charred documents on dark backgrounds were selected as typical materials that may be identified during a forensic investigation. All the evidence types could be detected and identified using a range of photographic techniques. In this study, a one-in-eight dilution of blood could be detected on 10 different samples of black cloth. When using 81 black writing inks, the observation rates were 95%, 88% and 42% for permanent markers, fountain pens and ball-point pens, respectively, on the three kinds of dark cloth. The black particles of gunshot residue scattered around the entrance hole under IR light were still observed at a distance of 60 cm from three different shooting ranges. A requirement of IR reflectivity is that there is a contrast between the latent evidence and the background; in the absence of this contrast no latent image will be detected, as with all light sources. The use of a video camera allows the recording of images either at a scene or in the laboratory. This report highlights and demonstrates the robustness of IR for detecting and recording the presence of latent evidence.

  3. A mathematical model of the inline CMOS matrix sensor for investigation of particles in hydraulic liquids

    NASA Astrophysics Data System (ADS)

    Kornilin, D. V.; Kudryavtsev, I. A.

    2016-10-01

    One of the most effective ways to diagnose the state of a hydraulic system is investigation of the particles in its liquids. The sizes of such particles range from 2 to 200 μm, and their concentration and shape reveal important information about the current state of equipment and the necessity of maintenance. In-line automatic particle counters (APCs), which are built into the hydraulic system, are widely used for determination of particle size and concentration. These counters are based on a single photodiode and a light-emitting diode (LED); however, samples of liquid are needed for analysis using a microscope or industrial video camera in order to get information about particle shapes. The act of obtaining the sample leads to contamination by other particles from the air or from the sample tube, meaning that the results are usually corrupted. Using a CMOS or CCD matrix sensor without any lens for an inline APC is the solution proposed by the authors. In this case the matrix sensor is put into the liquid channel of the hydraulic system and illuminated by an LED. This system could be stable in arduous conditions like high pressure and vibration of the hydraulic system; however, the image or signal from the matrix sensor needs to be processed differently in comparison with the signal from a microscope or industrial video camera because of the relatively short distance between LED and sensor. This paper introduces a mathematical model of a sensor with CMOS and LED which can be built into a hydraulic system. A computational algorithm and results are also provided, which can be useful for calculation of particle sizes and shapes using the signal from the CMOS matrix sensor.
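
    One simple way to extract particle sizes from a lensless shadow image, in the spirit of the sensor proposed above, is thresholding followed by connected-component labeling. The pixel pitch and threshold below are illustrative assumptions, and the diffraction effects at short LED-sensor distance (the paper's actual concern) are ignored in this sketch.

```python
import numpy as np

def particle_sizes(image, pixel_um=5.5, thresh=0.5):
    """Toy shadow-image sizing: particles block light, so pixels darker
    than `thresh` (fraction of full scale) are grouped by 4-connected
    flood fill, and an equivalent-circle diameter is derived from each
    blob's pixel count. Pixel pitch and threshold are assumptions."""
    dark = image < thresh
    seen = np.zeros_like(dark, dtype=bool)
    sizes = []
    for r0 in range(dark.shape[0]):
        for c0 in range(dark.shape[1]):
            if dark[r0, c0] and not seen[r0, c0]:
                # flood-fill one connected blob of dark pixels
                stack, count = [(r0, c0)], 0
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    count += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < dark.shape[0] and 0 <= cc < dark.shape[1]
                                and dark[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                # equivalent-circle diameter in micrometres
                sizes.append(2 * np.sqrt(count / np.pi) * pixel_um)
    return sizes
```

    The blob shape (aspect ratio, perimeter) could be derived from the same labeling pass, which is the kind of shape information the in-line sensor is meant to recover without drawing a liquid sample.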

  4. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  5. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  6. A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Georgieva, E. M.; Huang, W.; Heaps, W. S.

    2012-01-01

    A portable remote sensing system for precision column measurements of methane has been developed, built and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera, and has the potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared Solutions, which incorporates an uncooled InGaAs detector array working up to 1.7 micrometers. Custom software was developed in addition to the basic graphical user interface X-Control provided by the company to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes of the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.

  7. VizieR Online Data Catalog: Imaging observations of iPTF 13ajg (Vreeswijk+, 2014)

    NASA Astrophysics Data System (ADS)

    Vreeswijk, P. M.; Savaglio, S.; Gal-Yam, A.; De Cia, A.; Quimby, R. M.; Sullivan, M.; Cenko, S. B.; Perley, D. A.; Filippenko, A. V.; Clubb, K. I.; Taddia, F.; Sollerman, J.; Leloudas, G.; Arcavi, I.; Rubin, A.; Kasliwal, M. M.; Cao, Y.; Yaron, O.; Tal, D.; Ofek, E. O.; Capone, J.; Kutyrev, A. S.; Toy, V.; Nugent, P. E.; Laher, R.; Surace, J.; Kulkarni, S. R.

    2017-08-01

    iPTF 13ajg was imaged with the Palomar 48 inch (P48) Oschin iPTF survey telescope equipped with a 12kx8k CCD mosaic camera (Rahmer et al. 2008SPIE.7014E..4YR) in the Mould R filter, the Palomar 60 inch and CCD camera (Cenko et al. 2006PASP..118.1396C) in Johnson B and Sloan Digital Sky Survey (SDSS) gri, the 2.56 m Nordic Optical Telescope (on La Palma, Canary Islands) with the Andalucia Faint Object Spectrograph and Camera (ALFOSC) in SDSS ugriz, the 4.3 m Discovery Channel Telescope (at Lowell Observatory, Arizona) with the Large Monolithic Imager (LMI) in SDSS r, and with LRIS (Oke et al. 1995PASP..107..375O) and the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE; McLean et al. 2012SPIE.8446E..0JM), both mounted on the 10 m Keck-I telescope (on Mauna Kea, Hawaii), in g and Rs with LRIS and J and Ks with MOSFIRE. (1 data file).

  8. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  9. Elemental mapping and microimaging by x-ray capillary optics.

    PubMed

    Hampai, D; Dabagov, S B; Cappuccio, G; Longoni, A; Frizzi, T; Cibin, G; Guglielmotti, V; Sala, M

    2008-12-01

    Recently, many experiments have highlighted the advantage of using polycapillary optics for x-ray fluorescence studies. We have developed a special confocal scheme for micro x-ray fluorescence measurements that enables us to obtain not only elemental mapping of the sample but also simultaneously its own x-ray imaging. We have designed the prototype of a compact x-ray spectrometer characterized by a spatial resolution of less than 100 microm for fluorescence and less than 10 microm for imaging. A couple of polycapillary lenses in a confocal configuration together with a silicon drift detector allow elemental studies of extended samples (approximately 3 mm) to be performed, while a CCD camera makes it possible to record an image of the same samples with 6 microm spatial resolution, which is limited only by the pixel size of the camera. By inserting a compound refractive lens between the sample and the CCD camera, we hope to develop an x-ray microscope for more enlarged images of the samples under test.

  10. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, in which a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
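Demultiplexing such a spectrally shuttered recording reduces to reading out the color channels of each 3CCD frame in the order the LED flashes fired. A minimal sketch, assuming an R→G→B pulse order (the helper name and ordering are illustrative, not the authors' code):

```python
import numpy as np

def demultiplex_frames(rgb_frames, pulse_order=("r", "g", "b")):
    """Split 3CCD color frames into a temporally ordered grayscale sequence.

    Each color channel records a different LED flash, so one RGB frame
    yields three consecutive time samples (hypothetical channel order).
    """
    channel_index = {"r": 0, "g": 1, "b": 2}
    sequence = []
    for frame in rgb_frames:
        for color in pulse_order:
            sequence.append(frame[..., channel_index[color]])
    return sequence

# Two synthetic RGB frames -> a six-frame high-speed sequence
frames = [np.dstack([np.full((4, 4), 10 * (3 * i + c)) for c in range(3)])
          for i in range(2)]
seq = demultiplex_frames(frames)
assert len(seq) == 6
```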

  11. Night airglow in RGB mode

    NASA Astrophysics Data System (ADS)

    Mikhalev, Aleksandr; Podlesny, Stepan; Stoeva, Penka

    2016-09-01

    To study the dynamics of the upper atmosphere, we consider results of night-sky photometry using a color CCD camera, taking into account the night airglow and the features of its spectral composition. We use night airglow observations for 2010-2015, obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) with a camera based on the KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg cm⁻² s⁻¹. In addition, we determine seasonal variations in the night sky luminosities in the R, G, B channels: the luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We also consider geophysical phenomena and their optical effects in the R, G, B channels. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate a quantitative relationship between enhanced signals in the R and G channels and increased intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.
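Per-channel sky photometry of this kind amounts to averaging each color plane and applying a calibration factor. A hedged sketch, with a hypothetical `scale` constant standing in for the counts-to-physical-units calibration:

```python
import numpy as np

def channel_brightness(rgb_image, scale=1.0):
    """Mean brightness in each color channel of an airglow frame.

    `scale` is a hypothetical calibration factor converting camera
    counts to physical units (e.g., erg cm^-2 s^-1 per count).
    """
    return {c: float(rgb_image[..., i].mean()) * scale
            for i, c in enumerate("RGB")}

# Synthetic frame: enhanced red channel, as during a 630 nm emission event
night_sky = np.zeros((8, 8, 3))
night_sky[..., 0] = 2.0
night_sky[..., 1] = 1.0
b = channel_brightness(night_sky)
assert b == {"R": 2.0, "G": 1.0, "B": 0.0}
```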

  12. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
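The essence of the method is that a single 3CCD frame carries two views, separable by channel, whose relative displacement can then be matched. The sketch below assumes the red/blue channel assignment from the abstract and substitutes a bare integer-pixel FFT cross-correlation for full subpixel DIC matching:

```python
import numpy as np

def split_views(color_frame):
    """Recover the two mirror views multiplexed into one 3CCD frame.

    The dichroic filter routes one perspective into the red channel and
    the other into the blue channel (hypothetical channel assignment).
    """
    return color_frame[..., 0], color_frame[..., 2]

def integer_shift(ref, cur):
    """Estimate the integer pixel shift between two views by FFT
    cross-correlation, a bare-bones stand-in for full DIC matching."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Map wrap-around peaks to signed shifts
    return (dy if dy <= ny // 2 else dy - ny,
            dx if dx <= nx // 2 else dx - nx)

rng = np.random.default_rng(0)
view = rng.random((32, 32))
shifted = np.roll(view, (2, 3), axis=(0, 1))   # known rigid displacement
assert integer_shift(shifted, view) == (2, 3)
```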

  13. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  14. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
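The spot-isolation and ranging steps described in the abstract can be sketched as a frame difference followed by an intensity-weighted centroid and a pinhole-model triangulation. The threshold, the baseline/focal-length model, and all names below are illustrative assumptions:

```python
import numpy as np

def laser_spot_centroid(before, after, threshold=10):
    """Isolate the laser spot by differencing frames taken before and
    after illumination, then return its centroid in pixel coordinates."""
    diff = after.astype(float) - before.astype(float)
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    weights = diff[mask]
    return (float(np.average(ys, weights=weights)),
            float(np.average(xs, weights=weights)))

def range_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulated range for an offset laser (hypothetical pinhole
    model: range = baseline * focal_length / disparity)."""
    return baseline_m * focal_px / disparity_px

before = np.zeros((64, 64))
after = before.copy()
after[20:23, 30:33] = 100.0          # synthetic laser spot
cy, cx = laser_spot_centroid(before, after)
assert (round(cy), round(cx)) == (21, 31)
assert range_from_disparity(disparity_px=10, baseline_m=0.1, focal_px=500) == 5.0
```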

  15. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.
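A flat-field characterization of this kind boils down to dark-subtracting averaged uniform-illumination frames and normalizing, so pixel sensitivity variations appear as deviations from 1.0. A minimal sketch with synthetic frame counts and noise levels (not the calibration procedure of the paper):

```python
import numpy as np

def sensitivity_map(flat_frames, dark_frames):
    """Pixel sensitivity map from flat-field and dark exposures.

    Dark-subtract, average, and normalize by the mean response, so that
    deviations from 1.0 reveal pixel-to-pixel sensitivity variations.
    """
    flat = np.mean(flat_frames, axis=0) - np.mean(dark_frames, axis=0)
    return flat / flat.mean()

# Synthetic uniform sensor: only noise should remain after normalization
rng = np.random.default_rng(1)
darks = rng.normal(100, 1, size=(8, 16, 16))
flats = rng.normal(1100, 5, size=(8, 16, 16))
smap = sensitivity_map(flats, darks)
assert abs(smap.mean() - 1.0) < 1e-9
assert smap.std() < 0.05
```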

  16. Software and hardware complex for observation of star occultations by asteroids

    NASA Astrophysics Data System (ADS)

    Karbovsky, V.; Kleshchonok, V.; Buromsky, M.

    2017-12-01

    The preparation of the program for observation of star occultations by asteroids on the AZT-2 telescope started in 2016. A new method for registration of occultations with a CCD camera in the synchronous transfer mode was proposed and developed. A special program was written to control the CCD camera and record images during such observations. The speed of image transfer can vary within wide limits, which makes it possible to carry out observations over a wide range of stellar magnitudes. The AZT-2 telescope is used, which has the largest mirror diameter in Kiev (D = 0.7 m, F = 10.5 m). A 3-fold optical reducer was produced, providing a 10 arcminute field of view with the Apogee Alta U47 CCD camera and an equivalent telescope focal length of 3.2 m. The results of test observations are presented. The program is implemented jointly by the Main Astronomical Observatory of the National Academy of Sciences of Ukraine and the Astronomical Observatory of the Taras Shevchenko National University of Kyiv. Regular observations of star occultations by asteroids are planned with the help of this complex.

  17. Research on automatic Hartmann test of membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhong, Xing; Jin, Guang; Liu, Chunyu; Zhang, Peng

    2010-10-01

    Electrostatic membrane mirrors are ultra-lightweight and can reach large diameters more easily than traditional optical elements, so their development and use is the trend for future large mirrors. In order to research the control method of a statically stretched membrane mirror, its surface figure must be tested. However, the membrane mirror's shape is continually changed by the variable voltages on the electrodes, and the optical properties of the membrane materials used in our experiment are poor, so it is difficult to test the mirror by interferometry or the null compensator method. To solve this problem, an automatic optical test procedure for the membrane mirror was designed based on the Hartmann screen method. The optical path includes a point light source, a CCD camera, a splitter, and a diffuse transmittance screen. The spot positions on the diffuse transmittance screen are pictured by the CCD camera connected to a computer, and image segmentation and centroid solving are processed automatically. The CCD camera's lens distortion was measured, and correction coefficients are applied to eliminate the spot-position recording error caused by lens distortion. To process the low-sampling Hartmann test results, Zernike polynomial fitting is applied to smooth the wavefront, so that the low-frequency error of the membrane mirror can be measured. Errors affecting the test accuracy are also analyzed in this paper. The method proposed in this paper provides a reference for surface shape detection in membrane mirror research.
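The centroid-solving step of a Hartmann test can be sketched as intensity-weighted centroiding in a window around each rough spot position. The window size and helper name below are assumptions, and the sketch leaves out segmentation and the Zernike fit:

```python
import numpy as np

def refine_centroid(image, y0, x0, win=5):
    """Refine a Hartmann spot position by intensity-weighted centroiding
    inside a window around a rough integer estimate (y0, x0)."""
    half = win // 2
    patch = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    total = patch.sum()
    return float((ys * patch).sum() / total), float((xs * patch).sum() / total)

# Synthetic, slightly asymmetric spot: centroid lands between the pixels
img = np.zeros((32, 32))
img[10, 12] = 4.0
img[10, 13] = 4.0
cy, cx = refine_centroid(img, 10, 12)
assert (cy, cx) == (10.0, 12.5)
```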

  18. 50 CFR 216.155 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...

  19. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  20. New low noise CCD cameras for Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Kasprowicz, G.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Mankiewicz, L.; Pozniak, K.; Romaniuk, R.; Sitek, P.; Sokolowski, M.; Sulej, R.; Uzycki, J.; Wrochna, G.

    2006-10-01

    Modern research trends require observation of fainter and fainter astronomical objects over large areas of the sky. This implies the use of systems with high temporal and optical resolution together with computer-based data acquisition and processing, which is why charge-coupled devices (CCDs) have become so popular: they offer quick picture conversion with much better quality than film-based technologies. This work is a theoretical and practical study of a CCD-based picture acquisition system. The system was optimized for the "Pi of the Sky" project, but it can be adapted to other professional astronomical research. The work covers picture conversion, signal acquisition, data transfer, and the mechanical construction of the device.

  1. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from that of CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor therefore includes pixel-to-pixel variability in read noise, electronic gain, offset, and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of offset, dark current, read noise, linearity, photoresponse non-uniformity, and variance for individual pixels of standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination, from dark conditions through multiple low light levels of 20 to 1,000 photons / pixel per frame to higher light levels. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
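The per-pixel statistics and the mean-variance (photon transfer) relation underlying such characterizations can be sketched as follows; the two-light-level gain estimate is a textbook simplification, not the authors' full procedure:

```python
import numpy as np

def pixel_statistics(frame_stack):
    """Per-pixel mean and temporal variance maps from a stack of
    repeated frames under identical illumination."""
    return frame_stack.mean(axis=0), frame_stack.var(axis=0, ddof=1)

def gain_from_photon_transfer(mean_lo, var_lo, mean_hi, var_hi):
    """Conversion gain (DN per electron) from the mean-variance slope at
    two light levels: shot-noise variance grows linearly with signal."""
    return (var_hi - var_lo) / (mean_hi - mean_lo)

# Synthetic Poisson-limited pixels at gain 2 DN/e-: variance = 2 * mean
rng = np.random.default_rng(2)
lo = 2.0 * rng.poisson(50, size=(4000, 2, 2))
hi = 2.0 * rng.poisson(500, size=(4000, 2, 2))
m_lo, v_lo = pixel_statistics(lo)
m_hi, v_hi = pixel_statistics(hi)
g = gain_from_photon_transfer(m_lo.mean(), v_lo.mean(), m_hi.mean(), v_hi.mean())
assert abs(g - 2.0) < 0.2
```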

  2. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1-2 h for the GoPro® and 3-5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  3. Development of X-ray CCD camera based X-ray micro-CT system

    NASA Astrophysics Data System (ADS)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    The availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible to perform high resolution microtomography studies outside synchrotron facilities. In this paper, we present work on the use of an external shutter in a high resolution microtomography system that uses an X-ray CCD camera as the detector. During micro computed tomography experiments the X-ray source is continuously on, and owing to the readout mechanism of the CCD detector electronics, the detector also registers photons reaching it during the readout period. This introduces a shadow-like pattern in the image known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system incorporates a synchronized shutter just in front of the X-ray source, which is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.
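For comparison, smear is sometimes also reduced in software by estimating the per-column smear level from rows that image only empty space and subtracting it. A hedged sketch of that alternative (the hardware shutter in the paper avoids the problem at the source; row counts and the smear model here are synthetic):

```python
import numpy as np

def desmear(image, dark_rows=4):
    """Estimate the readout smear per column from top rows known to be
    empty and subtract it from the whole frame. A software stand-in for
    the hardware shutter used in the actual system."""
    smear = image[:dark_rows].mean(axis=0)
    return image - smear[np.newaxis, :]

clean = np.zeros((16, 8))
clean[8:12, 3] = 100.0                               # bright feature in column 3
smear_level = 0.1 * clean.sum(axis=0, keepdims=True)  # smear ∝ column sum
smeared = clean + smear_level                         # streak down the column
fixed = desmear(smeared)
assert np.allclose(fixed, clean)
```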

  4. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  5. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains an incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced, and we can demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build the correspondence between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method works when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
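At its simplest, synthesizing a virtual viewpoint from matched feature points is a linear interpolation of their positions between the two neighboring views. This sketch captures only that step, under assumed names, and omits the warping and blending a full system needs:

```python
import numpy as np

def interpolate_viewpoint(points_left, points_right, t):
    """Positions of matched feature points as seen from a virtual camera
    a fraction t of the way between two neighboring real cameras
    (linear interpolation; a simplification of a full synthesis method)."""
    return (1.0 - t) * points_left + t * points_right

# Two matched points in the left and right neighbor views (pixel coords)
left = np.array([[10.0, 20.0], [30.0, 40.0]])
right = np.array([[14.0, 20.0], [38.0, 44.0]])
mid = interpolate_viewpoint(left, right, 0.5)   # halfway viewpoint
assert np.allclose(mid, [[12.0, 20.0], [34.0, 42.0]])
```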

  6. Crystallization of the collagen-like polypeptide (PPG)10 aboard the International Space Station. 1. Video observation.

    PubMed

    Vergara, Alessandro; Corvino, Ermanno; Sorrentino, Giosué; Piccolo, Chiara; Tortora, Alessandra; Carotenuto, Luigi; Mazzarella, Lelio; Zagari, Adriana

    2002-10-01

    Single chains of the collagen model polypeptide with sequence (Pro-Pro-Gly)(10), hereafter referred to as (PPG)(10), aggregate to form rod-shaped triple helices. Crystals of (PPG)(10) were grown in the Advanced Protein Crystallization Facility (APCF) both onboard the International Space Station (ISS) and on Earth. The experiments allow, for the first time, the direct comparison of four different crystallization environments: solution in microgravity (µg), agarose gel in µg, solution on Earth, and gel on Earth. Both on board and on the ground, crystal growth was monitored by a CCD video camera. The image analysis provided information on the spatial distribution of the crystals, their movement, and their growth rate. The analysis of the distribution of crystals reveals that the crystallization process occurs as it does in batch conditions. Slow motions were observed onboard the ISS. Unlike in Space Shuttle experiments, the crystals onboard the ISS moved coherently and followed parallel trajectories. Growth rate and induction time are very similar both in gel and in solution, suggesting that the crystal growth rate is controlled by the kinetics at the interface under the experimental conditions used. These results provide the first data on the crystallogenesis of (PPG)(10), a representative member of the non-globular, rod-like proteins.

  7. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  8. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.

    2016-10-09

    We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchroton radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  9. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  10. A New Observatory for Eastern College: A Dream Realized

    NASA Astrophysics Data System (ADS)

    Bradstreet, D. H.

    1996-12-01

    The Eastern College Observatory began as a rooftop observing deck with one Celestron 8 telescope in 1976 as the workhorse instrument of the observational astronomy lab within the core curriculum. For 20 years the observing deck served as the crude observatory, being augmented through the years by other computerized Celestron 8's and a 17.5" diameter Dobsonian with computerized setting circles. The lab consisted primarily of visual observations and astrophotography. In 1987 plans were set into motion to raise money to build a permanent Observatory on the roof of the main classroom building. Fundraising efforts included three Jog-A-Thons (raising more than $40,000) and many donations from individuals and foundations. The fundraising was completed in 1996 and a two telescope observatory was constructed in the summer of 1996 complete with warm room, CCD cameras, computers, spectrograph, video network, and computerized single channel photometer. The telescopes are computerized 16" diameter Meade LX200 Schmidt-Cassegrains, each coupled to Gateway Pentium Pro 200 MHz computers. SBIG ST-8 CCD cameras were also secured for each telescope and an Optec SSP-7 photometer and Optomechanics Research 10C Spectrograph were also purchased. A Daystar H-alpha solar filter and Thousand Oaks visual light solar filter have expanded the Observatory's functionality to daytime observing as well. This is especially useful for the thousands of school children who frequent the Planetarium each year. The Observatory primarily serves the core astronomy lab where students must observe and photograph a prescribed number of celestial objects in a semester. Advanced students can take directed studies where they conduct photometry on eclipsing binaries or other variable stars or search for new asteroids. In addition, the Observatory and Planetarium are open to the public. 
Interested members of the community can reserve time on the telescopes and receive training and supervision from lab assistants. The lessons learned from building the Observatory as well as structural plans, equipment and curriculum development will be discussed in this poster.

  11. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial off-the-shelf (COTS) high definition video cameras mounted on the exterior of the International Space Station, allowing the cameras to be tested in the space environment. The HDEV cameras transmit imagery continuously to an encoder that sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere of dry nitrogen to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation in imagery quality has been observed. The HDEV payload continues to operate, live-streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  12. Instrumentation development for space debris optical observation system in Indonesia: Preliminary results

    NASA Astrophysics Data System (ADS)

    Dani, Tiar; Rachman, Abdul; Priyatikanto, Rhorom; Religia, Bahar

    2015-09-01

    The increasing amount of space junk in orbit has raised the chances of debris falling in the Indonesian region. So far, three pieces of rocket-body debris have been found, in Bengkulu, Gorontalo, and Lampung. LAPAN has successfully developed software for monitoring space debris that passes over Indonesia at altitudes below 200 km. To support the software-based system, a hardware-based system built around optical instruments has been developed. It has been under development since early 2014 and consists of two subsystems: a telescopic system and a wide-field system. The telescopic system uses CCD cameras and a reflecting telescope with relatively high sensitivity. The wide-field system uses DSLR cameras, binoculars, and a combination of a CCD with a DSLR lens. Methods and preliminary results of the systems are presented.

  13. Image Information Obtained Using a Charge-Coupled Device (CCD) Camera During an Immersion Liquid Evaporation Process for Measuring the Refractive Index of Solid Particles.

    PubMed

    Niskanen, Ilpo; Sutinen, Veijo; Thungström, Göran; Räty, Jukka

    2018-06-01

    The refractive index is a fundamental physical property of a medium, which can be used for identification and purity assessment. Here we describe a refractive index measurement technique that determines the refractive indices of different solid particles simultaneously by monitoring the transmittance of light through a suspension with a charge-coupled device (CCD) camera. An important feature of the measurement is the liquid evaporation process used for refractive index matching of the solid particles and the immersion liquid; this was realized by using a pair of volatile and non-volatile immersion liquids. In this study, the refractive indices of calcium fluoride (CaF2) and barium fluoride (BaF2) were determined using the proposed method.
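The index-matching readout reduces to locating the transmittance maximum over the evaporation sweep: when the liquid index equals the particle index, scattering vanishes and transmittance peaks. A sketch with synthetic data (the particle index value and peak width are illustrative):

```python
import numpy as np

def matched_index(liquid_indices, transmittances):
    """Refractive index of the particles, taken as the liquid index at
    which suspension transmittance peaks (index-matching condition)."""
    return liquid_indices[int(np.argmax(transmittances))]

# Synthetic evaporation sweep: liquid index rises through the particle's
n_liquid = np.linspace(1.40, 1.50, 101)
n_particle = 1.434                      # illustrative value near CaF2
transmission = np.exp(-((n_liquid - n_particle) / 0.01) ** 2)
assert abs(matched_index(n_liquid, transmission) - n_particle) < 1e-3
```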

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conder, A.; Mummolo, F. J.

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  15. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is, about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  16. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil, neutral density filters allowing the use of a much brighter star as a target or a guide, a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.

  17. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  18. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  19. Caught on Camera: Special Education Classrooms and Video Surveillance

    ERIC Educational Resources Information Center

    Heintzelman, Sara C.; Bathon, Justin M.

    2017-01-01

    In Texas, state policy anticipates that installing video cameras in special education classrooms will decrease student abuse inflicted by teachers. Lawmakers assume that collecting video footage will prevent teachers from engaging in malicious actions and prosecute those who choose to harm children. At the request of a parent, Section 29.022 of…

  20. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
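    The zero-point calibration described above ties instrumental magnitudes to reference-star magnitudes in the camera bandpass. A minimal sketch of that fit, assuming a simple mean-offset zero-point and using invented star values (not the MEO's data or pipeline):

```python
import numpy as np

def fit_zero_point(instrumental_flux, catalog_mag):
    """Least-squares zero-point zp such that catalog_mag ~ -2.5*log10(flux) + zp.
    For a pure offset model the least-squares solution is the mean residual."""
    inst_mag = -2.5 * np.log10(np.asarray(instrumental_flux, dtype=float))
    return float(np.mean(np.asarray(catalog_mag, dtype=float) - inst_mag))

# Toy reference stars generated with a true zero-point of 18.0 mag
true_zp = 18.0
catalog = np.array([5.0, 6.5, 7.2, 8.1])          # catalog magnitudes
flux = 10 ** (-0.4 * (catalog - true_zp))          # instrumental fluxes (counts)
print(round(fit_zero_point(flux, catalog), 6))     # → 18.0
```

In practice each star would carry a magnitude uncertainty and the fit would be weighted accordingly; the unweighted mean is the simplest illustrative case.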

  1. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    starter (cold start) X X Fire suppression X Fording control X Fuel control X Fuel tank selector X Garage toggle X Gear selector X X X X Hazard warning...optic sensors Sensor switch Video Radar IR Thermal imaging system Image intensifier Laser ranger Video camera selector Forward Stereo Rear Sensor control...

  2. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typically T ≈ -40 °C, p ≈ 0.1 atm), and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The modifications include replacement of electrical components with MIL-SPEC or industrial-grade components, various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  3. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object is in partial or full view in one camera while fully visible in another. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each camera's stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
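    The per-camera step — flagging motion anomalies between time-correlated frames and tagging the detected object — can be sketched with simple frame differencing. The frames, threshold, and bounding-box tagging below are illustrative toy choices, not the authors' algorithm:

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Flag pixels whose absolute inter-frame difference exceeds a threshold:
    the basic per-camera motion (anomaly) detection step."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh

def bounding_box(mask):
    """Tag the detected motion region with a bounding box (x0, y0, x1, y1),
    or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Two toy 8-bit frames: a bright 4x4 block moves 3 pixels to the right
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10:14, 5:9] = 200
curr[10:14, 8:12] = 200
print(bounding_box(moving_mask(prev, curr)))  # → (5, 10, 11, 13)
```

Note that the box spans both the vacated and newly occupied pixels; a real tracker would separate appearance and disappearance regions and then register the object across the two camera views.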

  4. [Multimedia (visual collaboration) brings true nature of human life].

    PubMed

    Tomita, N

    2000-03-01

    Videoconferencing, a form of high-quality visual collaboration, is bringing multimedia into society. Multimedia, high-quality media such as TV broadcast, looks expensive because it requires a broadband network with 100-200 Mbps bandwidth or 3,700 analog telephone lines. However, thanks to the existing digital line called N-ISDN (Narrowband Integrated Services Digital Network) and PictureTel's audio/video compression technologies, it becomes far less expensive. N-ISDN provides 128 Kbps bandwidth, over twice that of an analog line. PictureTel's technology instantly compresses the audio/video signal to 1/1,000 of its size. This means that, with ISDN and PictureTel technology, multimedia is realized over even a single ISDN line. This will allow a doctor to remotely meet face-to-face with a medical specialist or patients, to conduct interviews and physical examinations, review records, and prescribe treatments. Bonding multiple ISDN lines will further improve video quality, enabling remote surgery. A surgeon can perform an operation on an internal organ by projecting motion video from an endoscope's CCD camera to a large display monitor. PictureTel also provides advanced technologies for eliminating background noise generated by surgical knives or scalpels during surgery, allowing the sound of breath or heartbeat to be clearly transmitted to the remote site. Thus, multimedia eliminates the barrier of distance, enabling people to stay at home, or anywhere in the world, and undergo up-to-date medical treatment by experts. This will reduce medical costs and allow people to live in the suburbs, with less pollution, closer to nature. People will foster a more open and collaborative environment by participating in local activities. Such a community-oriented lifestyle will atone for the mass-consumption, materialistic economy of the past, and bring true happiness and welfare into our lives after all.

  5. Optical sample-position sensing for electrostatic levitation

    NASA Technical Reports Server (NTRS)

    Sridharan, G.; Chung, S.; Elleman, D.; Rhim, W. K.

    1989-01-01

    A comparative study is conducted of optical position-sensing techniques applicable to sample-levitation systems under micro-g conditions. CCD sensors are compared with one- and two-dimensional position detectors used in electrostatic particle levitation. In principle, the CCD camera method can be improved from current resolution levels of 200 microns through the incorporation of a higher-pixel-count device and a more complex digital signal processor interface. Nevertheless, the one-dimensional position detectors exhibited superior, better-than-one-micron resolution.

  6. A comparison of imaging methods for use in an array biosensor

    NASA Technical Reports Server (NTRS)

    Golden, Joel P.; Ligler, Frances S.

    2002-01-01

    An array biosensor has been developed which uses an actively-cooled, charge-coupled device (CCD) imager. In an effort to save money and space, a complementary metal-oxide semiconductor (CMOS) camera and photodiode were tested as replacements for the cooled CCD imager. Different concentrations of CY5 fluorescent dye in glycerol were imaged using the three different detection systems with the same imaging optics. Signal discrimination above noise was compared for each of the three systems.

  7. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    PubMed

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system is connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method and the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method, and found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
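    As a rough one-dimensional illustration of the difference-of-Gaussians edge detection idea (not the authors' implementation — the kernel widths and the simulated light-output profile are invented), the distal edge can be located at the zero-crossing of the DOG response between its two extrema:

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth a 1-D profile with a normalized Gaussian kernel,
    padding with edge values to avoid boundary artifacts."""
    radius = int(4 * sigma)
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    k /= k.sum()
    xp = np.pad(x, radius, mode="edge")
    return np.convolve(xp, k, mode="same")[radius:-radius]

def dog_edge_position(profile, depths, sigma_narrow=2.0, sigma_wide=6.0):
    """Difference-of-Gaussians edge detection: the DOG response crosses
    zero at the edge, between its positive and negative lobes."""
    dog = gaussian_smooth(profile, sigma_narrow) - gaussian_smooth(profile, sigma_wide)
    lo, hi = sorted((int(np.argmax(dog)), int(np.argmin(dog))))
    return depths[lo + int(np.argmin(np.abs(dog[lo:hi + 1])))]

# Toy light-output profile: a plateau with a distal fall-off at 150 mm,
# sampled at 0.2 mm per pixel as quoted in the abstract
depths = np.linspace(0.0, 200.0, 1001)
profile = 1.0 / (1.0 + np.exp((depths - 150.0) / 2.0))
print(round(dog_edge_position(profile, depths), 3))  # → 150.0
```

A real scintillator image would first be projected onto the beam axis and corrected for optics and quenching before this step; the sketch only shows why a DOG filter makes a robust edge locator.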

  8. Dynamic light scattering microscopy

    NASA Astrophysics Data System (ADS)

    Dzakpasu, Rhonda

    An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics we show theoretically that, within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance defining the average distance between constructive and destructive interference in the image plane is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the use of the rate of the column-by-column readout transfer process as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison to conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information regarding the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations, on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical resolution scale and provides a new kind of spatial contrast.

  9. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system is connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method and the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  10. Surface temperature/heat transfer measurement using a quantitative phosphor thermography system

    NASA Technical Reports Server (NTRS)

    Buck, G. M.

    1991-01-01

    A relative-intensity phosphor thermography technique developed for surface heating studies in hypersonic wind tunnels is described. A direct relationship between relative emission intensity and phosphor temperature is used for quantitative surface temperature measurements in time. The technique provides global surface temperature-time histories using a 3-CCD (charge-coupled device) video camera and a digital recording system. A history of the technique's development at Langley is discussed. The latest developments include a phosphor mixture for a greater range of temperature sensitivity and the use of castable ceramics for inexpensive test models. A method of calculating surface heat transfer from thermal image data in blowdown wind tunnels is included in an appendix, with an analysis of material thermal heat-transfer properties. Results from tests in the Langley 31-Inch Mach 10 Tunnel are presented for a ceramic orbiter configuration and a four-inch-diameter hemisphere model. Data include windward heating for bow-shock/wing-shock interactions on the orbiter wing surface, and a comparison with prediction for the hemisphere heating distribution.
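    The core of the relative-intensity technique is inverting a monotonic calibration curve from relative emission intensity to temperature. A minimal sketch using linear interpolation; the calibration numbers below are hypothetical, not Langley data:

```python
import numpy as np

def intensity_to_temperature(rel_intensity, cal_intensity, cal_temp):
    """Map a measured relative emission intensity to surface temperature
    via a monotonic calibration curve (piecewise-linear interpolation)."""
    return np.interp(rel_intensity, cal_intensity, cal_temp)

# Hypothetical calibration: phosphor intensity ratio falls as temperature rises
cal_temp = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # K
cal_int = np.array([1.00, 0.72, 0.45, 0.24, 0.10])         # relative intensity
# np.interp requires increasing x, so reverse the decreasing calibration curve
print(intensity_to_temperature(0.45, cal_int[::-1], cal_temp[::-1]))  # → 400.0
```

Applied pixel-by-pixel to each video frame, this lookup yields the global temperature-time histories from which surface heat transfer is then computed.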

  11. On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras

    NASA Astrophysics Data System (ADS)

    Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.

    2017-11-01

    PLAnetary Transits and Oscillations of stars (PLATO) is the third European Space Agency (ESA) medium-class mission in ESA's Cosmic Vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. Each camera is made up of a telescope optical unit and four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are 2 fast cameras with high read-out cadence (2.5 s) for magnitude ~ 4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in orbit. This paper discusses the initial development of the experimental arrangement, test procedures, and current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current, and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed, which will enable the performance of the CCDs and their associated N-FEEs to be monitored during the mission; this will include measurements of CTI, giving an indication of the impact of radiation damage in the CCDs.

  12. Back-illuminate fiber system research for multi-object fiber spectroscopic telescope

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Liu, Zhigang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru

    2016-07-01

    Using parallel-controlled fiber positioners as the spectroscopic receiver is an efficient observation system for spectral surveys; it has been used in LAMOST and has been proposed for CFHT and the rebuilt Mayall telescope. During telescope observation, the position of each fiber strongly influences how efficiently light is coupled into the fiber and delivered to the spectrograph. When the fibers are back-illuminated at the spectrograph end, they emit light at the positioner end, so CCD cameras can capture images of the fiber tip positions covering the focal plane, calculate precise position information by the light centroid method, and feed it back to the control system. After many years of research, back-illuminated fiber measurement has proven the best method to acquire precise fiber positions. In LAMOST, a fiber back-illumination system was developed and combined with the low-resolution spectrograph instruments. It provides uniform light output to the fibers, meets the requirements for CCD camera measurement, and is controlled by the high-level observation system, which can shut it down during telescope observation. This paper introduces the design of the back-illumination system and tests of different light sources. After optimization, the illumination system is comparable to an integrating sphere and meets the conditions for fiber position measurement.
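    The light centroid step can be sketched as an intensity-weighted mean over a back-illuminated fiber spot. The spot below is synthetic (a Gaussian on a small frame), not LAMOST data, and the background handling is deliberately simplistic:

```python
import numpy as np

def spot_centroid(img, bg=0.0):
    """Intensity-weighted centroid (light centroid method) of a fiber
    back-illumination spot; returns (x, y) in pixel coordinates."""
    z = np.clip(img.astype(float) - bg, 0.0, None)  # subtract background floor
    ys, xs = np.indices(z.shape)                    # row (y) and column (x) grids
    total = z.sum()
    return (xs * z).sum() / total, (ys * z).sum() / total

# Synthetic Gaussian spot centred at (x, y) = (12.25, 7.5) on a 24x16 frame
ys, xs = np.indices((16, 24))
img = np.exp(-((xs - 12.25) ** 2 + (ys - 7.5) ** 2) / (2 * 2.0 ** 2))
cx, cy = spot_centroid(img)
print(round(cx, 2), round(cy, 2))  # → 12.25 7.5
```

The sub-pixel x-result illustrates why centroiding against a uniform back-illumination source achieves positioning precision well below the camera's pixel scale.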

  13. Phase shifting white light interferometry using colour CCD for optical metrology and bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2018-02-01

    Phase shifting white light interferometry (PSWLI) has been widely used for optical metrology applications because of its precision, reliability, and versatility. White light interferometry using a monochrome CCD makes the measurement process slow for metrology applications. WLI integrated with a Red-Green-Blue (RGB) CCD camera is finding applications in the fields of optical metrology and bio-imaging. Wavelength-dependent refractive index profiles of biological samples were computed from colour white light interferograms. In recent years, whole-field refractive index profiles of red blood cells (RBCs), onion skin, fish cornea, etc. were measured from RGB interferograms. In this paper, we discuss the bio-imaging applications of colour-CCD-based white light interferometry. The approach makes the measurement faster, easier, cost-effective, and even dynamic by using single-fringe analysis methods, for industrial applications.

  14. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    PubMed

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  15. Mosaic CCD method: A new technique for observing dynamics of cometary magnetospheres

    NASA Technical Reports Server (NTRS)

    Saito, T.; Takeuchi, H.; Kozuba, Y.; Okamura, S.; Konno, I.; Hamabe, M.; Aoki, T.; Minami, S.; Isobe, S.

    1992-01-01

    On April 29, 1990, the plasma tail of Comet Austin was observed with a CCD camera on the 105-cm Schmidt telescope at the Kiso Observatory of the University of Tokyo. The area of the CCD used in this observation is only about 1 sq cm. When this CCD is used on the 105-cm Schmidt telescope at the Kiso Observatory, that area corresponds to a narrow square field of view of 12 × 12 arcmin. By comparison with the photograph of Comet Austin taken by Numazawa (personal communication) on the same night, we see that only a small part of the plasma tail can be photographed at one time with the CCD. However, by shifting the view on the CCD after each exposure, we succeeded in imaging the entire length of the cometary magnetosphere of 1.6 × 10^6 km. This new technique is called 'the mosaic CCD method'. In order to study the dynamics of cometary plasma tails, seven frames of the comet from the head to the tail region were imaged twice with the mosaic CCD method and two sets of images were obtained. Six microstructures, including arcade structures, were identified in both sets of images. Sketches of the plasma tail including microstructures are included.

  16. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  17. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  18. Review of intelligent video surveillance with single camera

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Fan, Jiu-lun; Wang, DianWei

    2012-01-01

    Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements in real-world scenarios, and proposes potential solutions to reduce this gap.

  19. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect when designing such a system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio capture, combining audio and video and saving the result in .mp4 format, a battery sized for 8 hours of continuous recording, and security. The prototype is implemented using a Raspberry Pi Model B.
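
    One of the design challenges listed above, sizing the battery and storage for 8 hours of continuous 1080p/30 fps recording, reduces to simple arithmetic. A back-of-the-envelope sketch (the 8 Mbit/s H.264 bitrate is an assumed typical value, not a figure from this record):

```python
# Estimate storage needed for continuous .mp4 (H.264) recording.
# The bitrate figure is an assumed typical value for 1080p30, not from the paper.
ASSUMED_BITRATE_BPS = 8_000_000  # ~8 Mbit/s

def storage_gb(hours: float, bitrate_bps: int = ASSUMED_BITRATE_BPS) -> float:
    """Return approximate storage in gigabytes for `hours` of recording."""
    seconds = hours * 3600
    total_bytes = bitrate_bps / 8 * seconds
    return total_bytes / 1e9

print(f"8 h shift: ~{storage_gb(8):.1f} GB")  # ~28.8 GB at 8 Mbit/s
```

    At the assumed bitrate an 8-hour shift comes to roughly 29 GB, which drives both the storage card capacity and, indirectly, the battery sizing.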

  20. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  1. A stroboscopic technique for using CCD cameras in flow visualization systems for continuous viewing and stop action photography

    NASA Technical Reports Server (NTRS)

    Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.

    1992-01-01

    A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.

  2. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  3. Quasi-Speckle Measurements of Close Double Stars With a CCD Camera

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2017-01-01

    CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on “lucky imaging” (selecting a very small percentage of the best frames of a larger frame set so as to get the best “frozen” atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.

  4. SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) on the South African Astronomical Observatory's 74-inch telescope

    NASA Astrophysics Data System (ADS)

    Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.

    2016-08-01

    SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design, and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.

  5. Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras

    NASA Technical Reports Server (NTRS)

    Joiner, Reyann; Kobayashi, Ken; Winebarger, Amy; Champey, Patrick

    2014-01-01

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument currently being developed by NASA's Marshall Space Flight Center (MSFC), the National Astronomical Observatory of Japan (NAOJ), and other partners. The goal of this instrument is to observe and detect the Hanle effect in the scattered Lyman-Alpha UV (121.6 nm) light emitted by the Sun's chromosphere. The polarized spectrum imaged by the CCD cameras will capture information about the local magnetic field, allowing for measurements of magnetic strength and structure. In order to make accurate measurements of this effect, the performance characteristics of the three on-board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include: quantum efficiency, gain, dark current, read noise, and linearity. Each of these must meet predetermined requirements in order to achieve satisfactory performance for the mission. The cameras must be able to operate with a gain of 2.0 ± 0.5 e-/DN, a read noise level less than 25 e-, a dark current level less than 10 e-/pixel/s, and a residual non-linearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.
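
    The camera requirements quoted above translate directly into pass/fail checks against measured values. A minimal sketch (the measured numbers below are illustrative placeholders, not CLASP test results):

```python
# Requirements from the abstract: gain 2.0 ± 0.5 e-/DN, read noise < 25 e-,
# dark current < 10 e-/pixel/s, residual non-linearity < 1%.
def meets_clasp_requirements(gain_e_per_dn: float,
                             read_noise_e: float,
                             dark_current_e_px_s: float,
                             nonlinearity_pct: float) -> bool:
    """True if all four measured characteristics satisfy the stated limits."""
    return (abs(gain_e_per_dn - 2.0) <= 0.5
            and read_noise_e < 25.0
            and dark_current_e_px_s < 10.0
            and nonlinearity_pct < 1.0)

# Placeholder measurements, for illustration only.
print(meets_clasp_requirements(2.1, 18.0, 4.5, 0.6))  # True
print(meets_clasp_requirements(2.8, 18.0, 4.5, 0.6))  # False: gain out of band
```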

  6. Beats: Video Monitors and Cameras.

    ERIC Educational Resources Information Center

    Worth, Frazier

    1996-01-01

    Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)

  7. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
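
    The zero-point calibration described above can be sketched as follows: given instrumental fluxes and catalog (here, synthetic) magnitudes for a set of reference stars, the zero-point is a robust average of the offsets between catalog and instrumental magnitudes. A minimal illustration (function and variable names are ours, not the MEO pipeline's):

```python
import math
from statistics import median

def zero_point(fluxes, catalog_mags):
    """Estimate the photometric zero-point ZP in mag = ZP - 2.5*log10(flux),
    using a median over the reference stars for robustness to outliers."""
    offsets = [m + 2.5 * math.log10(f) for f, m in zip(fluxes, catalog_mags)]
    return median(offsets)

# Synthetic example: fluxes generated from a known true zero-point of 20.0.
true_zp = 20.0
mags = [8.0, 9.5, 10.2, 11.1]
fluxes = [10 ** ((true_zp - m) / 2.5) for m in mags]
print(round(zero_point(fluxes, mags), 3))  # recovers 20.0
```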

  8. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  9. Upgrading and testing program for narrow band high resolution planetary IR imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Wattson, R. B.; Rappaport, S.

    1977-01-01

    An imaging spectrometer, intended primarily for observations of the outer planets, which utilizes an acoustically tuned optical filter (ATOF) and a charge coupled device (CCD) television camera, was modified to improve spatial resolution and sensitivity. The upgraded instrument has a spatial resolving power of approximately 1 arc second, as defined by an f/7 beam at the CCD position, and maintains this resolution over the 50 arc second field of view. Less vignetting occurs and sensitivity is four times greater. The spectral resolution of 15 A over the wavelength interval 6500 A - 11,000 A is unchanged. Mechanical utility has been increased by the use of a honeycomb optical table, mechanically rigid yet adjustable optical component mounts, and a camera focus translation stage. The upgraded instrument was used to observe Venus and Saturn.

  10. Microwave transient analyzer

    DOEpatents

    Gallegos, Cenobio H.; Ogle, James W.; Stokes, John L.

    1992-01-01

    A method and apparatus for capturing and recording indications of frequency content of electromagnetic signals and radiation is disclosed including a laser light source (12) and a Bragg cell (14) for deflecting a light beam (22) at a plurality of deflection angles (36) dependent upon frequency content of the signal. A streak camera (26) and a microchannel plate intensifier (28) are used to project Bragg cell (14) output onto either a photographic film (32) or a charge coupled device (CCD) imager (366). Timing markers are provided by a comb generator (50) and a one shot generator (52), the outputs of which are also routed through the streak camera (26) onto the film (32) or the CCD imager (366). Using the inventive method, the full range of the output of the Bragg cell (14) can be recorded as a function of time.

  11. Position-sensitive detection of ultracold neutrons with an imaging camera and its implications to spectroscopy

    DOE PAGES

    Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...

    2016-05-16

    Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀ and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.
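
    The quoted energy resolution follows directly from the relation δE = m₀gδx. A quick numerical check in SI units (the constants are standard reference values):

```python
# delta_E = m0 * g * delta_x : change in a neutron's gravitational potential
# energy over the detector's spatial resolution.
M0 = 1.674927e-27   # neutron rest mass, kg
G = 9.80665         # standard gravitational acceleration, m/s^2
EV = 1.602177e-19   # joules per electron-volt

def ucn_energy_resolution_peV(delta_x_m: float) -> float:
    """Energy resolution in pico-electron-volts for spatial resolution delta_x (m)."""
    return M0 * G * delta_x_m / EV * 1e12

print(round(ucn_energy_resolution_peV(15e-6), 2))  # ~1.54 peV, below the 2 peV quoted
```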

  12. Position-sensitive detection of ultracold neutrons with an imaging camera and its implications to spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles

    Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀ and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.

  13. Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua

    2017-03-01

    Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities - phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for red, green and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics including spectral sensitivity, detection linearity and spatial resolution. Mobile phone results were compared with a scientific CCD. The spatial resolution of the CCD system was consistently superior to that of the phone, and the green phone camera pixels showed better resolution than the blue or red channels. The CCD exhibited sensitivity similar to that of the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.

  14. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to monitor cerebral ischemia and the effect of anti-stroke therapy is highly desirable. Near infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebro-vascular hemodynamics and brain injury. As compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We demonstrate volumetric tomographic reconstruction results from tissue phantoms; the approach has great potential for determining and monitoring the effect of anti-stroke therapies.

  15. A CCD Spectrometer for One Dollar

    NASA Astrophysics Data System (ADS)

    Beaver, J.; Robert, D.

    2011-09-01

    We describe preliminary tests on a very low-cost system for obtaining stellar spectra for instructional use in an introductory astronomy laboratory. CCD imaging with small telescopes is now commonplace and relatively inexpensive. Giving students direct experience taking stellar spectra, however, is much more difficult, and the equipment can easily be out of reach for smaller institutions, especially if one wants to give the experience to large numbers of students. We have performed preliminary tests on an extremely low-cost (about $1.00) objective grating that can be coupled with an existing CCD camera or commercial digital single-lens reflex (DSLR) camera and a small telescope typical of introductory astronomy labs. With this equipment we believe it is possible for introductory astronomy students to take stellar spectra that are of high enough quality to distinguish between many MK spectral classes, or to determine standard B and V magnitudes. We present observational tests of this objective grating used on an 8" Schmidt-Cassegrain with a low-end, consumer DSLR camera. Some low-cost strategies for reducing the raw data are compared, with an eye toward projects ranging from individual undergraduate research projects to use by many students in a non-majors introductory astronomy lab. Toward this end we compare various trade offs between complexity of the observing and data reduction processes and the usefulness of the final results. We also describe some undergraduate astronomy education projects that this system could potentially be used for. Some of these projects could involve data-sharing collaborations between students at different institutions.

  16. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    PubMed Central

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753
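
    The frame-tagging idea described above, attaching to each video frame the sensor reading nearest in time, can be sketched as a simple timestamp join (field names and the sampling scheme are illustrative, not the authors' actual schema):

```python
import json
from bisect import bisect_left

def tag_frames(frame_times, sensor_samples):
    """Attach to each frame the sensor sample nearest in time.
    sensor_samples: list of (timestamp, reading dict), sorted by timestamp."""
    times = [t for t, _ in sensor_samples]
    tagged = []
    for ft in frame_times:
        i = bisect_left(times, ft)
        # choose the nearer of the two neighbouring samples
        if i > 0 and (i == len(times) or ft - times[i - 1] <= times[i] - ft):
            i -= 1
        tagged.append({"frame_time": ft, "sensors": sensor_samples[i][1]})
    return tagged

# Two frames tagged against two hypothetical temperature samples.
sensors = [(0.0, {"temp_c": 24.1}), (1.0, {"temp_c": 24.3})]
tags = tag_frames([0.04, 0.96], sensors)
print(json.dumps(tags))
```

    Serializing the tags as JSON alongside the video is one plausible transport format; the paper's semantic layer would sit on top of metadata like this.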

  17. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    PubMed

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.

  18. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  19. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
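
    The self-calibration described above estimates, among other intrinsics, radial distortion coefficients for the wide-angle lens. A minimal pure-Python sketch of the two-term Brown radial model that such a calibration fits (the coefficients below are made up for illustration; a GoPro-style lens typically shows strong barrel distortion, i.e. negative k1):

```python
def radial_distort(x, y, k1, k2):
    """Apply the two-term Brown radial distortion model to normalized
    image coordinates (x, y): x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Illustrative (not calibrated) coefficients.
xd, yd = radial_distort(0.5, 0.0, -0.3, 0.05)
print(xd, yd)  # with k1 < 0 the point is pulled toward the center: xd < 0.5
```

    Calibration software inverts this model numerically to produce the undistorted scenes the abstract mentions.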

  20. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  1. CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka

    2016-07-01

    In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  2. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for handling scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC2015 to show the processed videos.

  3. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  4. Turbulent Mixing and Combustion for High-Speed Air-Breathing Propulsion Application

    DTIC Science & Technology

    2007-08-12

    deficit (the velocity of the wake relative to the free-stream velocity), decays rapidly with downstream distance, so that the streamwise velocity is...switched laser with double-pulse option) and a new imaging system (high-resolution: 4008 × 2672 pixels, low-noise (cooled) Cooke PCO-4000 CCD camera). The...was designed in-house for high-speed low-noise image acquisition. The KFS CCD image sensor was designed by Mark Wadsworth of JPL and has a resolution

  5. Development of a CCD based solar speckle imaging system

    NASA Astrophysics Data System (ADS)

    Nisenson, Peter; Stachnik, Robert V.; Noyes, Robert W.

    1986-02-01

    A program to develop software and hardware for obtaining high angular resolution images of the solar surface is described. The program included the procurement of a charge-coupled device (CCD) imaging system; extensive laboratory and remote-site testing of the camera system; the development of a software package for speckle image reconstruction, which was eventually installed and tested at the Sacramento Peak Observatory; and experiments with the CCD system (coupled to an image intensifier) for low-light-level, narrow-spectral-band solar imaging.

  6. Modeling Pluto-Charon Mutual Events. 2; CCD Observations with the 60 in. Telescope at Palomar Mountain

    NASA Technical Reports Server (NTRS)

    Buratti, B. J.; Dunbar, R. S.; Tedesco, E. F.; Gibson, J.; Marcialis, R. L.; Wong, F.; Bennett, S.; Dobrovolskis, A.

    1995-01-01

    We present observations of 15 Pluto-Charon mutual events which were obtained with the 60 in. telescope at Palomar Mountain Observatory. A CCD camera and Johnson V filter were used for the observations, except for one event that was observed with a Johnson B filter, and another event that was observed with a Gunn R filter. We observed two events in their entirety, and three pairs of complementary mutual occultation-transit events.

  7. New nova candidate in M81

    NASA Astrophysics Data System (ADS)

    Henze, M.; Sala, G.; Jose, J.; Figueira, J.; Hernanz, M.

    2016-06-01

    We report the discovery of a new nova candidate in the M81 galaxy on 16x200s stacked R-filter CCD images, obtained with the 80 cm Ritchey-Chretien F/9.6 Joan Oro telescope at the Observatori Astronomic del Montsec, owned by the Catalan Government and operated by the Institut d'Estudis Espacials de Catalunya, Spain, using a Finger Lakes PL4240-1-BI CCD camera (with a Class 1 Basic Broadband coated 2k x 2k chip with 13.5 micron square pixels).

  8. Results of the IMO Video Meteor Network - June 2017, and effective collection area study

    NASA Astrophysics Data System (ADS)

    Molau, Sirko; Crivello, Stefano; Goncalves, Rui; Saraiva, Carlos; Stomeo, Enrico; Kac, Javor

    2017-12-01

    Over 18000 meteors were recorded by the IMO Video Meteor Network cameras during more than 7100 hours of observing time in 2017 June. The June Bootids were not detectable this year. Nearly 50 Daytime Arietids were recorded in 2017, and a first flux density profile for this shower in the optical domain is calculated using video data from the period 2011-2017. The effective collection area of video cameras is discussed in more detail.

  9. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.

  10. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night-vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications that require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts, including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  11. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  12. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.
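    The reported 82% sensitivity and 80% specificity follow from the standard confusion-matrix definitions; the sketch below shows the arithmetic on made-up counts consistent with those figures:

```python
# Standard confusion-matrix rates behind the reported switch performance.

def sensitivity(tp, fn):
    """True-positive rate: tongue protrusions correctly detected as
    switch activations (tp) out of all actual protrusions (tp + fn)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: non-protrusion frames correctly ignored (tn)
    out of all actual non-protrusions (tn + fp)."""
    return tn / (tn + fp)
```

    For example, 82 detected protrusions out of 100 actual ones gives the 82% sensitivity, and 80 correctly ignored non-protrusions out of 100 gives the 80% specificity.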

  13. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  14. Opportunistic traffic sensing using existing video sources (phase II).

    DOT National Transportation Integrated Search

    2017-02-01

    The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...

  15. Design of a frequency domain instrument for simultaneous optical tomography and magnetic resonance imaging of small animals

    NASA Astrophysics Data System (ADS)

    Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.

    2007-02-01

    We present a design for a frequency-domain instrument that allows for the simultaneous gathering of magnetic resonance and diffuse optical tomographic imaging data. This small-animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded-index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow the determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.

  16. The kinelite project. A new powerful motion analyser for spacelab and space station

    NASA Astrophysics Data System (ADS)

    Venet, M.; Pinard, H.; McIntyre, J.; Berthoz, A.; Lacquaniti, F.

    The goal of the Kinelite Project is to develop a space-qualified motion analysis system to be used in space by the scientific community, mainly to support neuroscience protocols. The measurement principle of Kinelite is to determine, by triangulation, the 3D positions of small, lightweight, reflective markers positioned at the different points of interest. The scene is illuminated by infrared flashes, and the reflected light is acquired by up to 8 pre-calibrated and synchronized CCD cameras. The main characteristics of the system are: camera field of view: 45°; number of cameras: 2 to 8; acquisition frequency: 25, 50, 100 or 200 Hz; CCD format: 256 × 256; number of markers: up to 64; 3D accuracy: 2 mm; main dimensions: 45 cm × 45 cm × 30 cm; mass: 23 kg; power consumption: less than 200 W. Kinelite will first fly aboard the NASA Spacelab; it will be used during the NEUROLAB mission (4/98) to support the "Frames of References and Internal Models" experiment (Principal Investigator: Prof. A. Berthoz; Co-Investigators: J. McIntyre, F. Lacquaniti).
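    The triangulation principle behind such marker-based systems reduces, in two dimensions, to intersecting the rays that two calibrated cameras cast towards a marker. The camera positions and ray directions below are illustrative, not Kinelite calibration data:

```python
# Hedged 2D sketch of marker triangulation: each calibrated camera
# defines a ray p + t*d towards the marker; the marker sits where the
# two rays intersect. Solving p1 + t1*d1 = p2 + t2*d2 by Cramer's rule.

def intersect_rays(p1, d1, p2, d2):
    """Intersect two 2D rays; returns the point, or None if parallel."""
    # z-component of the 2D cross product of the direction vectors
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel rays: this camera pair cannot triangulate
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * d2[1] - ry * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

    In 3D, with up to 8 cameras, the rays generally do not intersect exactly because of pixel quantization, so the marker position is instead taken as the least-squares point closest to all rays, which is how a quoted accuracy figure such as 2 mm arises.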

  17. Miniature Spatial Heterodyne Raman Spectrometer with a Cell Phone Camera Detector.

    PubMed

    Barnett, Patrick D; Angel, S Michael

    2017-05-01

    A spatial heterodyne Raman spectrometer (SHRS) with millimeter-sized optics has been coupled with a standard cell phone camera as a detector for Raman measurements. The SHRS is a dispersive-based interferometer with no moving parts, and the design is amenable to miniaturization while maintaining high resolution and a large spectral range. In this paper, a SHRS with 2.5 mm wide diffraction gratings has been developed with 17.5 cm-1 theoretical spectral resolution. The footprint of the SHRS is orders of magnitude smaller than the footprint of the charge-coupled device (CCD) detectors typically employed in Raman spectrometers, thus smaller detectors are being explored to shrink the entire spectrometer package. This paper describes the performance of the SHRS with a cell phone camera detector, using only the cell phone's built-in optics to couple the output of the SHRS to the sensor. Raman spectra of a variety of samples measured with the cell phone are compared to measurements made using the same miniature SHRS with high-quality imaging optics and a high-quality, scientific-grade, thermoelectrically cooled CCD.

  18. New technology and techniques for x-ray mirror calibration at PANTER

    NASA Astrophysics Data System (ADS)

    Freyberg, Michael J.; Budau, Bernd; Burkert, Wolfgang; Friedrich, Peter; Hartner, Gisela; Misaki, Kazutami; Mühlegger, Martin

    2008-07-01

    The PANTER X-ray Test Facility has been utilized successfully for developing and calibrating X-ray astronomical instrumentation for observatories such as ROSAT, Chandra, XMM-Newton, and Swift. Future missions like eROSITA, SIMBOL-X, or XEUS require improved spatial resolution and a broader energy band pass, both for optics and for cameras. Calibration campaigns at PANTER have made use of flight-spare instrumentation for space applications; here we report on a new dedicated CCD camera for on-ground calibration, called TRoPIC. As the CCD is similar to the ones used for eROSITA (pn-type, back-illuminated, 75 μm pixel size, frame-store mode, 450 μm wafer thickness, etc.), it can serve as a prototype for eROSITA camera development. New techniques enable and enhance the analysis of measurements of eROSITA shells or silicon pore optics. Specifically, we show how sub-pixel resolution can be utilized to improve spatial resolution and subsequently the characterization of mirror shell quality, and of point spread function parameters in particular, which is also relevant for the position reconstruction of astronomical sources in orbit.

  19. Performance measurement of commercial electronic still picture cameras

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te

    1998-06-01

    Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed-pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). The evaluation results of individual color components and the luminance signal from a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
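    The fixed-pattern-noise figure rests on the standard PSNR definition; a minimal sketch, assuming the test image is a flat field whose deviation from the mean level is treated as fixed-pattern noise (the flat-field values and 8-bit peak are illustrative, not the paper's data):

```python
# Sketch of a PSNR computation for fixed-pattern noise on a flat-field
# capture: PSNR(dB) = 10 * log10(peak^2 / MSE), with the MSE taken
# against the image's own mean level.
import math

def psnr_flat_field(pixels, peak=255.0):
    """PSNR (dB) of a flat-field capture; higher means less fixed-pattern
    noise. A perfectly uniform image returns infinity."""
    mean = sum(pixels) / len(pixels)
    mse = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

    The OECF and MTF measurements are analogous curve constructions (signal level versus exposure, and modulation versus spatial frequency, respectively) rather than single figures, which is why the paper reports them as functions.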

  20. Manned observations technology development, FY 1992 report

    NASA Technical Reports Server (NTRS)

    Israel, Steven

    1992-01-01

    This project evaluated the suitability of the NASA/JSC developed electronic still camera (ESC) digital image data for Earth observations from the Space Shuttle, as a first step to aid planning for Space Station Freedom. Specifically, image resolution achieved from the Space Shuttle using the current ESC system, which is configured with a Loral 15 mm x 15 mm (1024 x 1024 pixel array) CCD chip on the focal plane of a Nikon F4 camera, was compared to that of current handheld 70 mm Hasselblad 500 EL/M film cameras.
